Channel: File Services and Storage forum

Bad DFSR performance


Hi,

I have 2 file servers (1x 2016 and 1x 2012 R2) connected via 1 Gbit/s and set up with DFS Replication. The share holds 13 TB and the staging size is 1 TB (on each machine). I set up replication 5 weeks ago and so far only 3 TB have been replicated. There are no errors in the DFSR event log. Once per week I get warning 4202 (staging space above the high watermark), but this should not be a problem as it happens only once per week.

What's wrong? I'm familiar with DFS Replication and have already checked https://blogs.technet.microsoft.com/askds/2007/10/05/top-10-common-causes-of-slow-replication-with-dfsr/
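
For reference, this is roughly how I've been checking the backlog between the two members. Treat it as a sketch: the replication group, folder and server names below are placeholders for mine.

# Count the files still waiting to replicate from the 2016 member to the 2012 R2 member
# (I believe Get-DfsrBacklog only returns the first 100 files; -Verbose reports the full count)
Get-DfsrBacklog -GroupName "FileShare-RG" -FolderName "Share" `
    -SourceComputerName "FS-2016" -DestinationComputerName "FS-2012R2" |
    Measure-Object | Select-Object Count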

Regards
Miranda


iSCSI session disappears after restarting server


Hi,

I'm using an iSCSI storage device that is connected to the network via two interfaces. One interface uses IP address 192.168.101.8 and the second one 172.30.0.250. The second one is a bonded interface and I do not want to make an iSCSI connection via that interface. So I've set up an iSCSI connection on my Windows Server 2012 R2 with the target being 192.168.101.8. This works fine, but as soon as I restart the server the target is set back to 172.30.0.250.

In the iSCSI Initiator, the Targets tab shows the target as connected. When I click Properties I notice that the identifier I created is gone and has been replaced with the old identifier. So the session I added manually is removed on restart, and the original session with the wrong target is restored.

How can I make sure it keeps this session even after restarting the server?
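
In case it helps, this is the kind of thing I've been trying from PowerShell. It's only a sketch: the IQN below is a placeholder for my target's real node address, and I'm not certain that clearing the favorites list this way is the right fix.

# Show the current targets, drop the existing favorites, then reconnect through the
# wanted portal only and make that session persistent
Get-IscsiTarget | Select-Object NodeAddress, IsConnected
Get-IscsiSession | Unregister-IscsiSession
Connect-IscsiTarget -NodeAddress "iqn.2000-01.com.example:storage.target01" `
    -TargetPortalAddress "192.168.101.8" -IsPersistent $true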

Thanks

Joeri

Missing VSS System Writer and CAPI2 error in Event Log

Hello,

I'm having problems making a full system backup of Windows 2008 R2 x64. It looks like this is related to a missing VSS System Writer. When I run the command "vssadmin list writers" there is no System Writer in the writers list, and a CAPI2 error (event ID 513) appears in the event log with this description:
Cryptographic Services failed while processing the OnIdentity() call in the System Writer Object.

Details:

TraverseDir : Unable to push subdirectory.

System Error:

Unspecified error
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Microsoft-Windows-CAPI2" Guid="{5bbca4a8-b209-48dc-a8c7-b23d3e5216fb}" EventSourceName="Microsoft-Windows-CAPI2" />
<EventID Qualifiers="0">513</EventID>
<Version>0</Version>
<Level>2</Level>
<Task>0</Task>
<Opcode>0</Opcode>
<Keywords>0x8080000000000000</Keywords>
<TimeCreated SystemTime="2010-03-14T01:06:35.639125000Z" />
<EventRecordID>207975</EventRecordID>
<Correlation />
<Execution ProcessID="968" ThreadID="11588" />
<Channel>Application</Channel>
<Computer>System3</Computer>
<Security />
</System>
<EventData>
<Data>Details: TraverseDir : Unable to push subdirectory. System Error: Unspecified error</Data>
</EventData>
</Event>
Any idea what could be wrong?
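
For reference, this is what I've been checking, plus the service restart I'm considering since (as far as I know) the System Writer is hosted by the Cryptographic Services service. I haven't confirmed that this actually brings the writer back.

# List the writers and look for the missing System Writer
vssadmin list writers | Select-String "System Writer"

# Restart Cryptographic Services, then re-check the writer list
Restart-Service -Name CryptSvc
vssadmin list writers | Select-String "System Writer"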

Thanks in advance

How to migrate a Work Folders server?


Hi,

I have some servers running 2012 R2 with the Work Folders feature active. They will be replaced by 2016 machines. Is there a how-to for migrating the Work Folders feature?

The sync shares are created with file access only for the users themselves. The Work Folders URL is distributed to the clients by GPO and setup is enforced. Clients are Windows 10 1709 and 1803.

Before Windows 10 1803 the clients had all the data locally and changing the sync share was easy: clients simply resynchronized their files to the new, empty share. Now, with the new features, most files stay on the sync share server and only a few files are available offline.
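
For context, recreating the sync share on a 2016 machine would presumably look something like the sketch below (the share name, path and group are placeholders). What I can't figure out is how to carry the existing data and per-user sync state across.

# On the new 2016 server, after installing the Work Folders role service
New-SyncShare -Name "WF-Users" -Path "D:\SyncShares\WF-Users" -User "DOMAIN\WorkFolderUsers"
Get-SyncShare   # verify the share was created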

So how do I handle this? 

Thanks, Ingo

Max dedup volumes running simultaneously


Hello,

I have 3 volumes of 4 TB each. Each of them has a scheduled dedup job that runs at 6 am, and when the scheduled jobs start they all run simultaneously. But if I have to run a manual dedup job on a newly created 4th volume while the initial 3 volumes are in the process of deduplicating, the job for the 4th volume goes into the queued state. It stays queued until one of the other volumes has finished. However, if I add the newly created 4th volume as a scheduled dedup job, then all 4 volumes run at the same time without issues. Does anyone know how to force-start a dedup job without it being queued?
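
For reference, this is how I start the manual job and watch it sit in the queue (F: is an example drive letter). I've seen the -Preempt switch mentioned as a way to push a manual job ahead, but I haven't verified that it avoids the queueing.

# Manual optimization job on the new volume
Start-DedupJob -Volume "F:" -Type Optimization -Preempt

# Show all dedup jobs and whether they are running or queued
Get-DedupJob | Format-Table Volume, Type, State, Progress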

Thank you.


Ishan

ID Management for Unix for SMB and NFS cross protocol - Alternative Mapping Solution?


Per the link below, IDMU is deprecated in Windows Server 2016 and up. Is there a simple alternative solution to map UNIX IDs to Windows IDs so that I can share my files over both SMB and NFS? Without a mapping store, file permissions would not be honored correctly between UNIX users and Windows users.

https://blogs.technet.microsoft.com/activedirectoryua/2016/02/09/identity-management-for-unix-idmu-is-deprecated-in-windows-server/

According to the blog post (and several other sources), the UID/GID attributes will still exist moving forward, but there's no mention of how to set up the mapping alternatives. It also alludes to using the UID/GID attributes with Server for NFS for ID mapping, but I'm trying to find steps for doing this.

"For example, you may require the RFC 2307 attributes in combination with Network File System (NFS) Server (which does not require NIS Server role to be installed on Windows Server) to map the identity."

Any help with the steps for doing this would be greatly appreciated.

Windows Server 2016 Domain admin permission issue


Hi all,

I'm having a weird issue in Server 2016 in regard to share and NTFS permissions.

When I create a folder on my file server (D: partition) and change nothing, I can access all the folders and files in it. If, however, I disable inheritance to modify the permissions and then change the share and NTFS permissions, I get an error (as local and domain admin) saying I cannot access the folder. I get a prompt, and when I choose Continue it adds the specific account (either the local or the domain admin) to the NTFS permissions. The share and NTFS permissions are as follows:

Share:
User group - Modify
Domain admins - Full Control

NTFS:
User group - Modify
Administrators - Full Control*
Domain Admins - Full control

* It doesn't matter whether I give permissions only to Administrators (of which Domain Admins is a member) or only to Domain Admins when I'm logged into a domain admin account.

The ownership of the folder does not influence anything either; even when my account owns the folder I still get the prompt. When I change the owner of the folder to, for instance, Domain Admins, I'm suddenly locked out of it altogether (even though my account is a domain admin).

Turning off UAC does not change anything either. Neither does re-adding the server to the domain, nor does formatting the hard disk (the issue can be recreated on multiple drives).

My other Server 2016 machine, in the same domain, also has this issue. Yet another Server 2016 machine, in a different domain that I administer, doesn't have this issue at all. I have even recreated the domain to check whether the issue is domain-related; it turns out not to be.
Does anyone else have similar problems with NTFS permissions not being applied properly? Is this a known bug in 2016 after a specific update?
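
For reference, this is how I've been inspecting the resulting ACL and my token when the prompt appears (the path is just an example folder on the D: partition):

# Dump the NTFS ACL of the affected folder
icacls D:\Shares\TestFolder

# Check which admin groups are actually in my current token
whoami /groups | findstr /i "admin"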

If anyone has any tips, I'd very much appreciate the help.

Kind regards,

Ranko

Edit: I've tried SFC as well; it did not help. A clean install of Windows did not help either (it was updated right after installation, so I cannot rule out a bug introduced by an update).

Permissions not applied after copying to CIFS share


We are trying to copy files from Windows 2008 R2 to a CIFS share hosted on a domain-joined HP StoreOnce device.

We are using Robocopy as well as RichCopy with the permission-transfer options; the following command was used:

robocopy.exe $Source $Destination /E /ZB /COPY:DATSOU /SEC /R:3 /W:3 /MT /TEE

Files are getting copied, but the permissions are not being carried over.

The same command used to copy to another drive on the same server works; the copy succeeds with permissions intact.
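
For what it's worth, this is how I've been comparing the ACLs after the copy (the paths are examples for one of the copied files):

# Compare the security descriptor on the source file and on the StoreOnce share
(Get-Acl "D:\Source\Reports\file.docx").Sddl
(Get-Acl "\\storeonce01\share\Reports\file.docx").Sddl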

Any help will be appreciated!!


Event ID 12293 & 8193 VSS failures on Server 2008 R2


I continue to receive 12293 & 8193 errors in the event log every time Backup Exec 2010 R3 runs on our 2008 R2 server. Volume Shadow Copy runs fine on its own without error. When the backup software runs and VSS is accessed to back up the system state, it fails. Any help with this issue would be appreciated.
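
For context, this is how I've been checking the shadow copy storage, together with the resize I'm considering as a possible response to the 8193 "diff area" message. The drive letters and size are examples and I haven't applied the change yet.

# Show which volume holds the diff area and how large it may grow
vssadmin list shadowstorage

# Possible fix: give the diff area more room on the affected volume
vssadmin resize shadowstorage /for=C: /on=C: /maxsize=10%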

Event Log Messages

Log Name:      Application
Source:        VSS
Date:          4/11/2012 9:49:55 PM
Event ID:      12293
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      constructserv.corp.ontargetservices.com
Description:
Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details EndPrepareSnapshots({aaba5aba-6109-41af-a650-6864045cb0dc}) [hr = 0x80042302, A Volume Shadow Copy Service component encountered an unexpected error.
Check the Application event log for more information.
].

Operation:
   Executing Asynchronous Operation

Context:
   Current State: DoSnapshotSet
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="VSS" />
    <EventID Qualifiers="0">12293</EventID>
    <Level>2</Level>
    <Task>0</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2012-04-12T01:49:55.000000000Z" />
    <EventRecordID>34643</EventRecordID>
    <Channel>Application</Channel>
    <Computer>constructserv.corp.ontargetservices.com</Computer>
    <Security />
  </System>
  <EventData>
    <Data>{b5946137-7b9f-4925-af80-51abd60b20d5}</Data>
    <Data>EndPrepareSnapshots({aaba5aba-6109-41af-a650-6864045cb0dc})</Data>
    <Data>0x80042302, A Volume Shadow Copy Service component encountered an unexpected error.
Check the Application event log for more information.
</Data>
    <Data>

Operation:
   Executing Asynchronous Operation

Context:
   Current State: DoSnapshotSet</Data>
    <Binary>2D20436F64653A20434F52534E50534330303030313632342D2043616C6C3A20434F52534E50534330303030313630352D205049443A202030303030323333322D205449443A202030303030353136382D20434D443A2020433A5C57696E646F77735C73797374656D33325C76737376632E6578652020202D20557365723A204E616D653A204E5420415554484F524954595C53595354454D2C205349443A532D312D352D313820</Binary>
  </EventData>
</Event>


Log Name:      Application
Source:        VSS
Date:          4/11/2012 9:49:55 PM
Event ID:      8193
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      constructserv.corp.ontargetservices.com
Description:
Volume Shadow Copy Service error: Unexpected error calling routine Cannot find anymore diff area candidates for volume\\?\Volume{74408b43-8402-11df-97b6-806e6f6e6963}\ [0].  hr = 0x8000ffff, Catastrophic failure
.

Operation:
   Automatically choosing a diff-area volume
   Processing EndPrepareSnapshots

Context:
   Volume Name: \\?\Volume{74408b43-8402-11df-97b6-806e6f6e6963}\
   Execution Context: System Provider
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="VSS" />
    <EventID Qualifiers="0">8193</EventID>
    <Level>2</Level>
    <Task>0</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2012-04-12T01:49:55.000000000Z" />
    <EventRecordID>34642</EventRecordID>
    <Channel>Application</Channel>
    <Computer>constructserv.corp.ontargetservices.com</Computer>
    <Security />
  </System>
  <EventData>
    <Data>Cannot find anymore diff area candidates for volume\\?\Volume{74408b43-8402-11df-97b6-806e6f6e6963}\ [0]</Data>
    <Data>0x8000ffff, Catastrophic failure
</Data>
    <Data>

Operation:
   Automatically choosing a diff-area volume
   Processing EndPrepareSnapshots

Context:
   Volume Name: \\?\Volume{74408b43-8402-11df-97b6-806e6f6e6963}\
   Execution Context: System Provider</Data>
    <Binary>2D20436F64653A20535052414C4C4F4330303030313137342D2043616C6C3A20535052414C4C4F4330303030303739302D205049443A202030303030323130302D205449443A202030303030373031322D20434D443A2020433A5C57696E646F77735C53797374656D33325C737663686F73742E657865202D6B2073777072762D20557365723A204E616D653A204E5420415554484F524954595C53595354454D2C205349443A532D312D352D313820</Binary>
  </EventData>
</Event>


Storage Spaces Direct (S2D): how to create more than one pool


Would someone please provide detailed instructions, or a link to instructions, on how to create more than one Pool in S2D?

I have seen many requests, and responses saying it can be done, but no directions on how.
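
For reference, the generic storage cmdlets I would expect to be involved are sketched below (the pool name is a placeholder). I haven't confirmed whether creating a second pool this way is actually supported on an S2D cluster, which is really what I'm asking.

# Take the remaining poolable disks and create a second pool on the clustered subsystem
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool2" `
    -StorageSubSystemFriendlyName "Clustered Windows Storage*" `
    -PhysicalDisks $disks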

Thanks,

Todd



Todd Hunter

2012R2 Errors Responding to SMB Connections


Hello everyone! I'm going crazy because of this problem.
I have a virtual 2012 R2 file server on a VMware host that randomly has problems with shared folders. We have just redone the network and there are no particular problems with it, so I do not think the network is the cause.

In the Microsoft-Windows-SMBServer/Operational event log I found the error reported below, and its timing coincides perfectly with the user reports.

Can you help me understand what might be causing the problem, or give me some suggestions, please?

Reopening failed.

Client name: \\10.2.0.62
Client address: 10.2.0.62:62961
Username: YYY\xxx
Session ID: 0x4C0354000025
Share name: Documents
File name: Swing\Report Cells\DBCELLE.ldb
Recovery key: {00000000-0000-0000-0000-000000000000}
Status: The object name cannot be found. (0xC0000034)
RKF Status: STATUS_SUCCESS (0x0)
Durable: false
Resilient: false
Permanent: false
Reason: Reconnect durable file

Additional information:

The client attempted to reopen an available handle continuously, but the attempt failed. This usually indicates a problem with the network or with the underlying file being reopened.
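
For completeness, this is how I've been pulling these events to match them against the user reports (the filter on the message text is just what I used, not anything official):

# Pull the most recent reopen failures from the SMBServer operational log
Get-WinEvent -LogName "Microsoft-Windows-SMBServer/Operational" -MaxEvents 200 |
    Where-Object { $_.Message -like "*Reopening failed*" } |
    Format-List TimeCreated, Message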


Server 2016 Work Folders and Windows 10 1803 Sync Question


We've just deployed Work Folders in our organisation and so far we're loving it. It solves a lot of issues and enhances our users' experience.

I have a question about .ini files. 

We have essentially created a work folder on their laptops containing C:\Users\Username\Work Folders\...

  • Documents
  • Desktop
  • Downloads

We've used GP to redirect their associated folders to this new location using folder redirection. This allows remote workers to save and open files without the files going over the WAN every time, as we've set them to always be available offline.

One big problem we have at the moment is with the sync icons. In File Explorer, Desktop, Downloads and Documents show with blue sync arrows. If I click into those folders, everything has green ticks. This simple GUI glitch makes users think it's not synchronising correctly. My understanding is that this behaviour is caused by the desktop.ini file: Work Folders ignores .ini and .tmp files, so if a user has those, the top-level folder shows as continuously syncing even though it isn't.

Now, I'm new to this feature, and as far as I understand it has been around since Windows Server 2012 R2. Does this always happen? Is it normal behaviour to show the top-level folders with the blue sync arrows even though everything inside has synced, or is it a relatively new GUI bug that affects Windows 10 1803?

You can replicate this behaviour simply by adding a .ini file inside a work folder. The top-level folder will show the syncing icons, while the Sync control panel shows green: synchronised.
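
For example, this is all it takes to reproduce it on a test machine (the path matches our redirected Documents folder):

# Create an empty .ini file inside a synced Work Folders location; the parent folder
# then shows the blue sync arrows even though everything inside is synced
New-Item -Path "$env:USERPROFILE\Work Folders\Documents\repro.ini" -ItemType File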

Does anybody have a workaround so I don't throw off the user experience? I'm getting support calls because they think it's not syncing.

Thank you

DFS Namespace on Azure VM and Azure Client VPN


Hi All,

I've created a new DFS namespace (\\company.local\Public) on Windows Server 2012 R2 in Microsoft Azure. As a reference I have used the following blog post: http://clemmblog.azurewebsites.net/high-available-file-share-in-windows-azure-using-dfs/

The next step was to connect a (domain-joined) laptop to the Azure virtual network using the Azure VPN client. After connecting I can browse to the file servers and the shares present on them, but when I try to browse to the shares in the DFS namespace (\\company.local\Public\Data) I get the following error:

But when I ping the namespace (company.local) I get a reply.

 

I can also use Remote Desktop to connect to servers in my environment with the Azure VPN client active; it only falls apart when I try to browse to the DFS share. At the moment we're rather lost for ideas, so any hints or tips are welcome :)

P.S. I have configured DFSN to use the FQDN for use with workgroup machines.
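
For reference, the FQDN configuration was applied roughly like this (the server name is a placeholder for my namespace server):

# Make the namespace server hand out FQDN-based referrals, then check the root
Set-DfsnServerConfiguration -ComputerName "AZ-DFS01" -UseFqdn $true
Get-DfsnRoot -ComputerName "AZ-DFS01"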

The DFS Namespaces service failed


Good afternoon,

I have a few users who get this error randomly; it can happen multiple times a day or every other day:

"The DFS Namespaces service failed to initialise the shared folder that hosts the namespace root. Shared Folder: DFS

These are drives mapped by Group Policy.

Restarting does bring them back.

I had a constant ping test running from one of the machines to the DFS server for 2 days, against both the IP and the hostname. It showed no disconnections and no packet loss, yet the problem persisted.

After it happened I flushed the DNS cache and did a DNS lookup on the DFS server, which cached the new DNS records, but this still didn't resolve the issue; the drive was still not accessible.

The actual error the user gets is:

"Windows cannot access - Check the spelling of the name. Otherwise, there might be a problem with your network. To try to identify and resolve network problems, click Diagnose."
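
For reference, these are the checks I've been running from an affected client while the error is showing (the server and domain names are placeholders):

# Confirm SMB connectivity and name resolution to the namespace server
Test-NetConnection -ComputerName "DFS-SRV01" -Port 445
nslookup domain.local

# Clear the local resolver cache before retrying the mapped drive
ipconfig /flushdns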

Any ideas?

The user machines are running Windows 10.

The DFS server is Windows 2012 R2.

New 2016 file server no longer shows the "File locked for editing by user..." message in Word/Excel


Hello all! I have an odd issue I can't quite put my finger on. We recently replaced a Server 2008 Standard file server with a Windows 2016 Datacenter one. I have DFS set up for a number of shares. My employees on site connecting to this new server for the share have informed me that instead of getting the usual "This file is locked for editing by..." message when someone is working in an Office file, they now get a "Sorry, we couldn't open 'file path/name'" information box followed by a "Microsoft Excel cannot access the file 'file path/name'. There are several possible reasons:" warning box...

If I force a pair of users to connect to my alternate, off-site server via the DFS settings on the client machine and have each open, say, an Excel file, the behavior is normal: one can open the file, and the other sees the information box that the file is locked for editing and gets the option to open it read-only. The alternate server is running Server 2008 R2 Enterprise.

I'm assuming something has changed in the setup process for DFS/file shares on Server 2016 compared to previous versions that I'm missing. Every topic I've found while searching about this issue points to oplock settings, but these were never configured on any other server. Thoughts?
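
For reference, this is how I've been checking on the new 2016 server whether the first user's handle is actually registered when the second user gets the error (the file name is an example):

# List open handles on the server for the file in question
Get-SmbOpenFile | Where-Object { $_.Path -like "*Budget.xlsx" } |
    Format-List ClientComputerName, ClientUserName, Path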


Work Folders: 0x8002802b Element not found


We're experiencing some issues with Work Folders on a cluster today after moving a VM from one host to another.

We have 2 servers in the cluster with the role running on one node. 

We have had the current configuration for 2 weeks now without any problems. After moving node 2 to another host (they're both on separate hosts) we had some strange issues that proved to be caused by different vSphere versions on the hosts (ESXi 5.1 and 5.5). During this faulty configuration we had critical errors when trying to fail the role over from one node to the other; by taking both nodes down and starting one of them it would work. This was obviously because of the different vSphere versions, which has since been resolved.

Now clients receive this error when the role is failed over to node 1:

Sync failed. Work Folders path: C:\Users\xx\Work Folders; Error: (0x8002802b) Element not found.

When the role is on node 2 the clients receive this error:

Error code: An unexpected error occurred. (0x80041289)

Our environment is almost configured according to: 

http://blogs.technet.com/b/filecab/archive/2013/11/06/work-folders-on-clusters.aspx

We have two disks: one quorum witness and one shared storage disk between the servers.
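
For reference, this is what I've been checking on each node after failing the role over (the role name is a placeholder for ours):

# Confirm which node owns the Work Folders role and that the sync shares are visible there
Get-ClusterGroup -Name "WorkFolders" | Format-List State, OwnerNode
Get-SyncShare | Format-List Name, Path, User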


Any help would be appreciated. If further information is needed, please ask.

Volume Deduplication


I have a virtual file server in my environment that has a drive filling up.  My usual process is to run WinDirStat to see what can be cleaned up before provisioning more storage. 

To my dismay, I found out that data deduplication is turned on for this volume! I can't see which users (this is a home directory share) are actually consuming the most data. Is there any way to tell when this was turned on and by whom? Is it ever enabled automatically?

When I look in Server Manager I can see "Deduplication Savings"; however, File Explorer shows the disk at 80% capacity, with the chunk store taking up the majority of the actual space inside the hidden System Volume Information folder.
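
For what it's worth, this is how I've been checking the dedup state from PowerShell (the drive letter is mine). It shows the savings and the last optimization time, but I still can't see when or by whom it was enabled.

# Dedup settings and savings for the volume
Get-DedupVolume -Volume "E:" | Format-List Enabled, SavedSpace, SavingsRate

# Last time the optimization job actually ran
Get-DedupStatus -Volume "E:" | Format-List LastOptimizationTime, OptimizedFilesCount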

I cloned the file server, then disabled dedup on the drive in Server Manager and ran the following PowerShell command:

Start-DedupJob -Volume "E:" -Type Unoptimization

This command causes the server to blue screen.

Any suggestions?


JLC

Server Service and SMB


Do SMB1 and SMB2 require the Server service to function? In other words, without the Server service running, will SMB connections to the server not work at all?
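
For context, this is what I've been looking at on my own server; I'm assuming LanmanServer is the Server service in question.

# Status of the Server service and which services depend on it
Get-Service -Name LanmanServer | Format-List Name, Status, DependentServices

# Which SMB dialects are currently enabled on the server
Get-SmbServerConfiguration | Format-List EnableSMB1Protocol, EnableSMB2Protocol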

Could not create storage pool: "the drive cannot find the sector requested"


The server is a Windows 2016 VM hosted on ESXi 5.5.

1. Added a new raw disk to the VM.

2. From the Disk Management console, the disk was initialized and formatted.

3. From the Server Manager console, trying to create a storage pool fails with the error below ...



Kindly assist. I have tried replacing the disk altogether, changing the path, re-formatting, ...
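
For reference, this is what I've been checking from PowerShell before retrying the pool creation; as far as I understand it is roughly what Server Manager does under the hood.

# Show whether the new disk is eligible for pooling and, if not, why
Get-PhysicalDisk |
    Format-Table FriendlyName, OperationalStatus, HealthStatus, CanPool, CannotPoolReason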

