File Services and Storage forum

SMB witness client/server error


Hi, 

I have been researching this SMB witness issue for quite a while. Hopefully someone here can shed some light; otherwise I guess I will have to open a case with Microsoft. Thank you in advance.

* Errors on the SMB server (Witness Client Admin):

  Witness Client failed to unregister from Witness Server xxxx for NetName zzzz with error (The operation identifier is not valid)

  Witness Client received error (The parameter is incorrect) from witness server xxx.xx.xx.xxx for NetName \\xxxxxxx

-----------------------------------------------------------------------------------------------------

* Error on the SMB client (SMBWitnessClient): Witness Client received error (The parameter is incorrect) from witness server xxx.xx.xx.xxx for NetName \\xxxxxxx

**********************************

What I have done so far: 

1. Disabled SMB Multichannel. We have 3 NICs on each cluster node, and only 1 NIC is supposed to be used for SMB traffic. I therefore tried disabling SMB Multichannel to see if it would fix this issue by not confusing SMB routing (see the sketch after the commands below).

2. Adjusted "Adapters and Bindings", moving the SMB traffic NIC to the top of the binding order.

3. Increased the cluster heartbeat (UDP packet) interval and threshold:

Cluster /prop SameSubnetDelay=2000 
Cluster /prop SameSubnetThreshold=10
Cluster /prop CrossSubnetDelay=4000 
Cluster /prop CrossSubnetThreshold=10
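
For reference, SMB Multichannel (step 1) can be toggled with the stock SMB cmdlets, and the heartbeat properties above have a PowerShell equivalent; a minimal sketch, to be run on a cluster node:

# Disable SMB Multichannel on the server and client side (reversible with $true)
Set-SmbServerConfiguration -EnableMultiChannel $false -Force
Set-SmbClientConfiguration -EnableMultiChannel $false -Force

# PowerShell equivalent of the Cluster /prop heartbeat settings above
(Get-Cluster).SameSubnetDelay = 2000
(Get-Cluster).SameSubnetThreshold = 10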

So far I'm still getting the non-stop SMB errors. I would really appreciate it if anyone has a solution to this SMB issue.

Thank you



Can't take ownership of files with takeown; getting Access Denied


Good afternoon all,

I have some files that I can't take ownership of. I've tried everything and searched a lot of TechNet and the rest of Google.

When I use:

Takeown /f *.* /r /a /d y

I get about 4 success messages and 16 "INFO: Access is denied." messages.

I need to copy/move the files to new storage. Can someone help me, please?

I am a member of the Administrators group.
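
For what it's worth, a common follow-up once ownership is taken is to also grant the Administrators group access before copying; a sketch from an elevated prompt (D:\Data is a placeholder path):

# Take ownership recursively, assigning it to the Administrators group
takeown /f D:\Data /r /a /d y
# Grant Administrators full control so the files can actually be read and copied
icacls D:\Data /grant Administrators:F /t /c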

NTFS permissions removed when deleting a subfolder


Hello everyone,

I do have a very strange issue with my file server. Let me first describe the infrastructure.

OS: Windows Server 2016
Roles: File and Storage Services
Type: Member of a 2016 Domain

On the file server I have the following structure/permissions:

  • F:\
    • ANWDTest
      • ZZZ
        • DIMS
        • Wagenbuch

The NTFS permissions on those folders are as follows:

  • ANWDTest
    Inheritance disabled
    CREATOR OWNER - Full control - Subfolders and files only
    SYSTEM - Full control - This folder, subfolders and files
    Administrators - Full control - This folder, subfolders and files
    L_NTFS_J_R - Read & execute - This folder only

  • ZZZ
    Inheritance enabled
    L_NTFS_J_ZZZ_R - Read & execute - This folder only

  • DIMS
    Inheritance enabled
    L_NTFS_J_ZZZ_DIMS_R - Read & execute - This folder, subfolders and files
    L_NTFS_J_ZZZ_DIMS_W - Modify - This folder, subfolders and files

  • Wagenbuch
    Inheritance enabled
    L_NTFS_J_ZZZ_Wagenbuch_R - Read & execute - This folder, subfolders and files
    L_NTFS_J_ZZZ_Wagenbuch_W - Modify - This folder, subfolders and files
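
(For anyone trying to reproduce this, the layout above corresponds roughly to the following icacls sketch; the group names are the ones from the post and must already exist:)

# Break inheritance on the top folder, dropping the inherited entries
icacls F:\ANWDTest /inheritance:r
icacls F:\ANWDTest /grant "CREATOR OWNER:(OI)(CI)(IO)F" "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F" "L_NTFS_J_R:RX"
icacls F:\ANWDTest\ZZZ /grant "L_NTFS_J_ZZZ_R:RX"
icacls F:\ANWDTest\ZZZ\DIMS /grant "L_NTFS_J_ZZZ_DIMS_R:(OI)(CI)RX" "L_NTFS_J_ZZZ_DIMS_W:(OI)(CI)M"
icacls F:\ANWDTest\ZZZ\Wagenbuch /grant "L_NTFS_J_ZZZ_Wagenbuch_R:(OI)(CI)RX" "L_NTFS_J_ZZZ_Wagenbuch_W:(OI)(CI)M"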

So far I think this is nothing special; now here is my issue:

When I delete the "Wagenbuch" or the "DIMS" folder, this removes the group "L_NTFS_J_ZZZ_R" from the "ZZZ" folder AND removes the group "L_NTFS_J_R" from the "ANWDTest" folder... and I have absolutely no idea why this is happening.

Does anyone see an error in the setup, or has anyone faced similar issues? I am totally lost here, with no idea where to start searching. Google did not help at all.

Thanks for the support!


UPDATE 1: To be sure it is not an issue with our file server, I set up the same structure on another 2016 server and faced the same issue.

UPDATE 2: In the meantime, I did the same setup on a 2012 R2 server and there is no issue at all, so this seems to be related to Server 2016.

UPDATE 3: I just did a test setup on a brand new 2019 server; I get the exact same error as on Server 2016.

DFS replication partners not in sync


Hello all,

I have about 100 servers in a DFS replication group, and it has become apparent that data is not replicating from the "master" share. Is there a way that I can mark the contents on one server as the valid copy and have that data replicate throughout the group?
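
For a first look at where things stall, the DFSR PowerShell module can report per-connection backlogs; a sketch (group, folder, and server names are placeholders):

# How many updates are queued from the "master" toward one partner?
Get-DfsrBacklog -GroupName 'RG01' -FolderName 'Share' -SourceComputerName 'HUB01' -DestinationComputerName 'SPOKE01' -Verbose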

thanks,

Chris

Moving DFSR drive from one server to another


I'm preparing to "upgrade" some virtual file servers by simply building new servers and swinging the data volumes over to them. The shares on the servers are currently DFSR-enabled and fully synced. My question is this: does anyone know whether it is necessary to delete and re-create the replication group on the new server, or is all the relevant DFSR data stored on the data drive itself?

Keep in mind that after the switchover, the IP, server name, and share names will all be the same as before.
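
For reference, the replication-group configuration itself lives in Active Directory, while the data drive carries the DfsrPrivate folder and the DFSR database (under System Volume Information); the AD-side membership can be listed like this (the group name is a placeholder):

# List members and content paths as recorded in AD
Get-DfsrMembership -GroupName 'RG01' |
    Select-Object ComputerName, ContentPath, StagingPathQuotaInMB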

Removing DFS Staging folders after removal of DFS Replication Group and target server.


I have two servers, Server A (W2K12R2) and Server B (W2K16). Both are set up as file servers. Files were replicated, and now we are discontinuing the replication; Server B will go into the virtual dustbin (the replication was set up incorrectly).

The source server was Server A, and it contains staging folders under DfsrPrivate in several locations. I want to remove them to gain disk space. Since we will not reinstate replication, I do not see the harm in removing the staging folders. The server will be subjected to a P2V and the hardware retired. Then I can reclaim all kinds of disk space.

However, everything I read about the subject says no, but the context is always quite different from my setup. Comments welcome.
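
For sizing the decision, the space actually held by one staging folder can be measured first; a sketch (the path is an example):

# Total bytes currently sitting in a staging folder
Get-ChildItem 'E:\Share\DfsrPrivate\Staging' -Recurse -Force -File |
    Measure-Object -Property Length -Sum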

Share Drive Audit log


Dear Folks,

I have created a Windows failover cluster and added the File Server role.

Now I want to enable audit logging on a particular share drive to monitor who is deleting or adding files on that share drive.

Auditing is already enabled, but the file-delete log is not generated. Please suggest whether I am missing any setting.
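
For completeness, file auditing needs both the audit policy and a SACL on the folder itself; a sketch of both halves (D:\Share is a placeholder):

# 1) Enable the File System audit subcategory
auditpol /set /subcategory:"File System" /success:enable /failure:enable

# 2) Add an audit (SACL) entry for delete operations on the shared folder
$acl  = Get-Acl 'D:\Share' -Audit
$rule = New-Object System.Security.AccessControl.FileSystemAuditRule(
    'Everyone', 'Delete,DeleteSubdirectoriesAndFiles',
    'ContainerInherit,ObjectInherit', 'None', 'Success')
$acl.AddAuditRule($rule)
Set-Acl 'D:\Share' $acl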

Thanks

Yogesh.

File server - disk management and partitioning - best practice


Hello all,

Right now, we are running a file server (Windows Server 2016) with a really big data disk (D:\) of roughly 6.5 TB. This server is virtualized by VMware, and the storage is located on the SAN.

Our storage administrator provides us with one big LUN, and we have created one big VMDK on this LUN. So right now, this server has two VMDK disks (C:\ and D:\). On disk D:\ we have created one share, which is mapped by all end users.

Now we had a discussion with our storage administrator, because he wants to rebuild the whole SAN/LUN environment. Instead of one big LUN, he will provide us 6 or 7 smaller LUNs (roughly 1 TB each), because from his side, small LUNs are easier to handle and he can manage (move) them better.

What is the best practice for disk management and partition design in such a case?

Should we use all the LUNs as separate disks (VMDKs) and build one volume spanning all disks? (Problem: if there is an issue with one disk, the whole volume fails.)

Should we use all the LUNs as separate disks and build a partition on each disk? (Problem: with more than one partition, we would have to create a share on each partition, and the users would have to map all of those shares; normally we would like to have only one share.)
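
(For the first option, one way to present several VMDKs as a single volume is a simple Storage Spaces pool; a minimal sketch with made-up names, and with the same caveat that a Simple space has no resiliency, so losing one disk loses the volume:)

# Pool all poolable disks and carve one simple volume out of them
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'DataPool' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem)[0].FriendlyName `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName 'DataPool' -FriendlyName 'DataVD' `
    -ResiliencySettingName Simple -UseMaximumSize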

Does anyone know what is the best solution?


Creating an all-flash S2D setup using tiered storage and NOT cache


I'm trying to create a 2-node S2D hyper-converged setup on 2019 using NVMe + SSDs, but because of my limited capacity, I do NOT want to lose the NVMe capacity to the cache and instead want to create a tiered storage environment.

Since it's only a 2-node cluster, I am going to enable nested resiliency per https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/nested-resiliency.

I did the standard Enable-ClusterS2D, so the NVMe drives became Journal usage, but I have since changed them to Auto-Select.

I believe I could change the SSDs to MediaType = HDD, but I assume this is not really the correct way. Likewise, I found that I could restrict the cache to a specific drive model, but I can't figure out an equivalent function for the capacity drives. Ideally, what I want to do is create some nested mirror-accelerated parity volumes where the NVMe drives hold the mirror data and the SSDs hold the parity.

Between the two servers, I have 8 NVMe drives and 18 SSDs, all roughly the same size, so beyond the fact that 4 doesn't divide into 9 evenly, losing the NVMe from each host's capacity is simply too much given the need for nested resiliency.

So...  is there any good way to create a storage tier that is linked to something like drive model?
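
For context, the cache can at least be kept out of the picture at enable time, and the linked doc defines the nested tiers by MediaType (SSD here, since the setup is all-flash); a sketch of that baseline, where tiering by drive model is exactly the part with no obvious parameter:

# Enable S2D without a cache so the NVMe drives stay in the capacity pool
Enable-ClusterStorageSpacesDirect -CacheState Disabled

# Nested-resiliency tiers as in the linked doc (all-flash, hence MediaType SSD)
New-StorageTier -StoragePoolFriendlyName 'S2D*' -FriendlyName 'NestedMirror' `
    -ResiliencySettingName Mirror -MediaType SSD -NumberOfDataCopies 4
New-StorageTier -StoragePoolFriendlyName 'S2D*' -FriendlyName 'NestedParity' `
    -ResiliencySettingName Parity -MediaType SSD -NumberOfDataCopies 2 `
    -PhysicalDiskRedundancy 1 -NumberOfGroups 1 `
    -FaultDomainAwareness StorageScaleUnit -ColumnIsolation PhysicalDisk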

You don't currently have permission to access this folder (Access denied)


Hi,

I have two file servers with DFS role.

On the source server (STOR01), I can access the folders and disks. On the new destination server (STOR02), I have a problem with the same folders and disks, using the same (administrative) account.

If I try to open some folders with custom permissions, I receive the message "You don't currently have permission to access this folder".

After pressing "Continue", my account is added to those folders and I can access them.

And where custom permissions are set on the disk itself, I receive Access Denied.

Permission for disk is:

The permissions are the same on the STOR01 server, but there are no problems with access there. Also, from the STOR01 server I can access this disk as \\stor02\g$, but from STOR02 itself I receive a "resource is not available" error.
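
For comparison, the root ACL and owner on each server can be dumped and diffed; a sketch (G: taken from the admin share mentioned above):

# Run on STOR01 and on STOR02, then compare the output
Get-Acl 'G:\' | Format-List Owner, AccessToString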

My account is a domain admin and a member of the STOR02 local Administrators group.

Please help me understand what the reason is.

Server 2012 R2 - DFS Replication not working in one direction - Insufficient Disk Space error, but it's not


I have seen several posts on this issue and possible solutions; so far nothing has helped in my case.

We have two servers, S1 (primary) and S2, connected over the LAN. We have a Users folder in the replication group, and the replication group's bandwidth is set to Full. From S1 to S2 it's working fine, and the backlog is very low, which is normal. But from S2 to S1 it's stuck at 7779; an hour ago it was 7780. I have checked the DFSR event logs, DFSR diagnostic reports, etc.

In the event log and also in the report, for server S2, there is an error:

DFS Replication unable to replicate files for replicated folder Users due to insufficient disk space.

  Affected replicated folders: Users 

  Description: The DFS Replication service was unable to replicate one or more files because adequate free space to replicate the files was not available for staging folder E:\Users\DfsrPrivate\Staging. Event ID: 4502 
  Last occurred: Wednesday, January 30, 2019 at 3:34:46 PM (GMT10:00) 

  Suggested action: Determine whether the volume reported hosts the replicated folder, the staging folder or both as in default configuration. See additional information about disk space under the informational section in the table titled "Current used and free disk space on volumes where replicated folders are stored". Ensure that enough free space is available on the volume for replication to proceed or move the associated replicated folder or staging folder to a different volume that has more free space.

Now, our S2 E: drive is 42.1 TB with 28.3 TB free; the S1 E: drive also has similar space. Users is the root shared folder that contains the individual user folders. Usually users' files are not that big.

The Users folder's staging folder size has never been a problem, as I allocated sufficient space (200 GB) on both servers. When I checked the current staging folder size, on S2 it's only 4.49 GB; on S1 it's 146 GB.

When I run "Dfsrdiag.exe ReplicationState" on S2, it gives me this:

dfsrdiag.exe ReplicationState /member:S2

  Total number of inbound updates scheduled: 88

Summary

  Active inbound connections: 1
  Updates received: 120

  Active outbound connections: 0
  Updates sent out: 0

Operation Succeeded
For S1,
dfsrdiag.exe ReplicationState /member:S1

  Total number of outbound updates being served: 15

Summary

  Active inbound connections: 0
  Updates received: 0

  Active outbound connections: 1
  Updates sent out: 15

Operation Succeeded
Just a week ago, S2's replication service ran into an issue and had to rebuild its database, then did an initial replication that took around 2-3 days to complete. Since then, the replication service has been running fine. The most recent event log entry that catches my eye after S2 was last rebooted is the drive E: free-space issue (Event ID 4502). Right before that there is another entry, 5014:
The DFS Replication service is stopping communication with partner S1 for replication group RG01 due to an error. The service will retry the connection periodically. 
Additional Information: 
Error: 1818 (The remote procedure call was cancelled.) 
Connection ID: 257B85DC-8C09-42EF-9727-4176A2F88527 
Replication Group ID: 158FE127-1927-463F-88CC-70E6B0014656
That is what I have, and I am going in circles trying to find out what is responsible for S2 not replicating, or replicating very slowly, to S1. Any advice/help will be appreciated. Thank you.
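
For reference, the staging quota that event 4502 complains about can be inspected and raised per member; a sketch using the names from the post (RG01 comes from the 5014 event above):

# Inspect the current staging quota for each replicated folder on S2
Get-DfsrMembership -GroupName 'RG01' -ComputerName 'S2' |
    Select-Object FolderName, StagingPathQuotaInMB
# Raise it for Users, e.g. from 200 GB to 256 GB (values are in MB)
Set-DfsrMembership -GroupName 'RG01' -FolderName 'Users' -ComputerName 'S2' `
    -StagingPathQuotaInMB 262144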


Failed to access Work Folders from a shadow copy volume after Windows 10 1803

My Avamar backup fails to access the Work Folders folder in Windows 10 (the version is 1803; 1809 has the same issue).

Work Folders can be accessed smoothly in a normal environment. Only when I create a shadow copy volume does the backup process (avtar) fail to touch it, with the following errors.

And if I launch the backup process from the command line, the backup works. The only difference between the processes is the user they are launched as: SYSTEM or administrator.

So it looks like a process running as administrator can access Work Folders from VSS, while a process running as SYSTEM can't.

VSS path:
\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy11\Users\fuc4\Work Folders

APIs:
FindFirstFile
GetFileAttributesExW
CreateFileW

Error:
19 : The media is write protected.
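
One way to reproduce the SYSTEM-versus-administrator contrast outside the backup product might be Sysinternals PsExec, assuming it is available (the path is the one above):

# Run dir as SYSTEM against the shadow-copy path; compare with a plain admin prompt
psexec -s cmd /c dir "\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy11\Users\fuc4\Work Folders"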

As far as I know, Windows limits some system processes' access to the network.

But for Work Folders, the behavior was never like this before Windows 1803.

Does anybody know how to resolve this kind of issue?


Creating a virtual disk for SOFS


I am struggling to create a virtual disk for SOFS on Server 2016.

I have 3 JBODs, each with 3 SSDs, and I want to create a 3-column, 2-data-copy mirror.

However, every time I try to create this it fails. It tells me I have the wrong disk setup for the resiliency type I want. But why wouldn't 3 SSDs per JBOD work?

I can create a mirror with 1, 2, 3 or 4 columns as long as I don't set enclosure awareness. However, as soon as I try to enable enclosure awareness, it gives me the error message.
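
For reference, this is roughly the command shape being attempted (pool and disk names are placeholders); it is the -IsEnclosureAware flag that triggers the failure described above:

# 3 columns x 2 data copies, enclosure-aware, across the 3 JBODs
New-VirtualDisk -StoragePoolFriendlyName 'Pool1' -FriendlyName 'VD1' `
    -ResiliencySettingName Mirror -NumberOfColumns 3 -NumberOfDataCopies 2 `
    -IsEnclosureAware $true -UseMaximumSize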

Any help with how this could be set up, or what I am doing wrong, is appreciated.

Server 2016 Previous Versions: last few days not visible


Hi All,

On several 2016 file servers we have seen that the last few days of previous versions are not visible. In the Shadow Copies tab all snapshots are visible, but in the Previous Versions tab the last few days are missing. Does anyone have any idea?

Good to know: we have raised the maximum number of VSS snapshots to 512 via the well-known DWORD:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VSS\Settings]
"MaxShadowCopies"=dword:00000200

And we have configured the VSS snapshots on a separate disk.
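
(The same value can be set and read back from PowerShell; 0x200 = 512. Existing snapshots can be listed with "vssadmin list shadows".)

# Set and verify the MaxShadowCopies value
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\VSS\Settings' `
    -Name 'MaxShadowCopies' -Value 512 -Type DWord
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\VSS\Settings' |
    Select-Object MaxShadowCopies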

Thanks!


Top solutions for troubleshooting common issues on S2D


Storage Spaces Direct uses industry-standard servers with local-attached drives to create highly available, highly scalable software-defined storage at a fraction of the cost of traditional SAN or NAS arrays. Its converged or hyper-converged architecture radically simplifies procurement and deployment, while features such as caching, storage tiers, and erasure coding, together with the latest hardware innovations such as RDMA networking and NVMe drives, deliver unrivaled efficiency and performance. 

 

In this section, you will learn about the states that can be invaluable when troubleshooting various issues, ways to troubleshoot your Storage Spaces Direct deployment, and frequently asked questions related to Storage Spaces Direct.
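
As a starting point, the health and repair state mentioned above is usually inspected with a few stock cmdlets; a minimal sketch, run on any cluster node:

# Outstanding repair and rebalance actions
Get-StorageSubSystem Cluster* | Get-StorageHealthAction
# Per-drive and per-volume health at a glance
Get-PhysicalDisk | Format-Table FriendlyName, MediaType, HealthStatus, OperationalStatus, Usage
Get-VirtualDisk | Format-Table FriendlyName, HealthStatus, OperationalStatus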

 


Please remember to mark the replies as answers if they help.
If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

When we turn off the old AD server, one user does not have access to files on the file server


We have an old AD server (2003) that we are retiring in favor of a Windows Server 2016 machine. When we turn off the old 2003 server, one user cannot get to files on the file server (another 2003 server).

They can ping the file server but are denied access to it, even though they do have permissions.
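
Two quick checks from the affected user's machine could narrow down whether this is an authentication (domain controller) issue rather than a permissions one (the domain name is a placeholder):

# Which domain controller is this client actually talking to?
nltest /dsgetdc:contoso.local
# Which Kerberos tickets does the user currently hold?
klist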



GPO used to create a folder on a shared drive and a desktop icon is not working anymore


I have a GPO such that, when a user logs on to a domain PC, a folder named "%USERNAME%" is created on the shared drive and a shortcut referring to that location is placed on the desktop.

This was working fine, but we had an issue with some security permissions, and now the shortcut is not being created and neither is the folder on the shared drive. I have verified that new users can write to the shared folder and create a new folder manually. I have SYSTEM and Domain Users with Full Control permissions on the shared folder, and I have attached the settings for the GPO. I have also checked GPRESULT /R and ensured that the GPO is being applied to the machine.
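
To separate a GPO problem from a permissions problem, the two actions the preference item performs can be replayed manually as an affected user; a sketch (server and share names are placeholders):

# 1) Create the per-user folder, as the GPP item would
New-Item -ItemType Directory -Path "\\fileserver\users$\$env:USERNAME"
# 2) Drop a desktop shortcut pointing at it
$ws  = New-Object -ComObject WScript.Shell
$lnk = $ws.CreateShortcut("$env:USERPROFILE\Desktop\My Files.lnk")
$lnk.TargetPath = "\\fileserver\users$\$env:USERNAME"
$lnk.Save()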


Jeremy Robertson Network Admin


Deduplication Problems | Garbage Collection hangs at 0%


Hello, 

For a couple of months now, we have had a problem with deduplication in our live production environment.
We have two servers (FS01 and FS02) and use DFS to replicate files between them.
One server is located at the main office and the other at the second office.
We got a DFS backlog of 1,000,000+, so we decided to move the servers onto the same network to sync the files.

When we start garbage collection on FS02, it works well and finishes after 4-6 hours.
When we start garbage collection manually on FS01 with high priority, the job starts.
Now, after 3 days, when I check the status it still hangs at 0%.
How can we solve this problem?
We already fully updated the server and restarted it before starting the garbage collection.
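
For reference, this is roughly how such a job is started and watched (the volume letter is a placeholder):

# Kick off a full garbage collection at high priority, then poll its progress
Start-DedupJob -Volume 'D:' -Type GarbageCollection -Priority High -Full
Get-DedupJob | Format-Table Type, State, Progress, Volume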

If you need any more information let me know. 



Enabling EFS on FileStream Folders


Please help me with the following:

Below is my Environment:

Windows Server 2012 R2 Standard

SQL Server 2014 SP2 GDR

Availability Groups (2014) with 3 nodes (2 synchronous and 1 BCP asynchronous)

I have 7 databases with FILESTREAM enabled.

Each of the 7 databases' FILESTREAM data folders is around 500 GB.

Due to security policy, I need to enable both Transparent Data Encryption (TDE, for structured data) and Encrypting File System (EFS, on the FILESTREAM folders).

While enabling EFS on the FILESTREAM folders, I am getting the below error.

(NOTE: I am doing it by taking the SQL Servers offline, and before taking the services offline, I fail over the AG to the next available synchronized AG node.)

I cannot ignore the error and move on, so please advise on the below:

I even tried turning off the antivirus and firewall; still no luck.

  1. What is the root cause of this issue, and how can I properly enable EFS on the 7 databases' FILESTREAM folders?
  2. Can I try enabling EFS on multiple databases' FILESTREAM folders in parallel, keeping in mind their sizes (500 GB each)?
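
(For reference, EFS is usually applied to a folder from an elevated prompt with cipher; a sketch with a placeholder path, run while the database is offline as described above:)

# Encrypt the folder and everything below it with EFS
cipher /e /s:"D:\SQLData\FS_Folder"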

Kindly advise. Thanks


Best Regards, SQLBoy

