Server A has a shared folder (GPShare) that I can't browse as \\ServerA\GPShare from the server itself. I receive the following message:
"Network Error: Windows cannot access \\ServerA\GPShare. You do not have permission to access \\ServerA\GPShare. Contact your network administrator to request access."
However, if I log into Server B and browse to the same UNC path, I can access it without any problem.
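In case it matters, these checks can be run directly on ServerA to compare the share ACL against the logged-on user's token (both are stock commands):

# Show the share-level permissions for GPShare
net share GPShare
# Show the current user's group memberships, to compare against the share/NTFS ACLs
whoami /groups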
At the moment I am quite desperate. Not because nothing works, but because the "way" it works defies every explanation I have found around iSCSI-based Storage Spaces and MPIO.
Grab something to eat while reading this; it will be a long post, because I cannot narrow the actual problem down to one specific thing. So I'll start from scratch with the hardware layout, pass over some mountains of special thoughts, and finally arrive at the performance valley.
---
The Goal
I'm running a small business over here, 7 employees so far. We have a heavy dependency on our data (which is around 9 TB raw): no data, no possibility to work. A single week without access would have a huge financial impact. So, of course, the goal was to design a failover cluster that is reliable - and cheap. (We cannot afford solutions in the 50,000-bucks range.)
We will surely upgrade to "traditional" solutions in the future, but for now we have to get the best out of the hardware we can afford.
The Hardware
We are running 2 cluster nodes, both with an internal triple-mirror SAS-based storage pool that can deliver around 800 MB/s of read performance. With local pools you cannot build an HA cluster, so we added 3 NAS units (as external storage), accessed via iSCSI, to act as Cluster Shared Volumes. (The local pools are dedicated to backups now.)
Each node, as well as each NAS, uses three 1 Gbit/s connections.
For monetary reasons the NAS units themselves are not HA (no redundant PSU), so we have to be able to sustain the failure of a complete NAS.
The Storage Layout
Therefore, our best-paid but cost-conscious guy (that means "me") figured out the following to be the cheapest but "safest" layout for our cluster:
- Each NAS has 4 disk bays.
- One slot per NAS is dedicated to an SSD.
- Three slots per NAS are running WD Red 3.5" hard disks.
To achieve the goal of being able to lose a whole device, the following layout has been chosen. (The cluster utilizes the iSCSI volumes of the NAS units to form a storage pool):
[Image: disk/LUN layout across the three NAS units]
So, there are two pools on the cluster: one being the SSD pool, one being the HDD pool, made of 6+6+6+6+6+8 TB disks. (Each disk - a total of 3 disks per NAS - is exposed as a single LUN within the SAME iSCSI target per NAS, if this might be important.)
(Grouping 2 disks for the HDD pool into a RAID 0 ensures that there are "2 disks" per NAS, so a triple-mirrored pool can handle the loss of a NAS.)
iSCSI - MPIO
On every cluster node I set up the required iSCSI connections to each NAS (1 NAS = 1 iSCSI target, but 3 LUNs).
Each NAS is likewise connected with three 1 Gbit NICs, and each cluster node has three 1 Gbit NICs on the respective subnets to utilize MPIO (3 different subnets, 1 NIC per device per subnet).
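For reference, the connections on each node were set up roughly like this (a sketch; the 10.0.x.x addresses are placeholders for our iSCSI subnets, and it assumes one target per portal):

# Enable MPIO and let it claim iSCSI devices (the feature install needs a reboot)
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
# Register the NAS portal, then open one session per subnet so MPIO sees multiple paths
New-IscsiTargetPortal -TargetPortalAddress "10.0.1.10"
$t = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $t.NodeAddress -IsMultipathEnabled $true -IsPersistent $true `
    -InitiatorPortalAddress "10.0.1.21" -TargetPortalAddress "10.0.1.10"
Connect-IscsiTarget -NodeAddress $t.NodeAddress -IsMultipathEnabled $true -IsPersistent $true `
    -InitiatorPortalAddress "10.0.2.21" -TargetPortalAddress "10.0.2.10"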
The Issues
Thanks for still reading this! There are several issues with this "setup" which I cannot understand:
Redirected Access
I know what redirected access means to clusters, I know how to utilize its benefits, and I know WHEN a cluster should switch to redirected access.
However, my cluster seems "stuck" in redirected access even though not a single VHD is reporting redirected access (a way to verify this per node is sketched after the list below):
- Copying files from "node1" to a server whose Cluster Shared Volume is managed by "node2" causes traffic to run over the network from "node1" to "node2", and then use the iSCSI connections on node2.
(Even though node1 can access the storage with its own iSCSI connections.)
- Copying files from "node2" to a server whose Cluster Shared Volume is managed by "node1" causes traffic to run over the network from "node2" to "node1", and then use the iSCSI connections on node1.
(Even though node2 can access the storage with its own iSCSI connections.)
- Copying something from nodeX to a VHD also managed by nodeX uses the iSCSI connections of nodeX - fine.
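This is the check referenced above: since 2012 the FailoverClusters module can show, per node, whether I/O to a CSV is direct or redirected ("Volume1" is a placeholder for the CSV name):

# For every node, show whether CSV I/O is Direct, FileSystemRedirected or BlockRedirected
Get-ClusterSharedVolumeState -Name "Volume1" |
    Format-Table Node, VolumeFriendlyName, StateInfo, FileSystemRedirectedIOReason -AutoSize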
Performance - Strange Behavior 1:
Given the layout, I would assume that a node writing "directly" to the iSCSI storage is faster than using redirected access.
However:
The first peak shows data being moved from "node1" to a Cluster Shared Volume managed by "node1": ~60 MB/s.
The third peak shows data being moved from "node1" to a Cluster Shared Volume managed by "node2": ~90 MB/s.
??? (!)
[Image: copy-throughput graph showing the peaks described above]
MPIO-Policies
On the Internet I read a lot about the different MPIO policies. The common wisdom seems to be that Least Blocks is the fastest while Round Robin sucks. However, MY tests produced EXACTLY the opposite result:
Writing 2 TB of data with "Round Robin": 166 MB/s on average
Writing 2 TB of data with "Least Queue Depth": 189 MB/s on average
Writing 2 TB of data with "Least Blocks": 102 MB/s on average
Writing 2 TB of data with "Weighted Paths": 118 MB/s on average
Besides performance, the NIC utilization doesn't make any sense either (the way I switched policies between runs is sketched below):
Round Robin: all NICs used equally (that's fine)
Least Queue Depth: NICs 1, 2 and 3 used with a distribution of 50:25:25
Least Blocks: only the primary NIC is used, a distribution of 100:0:0
Weighted Paths (configured with one primary, secondary and tertiary path per device): 60:35:5
Thoughts?
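For completeness, this is roughly how the policies were switched between test runs (a sketch; mpclaim's numeric policy codes are 2 = Round Robin, 4 = Least Queue Depth, 5 = Weighted Paths, 6 = Least Blocks):

# Default policy for newly claimed iSCSI LUNs (MPIO PowerShell module)
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
# Or per MPIO disk with mpclaim: -l sets the policy, -d 0 targets MPIO disk 0
mpclaim -l -d 0 4
# Show all MPIO disks and their current policies
mpclaim -s -d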
We can "live" with that performance, I just really wonder how these results can be explained.
Especially: why does "redirected access" seem faster than direct access from the very node that handles the redirected access of the other node? And why does our cluster seem to use redirected access all the time?
Live-Example:
Currently I'm moving data sitting on the local pool of hyperv1 (which has full iSCSI access to the Cluster Shared Volume in question) to a Cluster Shared Volume managed by hyperv2:
- 83 MB/s are sent over the (heartbeat) network from hyperv1 to hyperv2
[Image: network utilization during the copy]
- Networks on the cluster are configured as shown below.
(Maybe I misunderstood the "Cluster Only" setting?)
[Image: cluster network configuration]
- hyperv2 is writing the data to the NAS (Least Queue Depth in this example).
- Why doesn't hyperv1 write the data using its own iSCSI connections?
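One workaround I have not fully tested: move ownership of the CSV to the node that is about to write, so that its own paths get used ("Cluster Disk 1" is a placeholder for the CSV resource name):

# Make hyperv1 the coordinator node for the CSV before copying from hyperv1
Get-ClusterSharedVolume -Name "Cluster Disk 1" | Move-ClusterSharedVolume -Node hyperv1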
This is a follow-up question to "DFSR - Reversing replication on a two node pair" (sorry, but I can't include links, as apparently my account cannot be verified yet), in which I am attempting to reverse replication in a two-node DFSR setup from a read/write to a read-only destination node.
The answer to the previous question advised that blowing away the existing DFSR configuration and setting this up from scratch was the safest way of reversing the current roles.
However, if all DFSR settings, databases, the DfsrPrivate folder etc. are removed, is it safe to assume that the actual file/folder content in the source and destination locations is identical and can effectively be treated as pre-seeded? I've run some exhaustive checks via PowerShell that indicate that the file hashes match in both locations. My only concern is some anecdotal evidence suggesting that issues may arise if I follow this practice and that it's not recommended by Microsoft.
If doable, this would cut out the robocopy step when establishing the DFSR configuration; I would just need to set up DFSR using the PowerShell cmdlets, export the database from the source location and import it into the destination location, as per the process I've used successfully in the past.
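For anyone repeating the hash checks: a DFSR-aware way is Get-DfsrFileHash (in the 2012 R2 DFSR module), which computes the same hash DFSR uses to validate pre-seeded content; a sketch with placeholder paths (note it takes a per-folder wildcard and is not recursive):

# Hash the files on the source; run the same on the destination and diff the CSVs
Get-DfsrFileHash -Path "D:\ReplicatedFolder\*" |
    Export-Csv "C:\temp\source-hashes.csv" -NoTypeInformation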
I just want to find a tool or script to run on a Windows file server which hosts more than 30 shared folders copied from a legacy Server 2003 machine. Is there a tool which can produce a log of accessed shared folders for, let's say, 3 months, and find the shared folders not accessed for months (maybe years) so that they can be deleted safely?
I tried some tools found on this website https://www.raymond.cc/blog/track-who-modified-or-deleted-files-in-your-shared-folder/ but all of them only show the shares currently being accessed by users. I also tried the Sysinternals tools, with no luck. I know I can see sessions in Computer Management, but that doesn't help either, as I want to go back a couple of months. I checked the audit settings in GPO, but we only do failure auditing on object access. Help!!!
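There is no built-in history to look at retroactively, but going forward one option is success auditing of share access plus a periodic harvest of the resulting events (a sketch; event 5140 is "a network share object was accessed"):

# Enable success auditing for Object Access > File Share
auditpol /set /subcategory:"File Share" /success:enable
# Summarize which shares have been touched since auditing was enabled
Get-WinEvent -FilterHashtable @{LogName='Security'; Id=5140} |
    ForEach-Object { ([xml]$_.ToXml()).Event.EventData.Data |
        Where-Object Name -eq 'ShareName' } |
    Group-Object '#text' | Sort-Object Count -Descending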
I am trying to create tiered storage on a single HDD and SSD. It is created successfully via PowerShell and the GUI, but when I try to use the Set-FileStorageTier cmdlet it shows:
Set-FileStorageTier -FilePath H:\ACCTRES.dll -DesiredStorageTierFriendlyName tt_Microsoft_HDD_Template
Set-FileStorageTier : The specified volume does not support storage tiers.
When I try to use defrag:
PS C:\Windows\System32> defrag H: /G
Microsoft Drive Optimizer
Copyright (c) 2013 Microsoft Corp.
Invoking tier optimization on New Volume (D:)...
Tier Optimization: 100% complete.
The request is not supported. (0x80070032)
The OS is Windows Server 2012 R2 (build 9600), freshly installed (without any updates or software), but the same thing happens on a fully updated OS.
SSD - SAMSUNG MZ-7KM4800 SATA (480GB)
HDD - Toshiba 500GB SATA
HDD - WD Enterprise Storage 2TB
Server: ASUS PS4/TS300. Tried on the internal Intel storage controller and on an LSI MegaRAID 9266-4i in JBOD mode.
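In case the cause is how the virtual disk was built rather than the hardware: Set-FileStorageTier only works against a volume that sits on a virtual disk created from storage tiers, and tiered virtual disks must be fixed-provisioned. A sketch of the shape that should work (pool and tier names are placeholders):

# Some controllers don't report MediaType; check before building tiers
Get-PhysicalDisk | Format-Table FriendlyName, MediaType, CanPool
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD
# Thin provisioning is not supported together with tiers
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredVD" `
    -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,400GB `
    -ResiliencySettingName Simple -ProvisioningType Fixed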
I have two servers in the same site as DFS partners. Both servers are Domain Controllers, file servers, and host the DFS namespace and replication. I changed the namespace referrals to point to DC1, and did the same for all the folders. My staging quota is about 50 GB and the ConflictAndDeleted folder is 30 GB. The folders are like this:
DC1 and DC2:
\\domain.com\SharedFolders
SharedFolders is pointing to C:\folder on dc1 and C:\folder on dc2
The replication group replicates C:\folder on DC1 and C:\folder on DC2.
DC1 is a sending member to DC2, and DC2 is also a sending member to DC1.
The size of C:\folder is 62.0 GB on each server
The problem is that three users have reported that they were working on an Excel file. They went to close the file and it prompted them to save, so they hit yes. When they went back to the share looking for the Excel file, it had disappeared. I looked in C:\folder on each file server and the file isn't there.
I see in the logs for the missing files EventID 4412: "The DFS Replication service detected that a file was changed on multiple servers. A conflict resolution algorithm was used to determine the winning file. The losing file was moved to the Conflict
and Deleted folder."
I go to the ConflictAndDeleted folder and give the user the file back. It has happened three times now, to three different users.
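In case it helps others hitting the same thing: on 2012 or newer, retrieving the losing files from ConflictAndDeleted can be scripted via the DFSR module instead of done by hand (the manifest path below is the standard location under the replicated folder):

# List what DFSR has preserved for this replicated folder
Get-DfsrPreservedFiles -Path "C:\folder\DfsrPrivate\ConflictAndDeletedManifest.xml"
# Copy the preserved files out to a separate location for inspection
Restore-DfsrPreservedFiles -Path "C:\folder\DfsrPrivate\ConflictAndDeletedManifest.xml" `
    -RestoreToPath "C:\RestoredFiles" -CopyFiles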
On occasion, files and/or entire folders get deleted or moved on this server. I would like to have an audit trail to trace these issues when they occur. Is this something that can be done in Windows Server 2012 R2? If so, what steps need to be taken to implement it?
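From what I have gathered so far, it can: the audit policy has to be enabled, and the folder itself needs an audit entry (SACL). A minimal sketch, assuming D:\Share is the folder to watch; corrections welcome:

# 1. Enable success auditing for file-system object access
auditpol /set /subcategory:"File System" /success:enable
# 2. Add a SACL on the folder: audit Everyone for deletions, inherited by children
$acl = Get-Acl "D:\Share" -Audit
$rule = New-Object System.Security.AccessControl.FileSystemAuditRule("Everyone", "Delete,DeleteSubdirectoriesAndFiles", "ContainerInherit,ObjectInherit", "None", "Success")
$acl.AddAuditRule($rule)
Set-Acl "D:\Share" $acl
# Deletions and moves then appear in the Security log as events 4660 and 4663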
On the same Windows 2012 R2 file server, I have two folders (well, hehe... of course I have more than two).
One is named "Users" and contains personal folders for all users in Active Directory. Users are created with "Connect Z: to \\fs01\users\%username%"; at user creation, their personal folder is automatically created and set with the correct permissions.
Browsing \\fs01\users you can only see your own folder, not any other user's folder.
I have another folder there, shared as \\fs01\functions. This contains subfolders named after specific functions within the organization. Users are grouped into "function groups" and permissions are set on these subfolders according to group function.
Now... when browsing \\fs01\functions it is possible to see all subfolders in \\fs01\functions, even the ones you do not have permission to view (it is not possible to open/execute them, but they are listed).
Share permissions on the root folder are "Everyone Full Control" on both. NTFS permissions on the root folder are (once again, on both folders) Everyone: Read and Execute, List folder contents, Read, with "applies to" set to This Folder Only (Administrator has Full Control).
To me there is no difference in how the permissions are set up, but \\fs01\users works as wanted (folders you are not permitted to view/execute are not listed), while \\fs01\functions does not.
Am I forgetting something? What is the correct approach in this case?
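One thing I came across while writing this: the listing behaviour is not an NTFS permission at all but the per-share Access-Based Enumeration setting, so two shares with identical ACLs can behave differently. Checking and fixing it would look like this on 2012 R2:

# Show whether ABE is enabled per share
Get-SmbShare | Format-Table Name, FolderEnumerationMode
# Hide folders users can't access on the functions share
Set-SmbShare -Name "functions" -FolderEnumerationMode AccessBased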
We have a bit of an interesting request/question. We have been asked to apply permissions to a folder structure so that staff can only create files in the very bottom directory of the structure, but are not allowed to create folders. I have been looking at scripting this via icacls but can't figure out which permission I need to apply to allow only the creation of files.
Also, so far every time I have tested this it keeps applying the permission as a "Special Permission". I have included a copy of the structure below; does anyone have any suggestions as to a better way to go about applying the permissions being requested?
1.0 - Root Folder - Read/Execute
1.1 - SubFolderL1 - Read/Execute
1.2 - SubFolderL2 - Read/Execute
1.3 - SubFolderL3 - Read/Execute/Create File Only at this level - staff not allowed to create folders.
I hope this question makes sense. I have a PowerShell script to find all the bottom/empty directories; I just need to get the permissions part right. Does anyone have any suggestions as to how we might go about completing this?
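For the record, what I understand so far: "create files" without "create folders" is the WD (write data / add file) specific right without AD (append data / add subdirectory), and icacls displays any combination that doesn't match a simple right as "Special Permissions", so that part is expected. A sketch of the grant ("DOMAIN\Staff" is a placeholder; the parenthesized list spells out RX as specific rights, plus WD):

# RC,S,RD,REA,RA,X = the components of RX; WD allows creating files; no AD, so no folders
# (OI) makes the entry apply to files created inside the folder as well
icacls "D:\Root\SubFolderL1\SubFolderL2\SubFolderL3" /grant "DOMAIN\Staff:(OI)(RC,S,RD,REA,RA,X,WD)"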
I know that when my DHCP server goes down, all the DHCP clients will get an APIPA address in the 169.254.x.x range.
My question is: after all the clients have been assigned APIPA addresses, can they still do file sharing among themselves? Which services will and won't work on APIPA?
My organisation has a folder 'A' shared and replicated using DFS, including all of its subfolders. I have been asked to replicate one particular subfolder within folder 'A' to a group of users. However, the server they want this replicated to is not in a safe location, which is why they only want this one particular folder within folder 'A' replicated to it, rather than all the other folders within folder 'A'. I can set permissions so the users can only access that particular subfolder, but if someone managed to steal the server from its location, all the folders would be on it (since everything is replicated), which is a security risk. So is it possible to replicate a folder that is already being replicated under another folder, separately, to particular users on a server? When I try it, I get errors saying the folder is already being replicated under .....
Metadata is consuming a lot of memory on one of our test file servers. I've configured the DynCache service to cap the maximum memory for metadata, but it is still consuming more. Is there any other configuration needed in DynCache?
I have Server 2012 with one storage pool and one virtual disk. The virtual disk uses a parity layout and thin provisioning. It contained 4 physical disks. One of the physical disks failed; it was pulled and a larger replacement disk was added.
Server Manager now lists the failed drive as "Retired". Every attempt to remove the disk results in:
Error removing physical disk: There was an error removing {179f49b7-7657-11e2-93ea-806e6f6e6963} (fileserver). One of the physical disks specified could not be removed because it is still in use.
If I check the properties of the virtual disk, it states under health: "Physical disks in use", and lists the retired drive as "Lost Communication".
The physical drives have lots of free space, and the new drive has been added to the storage pool (but not the virtual disk). The "repair virtual disk" option is grayed out.
It seems I cannot attach the virtual disk until I remove the retired drive.
How can a disk that's sitting unplugged in another room be "in use"? How do I remove the retired drive?
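For anyone else stuck here, a PowerShell sequence worth trying (friendly names are placeholders): the pool won't release a retired disk while the virtual disk still has extents mapped to it, so a repair has to finish first, even if the GUI option is grayed out:

# Find the dead disk and make sure it is marked retired
$bad = Get-PhysicalDisk | Where-Object OperationalStatus -eq "Lost Communication"
Set-PhysicalDisk -InputObject $bad -Usage Retired
# Rebuild the virtual disk onto the remaining/new disks, then watch progress
Repair-VirtualDisk -FriendlyName "VDisk1"
Get-StorageJob
# Once the virtual disk is healthy again, the retired disk can leave the pool
Remove-PhysicalDisk -PhysicalDisks $bad -StoragePoolFriendlyName "Pool1"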
I am facing a very critical issue with Windows Server 2008 R2 Standard Edition. All external storage has become unwriteable. On this storage we take backups of all our database and application servers. I checked all the security settings, like ownership and attributes, but the problem remains the same. I also changed the HDD, but the problem remains on the new HDD as well. Please help me; I don't have much experience resolving this kind of issue on my own.
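One thing I have not checked yet is the disk-level read-only attribute; diskpart can show and clear it independently of any NTFS security settings ("disk 1" below is a placeholder for the external disk's number):

diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> attributes disk
DISKPART> attributes disk clear readonly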
Regards,
Yograjsinh Zala
I have a file server cluster on 2012 R2. I implemented this new server about three months ago, migrating the file server from 2008 R2. The new implementation consists of two physical Dell servers connected to an EqualLogic SAN over an iSCSI network.
We noticed last week that we can't map a drive using alternate credentials. It's not an authentication issue, since you can log in with those alternate credentials and map the drive; it just doesn't work when you try to map the drive while supplying different credentials. We are getting "Windows cannot access \\servername\path\. Check the spelling of the name. Otherwise there might be a problem with your network."
The weird thing is that I can map the C:\ drive (which is local to the server) using alternate credentials.
I've searched online but I haven't found a solution yet. Any recommendations?
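To narrow down whether Kerberos is involved, it might help to try the same mapping against the NetBIOS name, the FQDN, and the raw IP (the IP forces NTLM); placeholder names below, and the * prompts for the password:

net use X: \\servername\share /user:DOMAIN\altuser *
net use Y: \\servername.domain.local\share /user:DOMAIN\altuser *
net use Z: \\10.0.0.50\share /user:DOMAIN\altuser *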
I have a Win2012R2 NTFS deduplicated volume: 3.5 TB in more than 5 million files,
with a deduplication rate of roughly 50%.
The data is on a volume mounted at the folder C:\DATA.
I just deleted more than 165 GB of files (more than 155,000 files) with SHIFT+DEL. It took a little longer, but OK, the data was deleted.
In the GUI, in the volume's properties (in this case, properties of C:\DATA, then properties again on the mounted volume), the volume shows 5 GB of free space. On the command line, dir \ reports "21 Dir(s) 5.876.875.264 bytes free", so the same amount. That makes no sense, because I've just SHIFT+DELETED a lot of data.
Last month, after a restore test, we noticed the same behaviour. We ran chkdsk /f /r with no results; several days later, the disk space was reclaimed, apparently without intervention.
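A likely explanation for the delayed reclaim: on a deduplicated volume, deleted files only return free space after the deduplication GarbageCollection job has run (a weekly scheduled task by default). It can be started manually; a sketch, assuming the mount path is accepted as the volume identifier:

# Kick off garbage collection on the dedup volume and watch it
Start-DedupJob -Volume "C:\DATA" -Type GarbageCollection
Get-DedupJob
# Check savings and free space afterwards
Get-DedupStatus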
We have a problem with DFS Namespaces hosted on file server clusters. We have two Windows Server 2008 R2 servers hosting 13 file server resources, along with DFS-N and DFS-R. Each resource is mapped to a shared drive allocated from the SAN.
I.e. FileServer 1 uses a disk (let's call it D:\) from the storage, which is presented on both nodes; each file server has two client access points (e.g. FS01 and FS001). We have 13 such file server resources.
The target scenario was to migrate the DFS cluster to Windows 2012.
So two new Windows 2012 R2 boxes were installed, and the Failover Clustering, DFS-N and DFS-R components were set up.
The cluster configuration was copied from the old cluster to the new Windows 2012 R2 cluster; this copied the cluster configuration, resources and corresponding IP addresses.
The shared resources (storage disks) were mapped to the new servers. The old cluster was shut down and the resources all came online on the new cluster.
After this migration, users were unable to access shared data (which was previously working); they were presented with the errors "Access denied" or "The target account name is incorrect".
Accessing the file server shares directly via the client access point names, i.e. \\FS01\sharename and \\FS001\sharename (instead of the DFS namespace), prompted users for a username and password, and this didn't work (even after entering the username and password).
Eventually we had to move the resources back to the old Windows 2008 R2 cluster, and everything was restored.
If anyone has done this migration, please share your thoughts.
List of things checked:
DNS - A records and PTR
Firewall is turned off
Reboot of both nodes
No antivirus
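"The target account name is incorrect" usually points at Kerberos, i.e. the computer objects and SPNs behind the client access points. Something we plan to compare on the next attempt (FS01 as an example):

# List the SPNs registered on the client access point's computer object
setspn -L FS01
# From a client, see whether a service ticket for the access point can be obtained
klist get cifs/FS01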