Channel: File Services and Storage forum

Creating a virtual disk for SOFS

I am struggling to create a virtual disk for SOFS on Server 2016.

I have 3 JBODs, each with 3 SSDs, and I want to create a 3-column, 2-data-copy mirror.

However, every time I try to create this it fails, telling me I have the wrong disk setup for the resiliency type I want. Why wouldn't 3 SSDs per JBOD work?

I can create a mirror with 1, 2, 3 or 4 columns as long as I don't set enclosure awareness. As soon as I try to enable enclosure awareness, it gives me the error message.

Any help with how this could be set up, or what I am doing wrong, is appreciated.
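
For reference, this is roughly what I am running in PowerShell (a sketch; the pool name, friendly name and size are placeholders):

# Enclosure-aware two-way mirror with 3 columns (fails); works if -IsEnclosureAware is dropped
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "SOFS-VD1" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 -NumberOfColumns 3 `
    -IsEnclosureAware $true -Size 1TB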


Properly configuring test environment for switching print servers from Production's "print2008" to dev "print2016"

Hello - sorry if this is the wrong forum, but since it deals with printers shared over SMB, I thought it might fit in the "file services" section.

We have 500 users printing via \\print2008 right now, so I can't just stand up the new machine, \\print2016, and immediately add a CNAME in our DNS pointing print2008 at the new print2016 machine until I'm sure everything works.

So I thought that perhaps I could use the hosts file on my office workstation to do a simple alias.

I stick:

<ip address> print2008
<ip address> print2008.fqdn

into the hosts file and just hope it will work.

However, sadly, none of my printers previously mapped to print2008 are working on my workstation post reboot. Whenever I try to print, I get a GUI-based "access is denied" error, and in Event Viewer I get the following:

The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server print2016$. The target name used was host/print2008. This indicates that the target server failed to decrypt the ticket provided by the client. This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using. Ensure that the target SPN is only registered on the account used by the server. This error can also happen if the target service account password is different than what is configured on the Kerberos Key Distribution Center for that target service. Ensure that the service on the server and the KDC are both configured to use the same password.

So, I'm guessing that for security's sake, print2016 is refusing to handle my requests because my workstation is offering them to print2008. I followed what instructions I was able to from here, like turning off strict name checking and turning on DnsOnWire. I also added the BackConnectionHostNames entry. Basically everything that didn't give me an error message, but I couldn't add the dreaded SPN record for Kerberos authentication, because of course I get the "duplicate SPN found, aborting operation!" error.
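
For reference, the changes I applied look roughly like this (a sketch; the value names are my best reading of the article, so treat them as assumptions rather than gospel):

REM On print2016: relax strict name checking so the server answers to the alias
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1

REM On print2016: the printing-specific alias setting
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Print" /v DnsOnWire /t REG_DWORD /d 1

REM On print2016: allow loopback authentication against the alias
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d print2008

REM Find which account currently holds the colliding SPN
setspn -Q host/print2008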

Of course there's a duplicate SPN; print2008 is being used by 500 users!

If I was 100% sure that everything was working great on print2016, I'd go ahead and just make the changes in DNS, like adding the CNAME record that the link in the previous paragraph describes, but I can't risk anything happening to print2008 while it's in active use.

Can anyone recommend a good way to set up my testing environment so I can basically fool my workstation into being okay with sending "bad" Kerberos tickets to print2016, and have print2016 be okay with accepting them?

Thanks.

DFSRs errors

Hello,

I found many DFS-related errors in our Event Viewer.
Do you know any solutions, or what the possible cause could be?
When I check the disks, they are all healthy.

DFSRs (9420) \\.\D:\System Volume Information\DFSR\database_76D2_8381_D283_43F9\dfsr.db: A request to write to the file "\\.\D:\System Volume Information\DFSR\database_76D2_8381_D283_43F9\dfsr.db" at offset 1335164928 (0x000000004f950000) for 40960 (0x0000a000) bytes has not completed for 36 second(s). This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem.

DFSRs (9420) \\.\D:\System Volume Information\DFSR\database_76D2_8381_D283_43F9\dfsr.db: A request to write to the file "\\.\D:\System Volume Information\DFSR\database_76D2_8381_D283_43F9\dfsr.db" at offset 378175488 (0x00000000168a8000) for 32768 (0x00008000) bytes has not completed for 36 second(s). This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem.

DFSRs (11080) \\.\D:\System Volume Information\DFSR\database_76D2_8381_D283_43F9\dfsr.db: A request to write to the file "\\.\D:\System Volume Information\DFSR\database_76D2_8381_D283_43F9\fsr.log" at offset 741376 (0x00000000000b5000) for 4096 (0x00001000) bytes succeeded, but took an abnormally long time (18 seconds) to be serviced by the OS. This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem.
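
In case it helps, I was planning to watch the write latency on D: while the errors occur, with something like this (a sketch):

# Sample average write latency on D: every 5 seconds; values are in seconds per write
Get-Counter -Counter '\LogicalDisk(D:)\Avg. Disk sec/Write' -SampleInterval 5 -Continuous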

Storage Spaces Direct and Diskpart Automount Policy

We're using Storage Spaces Direct as a disk target for backup storage.  Our backup product has a direct SAN-attached method for backing up VMware virtual machines, versus using a proxy that uses HotAdd to send the data over the network.  This is where the Windows OS has block-level access to the storage but doesn't need to mount it.  My intent was to do this with HBAs in the physical machines hosting the Storage Spaces Direct volumes.

The requirement for the direct SAN-attached storage is to disable the automount feature in Windows.  There are a variety of ways to do this; I would use DISKPART:

DISKPART> AUTOMOUNT DISABLE

What I can't find is any documentation on whether disabling automount is a supported configuration in a Storage Spaces Direct cluster.  The only thing I know for sure is that the cluster as currently set up has this feature enabled.

I think it would be OK to disable it, since the volumes are controlled by the cluster service, and I'm fairly certain that handles the mounting of volumes for CSVs and any other virtual disks directly associated with cluster-controlled roles.
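
For reference, the one-line equivalent I'd use outside DISKPART (as far as I know, mountvol /N flips the same switch):

REM Disable automatic mounting of new volumes (equivalent to DISKPART's AUTOMOUNT DISABLE)
mountvol /N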

I'm also not sure this forum is the correct one for Storage Spaces Direct questions, and I'd appreciate a pointer in the right direction on where to post if it isn't.


Deny Format of Drives for Admins

Hi

We have multiple LUNs / drives on Windows servers used as backup repositories. We removed all NTFS permissions from these drives except for SYSTEM, and backup works fine. Our concern now is how to deny anyone, including admins, the ability to encrypt or format the disks (this happened before, during a malware outbreak). An option or a workaround for this would be great.
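
For reference, the hardening we applied per drive looks roughly like this (a sketch, assuming D: is one of the repositories); as far as I can tell, though, NTFS ACLs don't stop someone with admin rights from formatting the volume, since format works against the device itself, and that's exactly the gap we want to close:

REM Strip inherited ACEs and leave only SYSTEM with full control
icacls D:\ /inheritance:r /grant "NT AUTHORITY\SYSTEM:(OI)(CI)F"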

Thanks in advance


LMS

How to use Robocopy to sync two shared folders while users are accessing them

Hi,

We have two shared folders on a server. One is the primary, which is shared with users and almost full; the other is not shared yet, but I want to move all the data from the primary into it. Some of the data has already been partially moved. How can I use robocopy to sync the new folder with the primary one, so that I don't get duplicates of the files already in the new folder, and so that any changes being made in the primary folder are synced to the new folder while users are still using the primary share? The new folder will eventually become the share for all users.
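
A minimal sketch of the kind of command I have in mind (paths are placeholders). Robocopy skips files that are already identical at the destination, so re-running it should not create duplicates, and /MIR keeps the new folder in step with the primary (including deleting files that were removed from the primary, so it should only ever run in this direction):

robocopy "D:\Shares\Primary" "E:\Shares\NewShare" /MIR /COPYALL /R:1 /W:1 /MT:16 /LOG:C:\Logs\sync.log

I'd re-run it periodically while users work, then once more just before the cutover; open files will log sharing violations and get picked up on the next pass.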

Thanks

Access-based enumeration not working in Windows Server 2016

We have enabled ABE on a shared folder, but it is not being applied: other users can still see our folder names.
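
For anyone checking the same thing, this is roughly how I'd confirm ABE is actually set on the share (a sketch; the share name is a placeholder):

# Check, and if needed set, access-based enumeration on the share
Get-SmbShare -Name "Shared" | Select-Object Name, FolderEnumerationMode
Set-SmbShare -Name "Shared" -FolderEnumerationMode AccessBased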

How to disable / decrease MPIO cache

Hi, 

Testing out a new backup repository.

I have a QNAP NAS with 12 disks. It's currently connected to my Aruba switch via 2x 1 Gbit links. My backup server is physical, with a 4x 10k RAID 5, running Win2012r2. The backup server is also connected to the same switch with 2x 1 Gb links.

The QNAP is an iSCSI target, and MPIO has been enabled on the backup server.

When I copy a 50 GB test file from the backup server to the QNAP, the progress bar shows a speed of 350 MB/s, which is impossible. After the progress bar shows that the file is copied (after ~3 mins), a good portion of the file is still in the memory of the backup server, and both NICs are still transferring ~1 Gbps for about two minutes after the file was supposedly copied. You can see the memory consumption first rising by 20 GB, then slowly dropping back to normal. On the receiving end, the QNAP shows a transfer speed of 200 MB/s the whole time.

Is there some kind of built-in caching in MPIO? Is there a way to disable it, or significantly lower the amount of cache?


Edit: In devmgmt -> Disk drives -> QNAP iSCSI Storage Multi-Path Disk Device -> Policies, "Enable write caching on the device" is not checked.
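
As far as I know, MPIO itself has no data cache; what I'm describing looks more like the Windows file-system cache doing write-behind. One test I'm considering is an unbuffered copy, which should bypass that cache (a sketch; the paths are placeholders, with Q: being the mounted iSCSI volume):

REM /J copies using unbuffered I/O, bypassing the Windows cache (recommended for very large files)
robocopy "D:\Backups" "Q:\Backups" bigfile.vbk /J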

Problems with shared Excel files disappearing with DFSR on Server 2008 R2

Hello all,

We have a problem when saving Excel files that are "shared" (multiple-user editing) in a DFS-replicated network folder.

Environment:
2 x Server 2008 R2 (hosting shared Excel files in a network share, with two-way DFS replication between them)
The files are being edited on a single primary server ONLY, secondary server is a passive backup and does not accept connections if primary is available.
Users use mainly Excel 2010 on Windows 7, and also some use Excel 2007 on Vista.

The issue sometimes occurs when users are saving a shared Excel file: the original file is deleted, and the TEMP file is not renamed to the original name, remaining with a name like "2BE72000". I haven't been able to reproduce the issue myself, but apparently this is happening quite often (we have several thousand Excel files open at a time, and we get a few cases per day).

The change that clearly triggers this is DFSR replication. If DFSR replication is turned on for the server folder containing the shared Excel files, the issues start, and they go away if DFSR is turned off. We have no problems at all with ordinary, non-"shared" Excel files.

Also, no event (no sharing violation, no conflict resolution or other) is logged in the DFSR log when the original file is deleted and the temp file is not renamed.

Why is Excel sometimes not renaming its temp files? As far as I understand, DFSR does not try to replicate a file if it's still in use (we get quite a few sharing violations). Therefore, if Excel still has a lock on the file, DFSR should not intervene and should wait for Excel to finish what it was doing. Or am I wrong?

We have a varying backlog size, usually 0-20 items; at peak times on some days it rises for short periods to 1000 or more items. I have not noticed any correlation between the backlog size and the frequency of the shared Excel issues. Backlog size and replication delay do not cause a problem for us, as long as no data is lost.

The things I have already tried after searching and not finding anything that fits on the net:
Tuned up DFSR:
http://blogs.technet.com/b/askds/archive/2010/03/31/tuning-replication-performance-in-dfsr-especially-on-win2008-r2.aspx
Started a thread on Excel TechNet: http://social.technet.microsoft.com/Forums/en-US/excel/thread/4e7bd78b-dc72-4dbd-a459-edabc44acbfd
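One more thing I'm looking at is the DFSR file filters, to see whether the extensionless temp names (like "2BE72000") could even be excluded; roughly like this, run from a 2012 R2 or newer admin box since 2008 R2 predates the DFSR PowerShell module (a sketch; the group name is a placeholder):

# Show the current file filters on the replicated folder (defaults are ~*, *.bak, *.tmp)
Get-DfsrReplicatedFolder -GroupName "FileShares" | Select-Object FolderName, FileNameToExclude
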

Thank you for your help!
Sincerely
Vince

Will Storage Replica Source Log wrap during a large data copy?

Hi,

If I set up Storage Replica with asynchronous replication and then start copying data to the replicated volume, it will write to the source log volume and then to the source data volume. The default size of the source log is 9 GB. If I write data so quickly that this log fills, my understanding is that it will 'wrap', which will result in a bitmap replay from source --> target. I need to understand this a bit better.

  1. Is the length of time required for a bitmap replay related to the amount of new data, the total amount of data, or the size of the volume?
  2. If I write data so quickly that the log continually wraps, does it restart the bitmap replay?
  3. What is the best practice for copying data to the volume?

So my question is really about seeding. I am aware that I can seed to a local volume and then ship the disks or a backup to the destination (remote) location; however, this is not always practical. Assuming I can't do that, which would be better for speed and network performance:

  1. Set up SR, then copy the data to the source? (as above)
  2. Copy the data to the source, then set up SR?
  3. Copy the data to the source, robocopy /mir to the destination, then set up SR?

There will be terabytes of data.
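
If option 3 is viable, my understanding is that the seeded flow would look something like this (a sketch; server, volume and replication-group names are placeholders):

# Pre-seed the destination, then create the partnership telling SR the data is already there
robocopy E:\Data \\srv2\E$\Data /MIR /COPYALL
New-SRPartnership -SourceComputerName "srv1" -SourceRGName "rg01" -SourceVolumeName "E:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "srv2" -DestinationRGName "rg02" -DestinationVolumeName "E:" -DestinationLogVolumeName "L:" `
    -ReplicationMode Asynchronous -Seeded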

Thx,

Simon.


Replacing a Failed Disk in Windows Server 2012 R2 Storage Spaces two way mirror layout

Guys,

Has anyone come across drive replacement in a Windows Server 2012 R2 Storage Spaces two-way mirror layout? It's more confusing than the parity layout, where it is simply a matter of adding a new disk, marking the existing one as retired, and then removing it, which automatically starts the repair. I can't find much detail about the two-way mirror layout.
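
For what it's worth, the sequence I've pieced together so far looks like this (a sketch, assuming the failed disk shows as "PhysicalDisk5" and a replacement has been inserted; I haven't confirmed it's the blessed procedure for two-way mirrors):

# Add the replacement disk to the pool, retire the failed one, rebuild, then remove it
Add-PhysicalDisk -StoragePoolFriendlyName "Pool1" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
Set-PhysicalDisk -FriendlyName "PhysicalDisk5" -Usage Retired
Get-VirtualDisk | Repair-VirtualDisk
Remove-PhysicalDisk -StoragePoolFriendlyName "Pool1" -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk5")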

Storage Spaces extremely slow when moving data

Hi,

On my server at home I have created a Storage Pool to store all my data (photos, videos, etc.). This is my config:

Dell R710
Dual Xeon X5675
Dell Perc H310
144GB RAM
Windows Server 2016

I've currently added two 2 TB HDDs, created a Storage Pool with a mirrored vDisk, and then created a volume.

I started to move my data over the gigabit network from my home PC, but after I had copied 10 GB of data, the speed dropped from 110-115 MB/s to 6-7 MB/s.
I also tried moving the hard disk holding the data into the server and copying to the Storage Pool locally, but again the same thing happened.

Now, I know that Windows Server 2016 isn't supported on the R710, but I have also tried Windows Server 2012 R2, and the same problem happened.

Do you know what the problem could be? When this happens, if I check Task Manager there's nothing strange: CPU is at 2-3%, and the same goes for RAM.
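
For what it's worth, next time it happens I was going to watch the physical disks rather than Task Manager, with something like this (a sketch):

# Watch per-disk write latency and queue depth while the copy is slow
Get-Counter -Counter '\PhysicalDisk(*)\Avg. Disk sec/Write','\PhysicalDisk(*)\Current Disk Queue Length' -SampleInterval 2 -Continuous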

Am I doing something wrong?

Thanks

Lost communication on a storage pool

Hello all,

I just started working with storage pools.

I have one 4 TB drive and four 1 TB drives.

I made a simple pool combining the four 1 TB drives.

I pulled out one of the drives (hot swap), and after putting it back in, the virtual disk was at 0 bytes.

So I looked at the pool and noticed that the drive I pulled out and put back in shows Lost Communication.

I have done a reboot to try to get it back online.

The drive is still good.

I did not find anything about Lost Communication on a simple layout.

I am looking for any way to get this back to a healthy status.
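
The closest thing I've found to try is resetting the physical disk's status and then repairing, something like this (a sketch; the friendly name is a placeholder):

# Identify the disk the pool thinks is gone, reset its status, then repair the virtual disk
Get-PhysicalDisk | Select-Object FriendlyName, OperationalStatus, HealthStatus
Reset-PhysicalDisk -FriendlyName "PhysicalDisk3"
Get-VirtualDisk | Repair-VirtualDisk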

Thanks, Kurt


W2012R2 Storage Pool Healthy Drive shown as Lost Communication

Hello all. I hope someone can help.

Our server (W2012R2) crashed. It has a Tiered storage pool with 2 x SSD and 2 x HDD.

On reboot after the crash, the storage pool went into read-only mode, as both SSDs in the pool failed with a "lost communication" error. The drives show up just fine in Disk Management, and their SMART parameters are good.

It seems as if the pool and the drives have amnesia and have forgotten they were made for each other.

This is a production File Server and we urgently need it back up. Very much appreciate any assistance or leads provided.

Unfortunately I cannot post images until Microsoft verifies my account.

-----

I've done the following to no avail (in the hope that I can at least access the data in the pool so I can copy the VM to another location and start it):

# Clear the read-only flag on the pool
Get-StoragePool <PoolName> | Set-StoragePool -IsReadOnly $false

# Bring the virtual disk's disk online, but keep it read-only to protect the data while copying
Get-VirtualDisk -FriendlyName "Disk Satu" | Get-Disk | Set-Disk -IsReadOnly $true

Get-VirtualDisk -FriendlyName "Disk Satu" | Get-Disk | Set-Disk -IsOffline $false

# Attempt a repair of the virtual disk
Repair-VirtualDisk -FriendlyName "Disk Satu"

-----

I've tried adding drives, and it fails because the pool's operational state is read-only, even when the IsReadOnly flag is false.

I've considered forcefully retiring the SSDs (one or both), but I am concerned this will destroy the integrity of the overall pool.
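
Before forcibly retiring anything, the next thing I'm considering is resetting the SSDs' status so the pool re-recognises them (a sketch; the friendly names are placeholders):

# Try to clear the "lost communication" state on each SSD, then repair
Reset-PhysicalDisk -FriendlyName "SSD1"
Reset-PhysicalDisk -FriendlyName "SSD2"
Get-VirtualDisk -FriendlyName "Disk Satu" | Repair-VirtualDisk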


Server 2012 NFS & nfsnobody problem

We have a fresh 2012 server with some local storage that we want to share with some Linux users.

We don't (yet) have an identity mapping source (AD or a User Name Mapping server); instead we are relying on group and passwd files in c:\windows\system32\drivers\etc\.

After a bit of persuading we managed to get it to accept this passwd file:

root:x:0:0:root:/root:/bin/bash
username:x:nnnnn:ggggg:lastname, firstname:/home/UK/username:/bin/bash

where nnnnn is my id and ggggg is the GID of 'domain users' for my AD domain

The group file looks like this:

it:x:ggggg:

where ggggg matches the number in the passwd file.

Restarting the Server for NFS service and checking Event Viewer (identity mapping) shows this:

C:\Windows\System32\drivers\etc\group will be used as a mapping source.

and

C:\Windows\System32\drivers\etc\passwd will be used as a mapping source.

And a warning:

Server for NFS is not configured for either Active Directory Lookup or User Name Mapping.

(Which I think we should expect as we know that we have not configured either of these)

From a test Linux box, logged on as the user in the passwd file, I can create files (test1 and test2 below):

-rwxrwxrwx 1 nfsnobody nfsnobody 0 Jan 31 11:09 qaef.txt
-rw-r--r-- 1 username nfsnobody 0 Jan 31 14:55 test1
-rw-r--r-- 1 username nfsnobody 0 Jan 31 14:56 test2

You can see the 'nfsnobody' entries above; my Linux friend here suggests that the group entry may be wrong in some way, but can't see how.
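
In case the format is the problem, one theory we want to test is that the group line needs the user listed as a member, i.e. something like this (an assumption on our part, not something we've confirmed):

it:x:ggggg:username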

Any help appreciated, Thanks
Andy



Storage Spaces Direct Dirty Count Greater Than Limit - Volume is Online

From reading up on troubleshooting, and from a long support call, the Storage Spaces DRT counters for a Storage Spaces Direct volume are never supposed to be greater than the limit; if the limit is breached, the volume is supposed to stay offline.  I have noticed that some volumes go offline when I reboot a cluster node after putting it into maintenance in Failover Cluster Manager.  I haven't had an opportunity to try it yet, but there are a few commands to run.

Get-StorageFaultDomain -type StorageScaleUnit | Where-Object {$_.FriendlyName -eq "cl-hv03"} | Enable-StorageMaintenanceMode
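
Plus, I assume, its counterpart once the node is back up:

Get-StorageFaultDomain -type StorageScaleUnit | Where-Object {$_.FriendlyName -eq "cl-hv03"} | Disable-StorageMaintenanceMode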

https://social.technet.microsoft.com/Forums/en-US/4fc1fb86-61fa-4976-8b3f-9e314586fef8/storage-spaces-direct-cluster-virtual-disk-goes-offline-when-rebooting-a-node?forum=winserverClustering

I have KB4462928 installed on all the cluster nodes as well.

Where this really gets interesting is that I have a volume that is online with a dirty count greater than the limit. I'm not sure how this is possible, or whether there is a way to clear the DRT backlog.  Even when nothing is writing to the volume, it stays above 270 when the limit is 255.

I'd post links and screen shots but my account hasn't been verified yet.

Server 2016 Work Folders and Windows 10 1803 Sync Question

We've just deployed Work Folders in our organisation, and so far we're loving it. It solves a lot of issues and enhances our users' experience.

I have a question about .ini files. 

We have essentially created a work folder on their laptops containing C:\Users\Username\Work Folders\...

  • Documents
  • Desktop
  • Downloads

We've used Group Policy folder redirection to point the associated folders at this new location. This allows remote workers to save and open files without the files going over the WAN every time, as we've set them to always be available whilst offline.

One big problem we have at the moment is with the sync icons. In File Explorer, Desktop, Downloads and Documents show blue sync arrows; if I click into those folders, everything has green ticks. This simple GUI glitch makes users think things aren't synchronising correctly. My understanding is that this behaviour is caused by the desktop.ini file: Work Folders ignores .ini and .tmp files, so if a user has one, the top-level folder shows as continuously syncing even though it's not.

Now, I'm new to this feature, and as far as I understand it has been around since Windows Server 2012 R2. Does this always happen? Is it normal behaviour for the top-level folders to show the blue sync arrows even though everything inside has synced, or is it a relatively new GUI bug that affects Windows 10 1803?

You can replicate this behaviour simply by adding a .ini file inside a work folder. The top-level folder will show the syncing icons, while the sync control panel shows green - synchronised.

Does anybody have a workaround so I don't throw the users' experience off? I'm getting support calls because they think it's not syncing.

Thank you

ROBOCOPY new user questions

Greetings,

When running robocopy I am experiencing the following. Can you correct/explain what is happening, please?

1. I get an extra directory under the total column and 1 skipped directory, yet the number of files in the total and copied columns is the same. Note the number of bytes is also the same.

2. One of my directories appears to copy everything but then hangs in some kind of loop, necessitating a restart of robocopy. I am never confident that the copy is OK. How can I identify the problem?

3. The options listed in the log are different from those specified.

robocopy "E:\mmarion.4" "D:\mmarion.4" /E /256 /NC /NFL /NS /NDL /NP /TEE

becomes

Options : *.* /256 /NS /NC /NDL /NFL /TEE /S /E /DCOPY:DA /COPY:DAT /NP /R:1000000 /W:30
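
(On point 3, my working theory is that the log simply echoes robocopy's defaults back: /COPY:DAT, /DCOPY:DA, /R:1000000 and /W:30 are the built-in defaults when not overridden, and /E implies /S. To chase point 2 I was going to try a list-only pass with sane retry limits, a sketch:)

REM Dry run: /L lists what would be copied without touching anything
robocopy "E:\mmarion.4" "D:\mmarion.4" /E /256 /NC /NFL /NS /NDL /NP /TEE /L /R:1 /W:1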

Thank you. MM



iSCSI Initiator issues on Server 2019

Hello, I'm trying out Windows Server 2019 and I'm having some problems with the iSCSI Initiator. After several hours of debugging, also against 2016, I managed to pinpoint that the following WMI query fails only on 2019:

Get-WmiObject -Namespace root\wmi -Query "SELECT * FROM MSISCSIInitiator_TargetClass"

with the following error:

Get-WmiObject: Provider load failure

I haven't looked much further into why this failure occurs, but is it possible that something got broken in 2019?
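
In case it's useful for reproducing this, the cross-checks I was going to run on both versions (a sketch): the same class via CIM, and the dedicated cmdlet that I'd expect to exercise the same provider:

# Same WMI class via CIM, plus the iSCSI cmdlet as a cross-check
Get-CimInstance -Namespace root\wmi -Query "SELECT * FROM MSiSCSIInitiator_TargetClass"
Get-IscsiTarget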

Thanks in advance!
Bareld

How can I generate a list with no file changes using Robocopy to show EXTRA and New files between two directories?

I want to do a file comparison between two directories and warn via email if file differences are found, either new or extra files.

I don't want to update the directories in any way, but get the list to a text file.

I use Robocopy already in PowerShell scripts, so I would love to know how to accomplish the above via a PS script.

Or, if there is a way to do this in C#, I could use that too.
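
Roughly what I have in mind so far, in case it helps frame the question (a sketch; paths and mail settings are placeholders):

# List-only comparison: /L makes no changes, /X reports extra files, /LOG captures the output
$log = "C:\Temp\diff.txt"
robocopy "D:\DirA" "D:\DirB" /E /L /X /NJH /NP /LOG:$log

# Robocopy exit codes 1-7 indicate differences were found; 0 means the trees match, 8+ means errors
if ($LASTEXITCODE -ge 1 -and $LASTEXITCODE -le 7) {
    Send-MailMessage -From "robocopy@example.com" -To "me@example.com" -Subject "Directory differences found" `
        -Body (Get-Content $log -Raw) -SmtpServer "smtp.example.com"
}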

Any help greatly appreciated!!
