Channel: File Services and Storage forum

VSS file copy of SMB share over Storage Spaces Direct


Folks

I want to implement Storage Spaces Direct across 2 servers. On top of this storage I want to implement the file service (an SMB file share). I then want to snapshot this file system on a schedule using vssadmin.

Does this work?

Is it possible to pick up the earlier versions of files that have been changed (and stored in a snapshot) and redirect them to another place to be copied to an offline store?

Does anyone have any experience with this?
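
A minimal sketch of the scheduling part, assuming the share lives on volume E: of the S2D file server and that plain vssadmin snapshots are acceptable (the task name and time are placeholders):

# Create a daily VSS snapshot of E: via a scheduled task running as SYSTEM
$action  = New-ScheduledTaskAction -Execute 'vssadmin.exe' -Argument 'create shadow /for=E:'
$trigger = New-ScheduledTaskTrigger -Daily -At 7am
Register-ScheduledTask -TaskName 'Snapshot-E' -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest

Snapshots created this way appear under the share's Previous Versions tab, from which changed files can be copied out to another location; whether that is robust enough for an offline store is exactly the kind of experience report the question asks for.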


Storage Replica - change replica network - Set-SRNetworkConstraint


Hello,

I am trying to move my Storage Replica traffic to a dedicated replication network.

On Server 1 the replication network has InterfaceIndex 5.

On Server 2 the replication network has InterfaceIndex 3.

From Server1 I run:

Set-SRNetworkConstraint -SourceComputerName "Server1" -SourceRGName "Server01rg01" -SourceNWInterface 5 -DestinationComputerName "Server2" -DestinationRGName "Server02rg01" -DestinationNWInterface 3

but I get an error:

Set-SRNetworkConstraint : The network constraint for the replication group Server1 cannot be updated.
At line:1 char:1
+ Set-SRNetworkConstraint -SourceComputerName Server1 -SourceRGName Server1 ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ResourceUnavailable: (MSFT_WvrAdminTasks:root/Microsoft/...T_WvrAdminTasks) [Set-SRNetwo
   rkConstraint], CimException
    + FullyQualifiedErrorId : Windows System Error 64,Set-SRNetworkConstraint
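
Windows System Error 64 is "The specified network name is no longer available", which often points more at connectivity between the nodes on the chosen interfaces than at the cmdlet syntax. A hedged checking sketch (the interface indexes and group names are the ones quoted above; the indexes are passed as strings and must be the index local to each respective server):

# Confirm the local ifIndex values and the exact replication group names on both nodes
Get-NetAdapter -CimSession Server1, Server2 | Select-Object PSComputerName, Name, ifIndex, Status
Get-SRGroup -ComputerName Server1 | Select-Object Name
Get-SRGroup -ComputerName Server2 | Select-Object Name

# Then retry the constraint with the verified values
Set-SRNetworkConstraint -SourceComputerName "Server1" -SourceRGName "Server01rg01" -SourceNWInterface "5" `
    -DestinationComputerName "Server2" -DestinationRGName "Server02rg01" -DestinationNWInterface "3"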

Any ideas?

Regards,

alpina

Storage Spaces Direct - unable to configure Journal (cache) drives


I am trying to configure Storage Spaces Direct (S2D) on Dell R730xd servers in a 3-node cluster. Each server has 2 SSDs and 4 SATA drives dedicated to the S2D pool. I have a simple pass-through HBA controller, the Dell HBA330, recommended by Dell for S2D solutions.

The problem is that I cannot set the SSD drives as cache (journal) drives.

Cluster validation for S2D is successful.

Manually setting the SSD drives as Journal has no effect.

Do you have any suggestions?
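
A hedged sketch of the manual route, assuming the disks are already in the S2D pool and the HBA330 presents them cleanly (the FriendlyName filter is a placeholder for the actual SSD model):

# See what the HBA reports for each disk
Get-PhysicalDisk | Format-Table FriendlyName, MediaType, BusType, Usage, CanPool
# If the SSDs report MediaType 'Unspecified' (seen behind some HBAs), set it explicitly
Get-PhysicalDisk | Where-Object FriendlyName -like '*<your SSD model>*' | Set-PhysicalDisk -MediaType SSD
# Then tag them as journal (cache) disks
Get-PhysicalDisk | Where-Object FriendlyName -like '*<your SSD model>*' | Set-PhysicalDisk -Usage Journal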


-- Konrad Puchala

Forum FAQ: Temporary files are not replicated by DFSR


Summary

 

Someone may notice that DFS Replication (DFSR) is not replicating certain files even though most other files replicate successfully; the reason is that the temporary attribute is set on these un-replicated files.

 

By design, DFSR does not replicate files that have the temporary attribute set, and it cannot be configured to replicate them. The reason is that such files are considered short-lived files that you would never actually want to replicate. Setting the temporary attribute on a file keeps that file in memory and saves disk I/O, so applications can use it on short-lived files to improve performance.

 

Symptom

 

Suppose you have set up DFS shares and a DFS Replication group between two or more DFS replication servers. Most of the content under the DFS target folder replicates to the other DFS servers; however, a few files are not replicated.

 

When you use fsutil to check an un-replicated file, you will see the temporary attribute set on the file.

For example: Checking the Temporary Attribute on a File

fsutil usn readdata c:\data\test.txt

 

Major Version : 0x2
Minor Version : 0x0
FileRef# : 0x0021000000002350
Parent FileRef# : 0x0003000000005f5e
Usn : 0x000000004d431000
Time Stamp : 0x0000000000000000 12:00:00 AM 1/1/1601
Reason : 0x0
Source Info : 0x0
Security Id : 0x5fb
File Attributes : 0x120
File Name Length : 0x10
File Name Offset : 0x3c
FileName : test.txt

File Attributes is a bitmask that indicates which attributes are set. In the above example, 0x120 indicates the temporary attribute is set, because 0x100 (TEMPORARY) + 0x20 (ARCHIVE) = 0x120.

Here are the possible values:

READONLY               0x1
HIDDEN                 0x2
SYSTEM                 0x4
DIRECTORY              0x10
ARCHIVE                0x20
DEVICE                 0x40
NORMAL                 0x80
TEMPORARY              0x100
SPARSE_FILE            0x200
REPARSE_POINT          0x400
COMPRESSED             0x800
OFFLINE                0x1000
NOT_CONTENT_INDEXED    0x2000
ENCRYPTED              0x4000

 

Resolution

 

Removing the Temporary Attribute from Multiple Files with PowerShell

 

To remove the temporary attribute, we can use PowerShell. Open a PowerShell prompt (from the Start menu, or Start, Run, powershell) and run this command to remove the temporary attribute from all files in the specified directory, including subdirectories (in this example, D:\Data):

 

Get-childitem D:\Data -recurse | ForEach-Object -process {if (($_.attributes -band 0x100) -eq 0x100) {$_.attributes = ($_.attributes -band 0xFEFF)}}

 

Note: If you don't want it to work against subdirectories, just remove the -recurse parameter.
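
The same operation written with the named .NET attribute flag instead of the raw bitmask, in case that is easier to read (same example path, D:\Data; this is an equivalent sketch, not a different fix):

Get-ChildItem D:\Data -Recurse | ForEach-Object {
    if ($_.Attributes -band [IO.FileAttributes]::Temporary) {
        $_.Attributes = $_.Attributes -band (-bnot [IO.FileAttributes]::Temporary)   # clear 0x100
    }
}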

 

More Information

 

DFSR Does Not Replicate Temporary Files

http://blogs.technet.com/askds/archive/2008/11/11/dfsr-does-not-replicate-temporary-files.aspx

 

Applies to

 

Windows Server 2008, Windows Server 2008 R2

Storage Replica + VSS = problem


Hello

I have two servers with Windows Server 1803. Currently I have a volume of 60TB (replicated via Storage Replica) between these servers and with VSS configured on the same volume. This setup worked for a few days, and now every time the VSS service tries to take a snapshot of the volume, the server hangs and I'm forced to reboot or wait a few hours until I regain control of my server. In the Event Viewer I have the following message:

VssAdmin: Unable to create a shadow copy: The shadow copy provider timed out while flushing data to the volume being shadow copied. This is probably due to excessive activity on the volume. Try again later when the volume is not being used so heavily.

Except that the snapshot runs overnight without any user access, and no other parallel jobs. My VSS configuration:

For volume: (E:) \\?\Volume{f26d0547-1ad9-4080-866e-24f02752ac93}\
   Shadow Copy Storage volume: (E:) \\?\Volume{f26d0547-1ad9-4080-866e-24f02752ac93}\
   Used Shadow Copy Storage space: 27.4 GB (0%)
   Allocated Shadow Copy Storage space: 74.6 GB (0%)
   Maximum Shadow Copy Storage space: 9.00 TB (15%)

I confess I do not understand where the problem comes from. Any idea? Do I need to configure VSS on another volume?
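
One hedged interpretation of "configure VSS on another volume" is to move the shadow-copy diff area for E: onto a separate, non-replicated volume so the copy-on-write traffic does not land on the volume Storage Replica is logging. A sketch (F: as the target volume is an assumption; note that moving the storage discards the existing shadow copies on E:):

vssadmin delete shadows /for=E: /all /quiet
vssadmin delete shadowstorage /for=E: /on=E: /quiet
vssadmin add shadowstorage /for=E: /on=F: /maxsize=9TB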

Sorry for my bad english.

Thank you.

msDFS-NamespaceAnchor missing.


Hi,

One of our namespaces is returning 'element not found'. When investigating, I found that the msDFS-NamespaceAnchor object is no longer in AD.

I am unsure how to get this back, or why it vanished in the first place. We have two more namespaces running fine.

The DFS namespace is on a standalone server and does not replicate.

Any help would be greatly appreciated
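
A hedged way to see what is still registered in AD, assuming the broken namespace is actually a domain-based namespace in Windows Server 2008 mode (the anchor objects live under the Dfs-Configuration container; requires the ActiveDirectory module):

Import-Module ActiveDirectory
Get-ADObject -LDAPFilter '(objectClass=msDFS-NamespaceAnchor)' `
    -SearchBase ("CN=Dfs-Configuration,CN=System," + (Get-ADDomain).DistinguishedName) |
    Select-Object Name, DistinguishedName

Comparing that list against the namespace that throws "element not found" should at least confirm whether the anchor alone is missing or the whole namespace object tree is gone.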

SQL Windows Authentication Issues

Recently we have changed our SQL Access from SQL Accounts to Windows Authentication.  We are using Active Directory groups to grant rights to users throughout our company.  Recently we have had two instances where an individual user is unable to access a particular database but other users within that same AD Group are able to access the database.  We have verified that the AD Groups are applying to the users that are unable to access the database and we have had them try to login to the database on different workstations.  Wondering if anyone has seen an issue similar to this or might have some additional insight into the issue at hand.
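
A hedged first check, run as the affected user on their workstation, is whether the access-granting AD group is actually present in the logon token (the group name below is a placeholder); a stale Kerberos token, where the user was added to the group after logging on, behaves exactly like this and is cleared by logging off and on or purging tickets:

whoami /groups | findstr /i "SQL-Database-Access"
klist purge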

Is NTFS range tracking working at all?


Hi,

On a Windows 10 client, range tracking is enabled on NTFS volume E:. From the query (DeviceIoControl(FSCTL_QUERY_USN_JOURNAL)), RangeTrackChunkSize is 16KB and RangeTrackFileSizeThreshold is 1MB. For a file larger than 2MB, if the first 2 bytes and the last 2 bytes are modified before the file is closed, I would expect two extents to be reported. But there is only one extent: any change inside the large file is reported as a single change starting at offset 0, with the file size as the extent length. This is NOT correct. See the logs below (from my own program) for details.

// Log starts below...

Range tracking is enabled on this journal since USN 760

Journal Info...
MinSupportedMajorVersion=2
MaxSupportedMajorVersion=4
RangeTrackChunkSize=16384
RangeTrackFileSizeThreshold=1048576
FirstUsn: 0
NextUsn: 8992

======USN Record V3======
USN: 8672
File name: large.txt
Reason: 4

======USN Record V3======
USN: 8752
File name: large.txt
Reason: 6

======USN Record V4======
USN: 8832
Reason: 80000006
RemainingExtents: 0
NumberOfExtents: 1
ExtentSize: 16
Extent 1: Offset: 0, Length: 2129920

======USN Record V3======
USN: 8912
File name: large.txt
Reason: 80000006

Press any key to continue..
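
For anyone wanting to reproduce this without a custom Win32 program, a rough fsutil-based sketch of the same scenario (volume and file name taken from the log above; fsutil usn readjournal dumps the records, though it may not print the V4 extent details as richly as the program output):

fsutil usn enablerangetracking c=16384 s=1048576 E:
$f = [System.IO.File]::Open('E:\large.txt', 'Open', 'ReadWrite')
$f.Position = 0;             $f.WriteByte(0x41)   # touch the first byte
$f.Position = $f.Length - 1; $f.WriteByte(0x42)   # touch the last byte
$f.Close()
fsutil usn readjournal E: | Select-String -Context 0,10 'large.txt'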

Thanks,
Jing


DeviceIoControl(FSCTL_USN_TRACK_MODIFIED_RANGES) does not work to change @ChunkSize and @FileSizeThreshold


Hi,

I'm trying to enable the range tracking feature on an NTFS volume on a Windows 10 desktop. Range tracking is enabled, but the chunk size and the file-size threshold can never be changed. They are always 16384 and 1048576, respectively.

For example:

C:\WINDOWS\system32>fsutil usn queryjournal e:
Usn Journal ID   : 0x01d4606e19f40518
First Usn        : 0x0000000000000000
Next Usn         : 0x0000000000000668
Lowest Valid Usn : 0x0000000000000000
Max Usn          : 0x7fffffffffff0000
Maximum Size     : 0x0000000000400000
Allocation Delta : 0x0000000000100000
Minimum record version supported : 2
Maximum record version supported : 4
Write range tracking: Enabled
Write range tracking chunk size: 16384
Write range tracking file size threshold: 1048576

C:\WINDOWS\system32>fsutil usn enablerangetracking c=1024 s=2048 e:

C:\WINDOWS\system32>fsutil usn queryjournal e:
Usn Journal ID   : 0x01d4606e19f40518
First Usn        : 0x0000000000000000
Next Usn         : 0x0000000000000668
Lowest Valid Usn : 0x0000000000000000
Max Usn          : 0x7fffffffffff0000
Maximum Size     : 0x0000000000400000
Allocation Delta : 0x0000000000100000
Minimum record version supported : 2
Maximum record version supported : 4
Write range tracking: Enabled
Write range tracking chunk size: 16384
Write range tracking file size threshold: 1048576

I did try other numbers, but it turns out that they stay fixed at 16384 and 1048576. No error is returned.

Also, I tried DeviceIoControl(FSCTL_USN_TRACK_MODIFIED_RANGES), which succeeds without error, but the @ChunkSize and @FileSizeThreshold values just don't take effect.

Is this a known issue?
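
One hedged workaround to try (an assumption, not documented behaviour): the chunk size and threshold appear to be fixed once range tracking is active on the journal, so deleting and recreating the USN journal before re-enabling tracking may let new values take effect. Deleting the journal affects every consumer of USN data on the volume, so treat this as a lab experiment:

fsutil usn deletejournal /d E:
fsutil usn createjournal m=4194304 a=1048576 E:
fsutil usn enablerangetracking c=1024 s=2048 E: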

Thanks,
Jing

Make subfolders of 2 shared folders visible with a DFS Namespace


I have two folders shared from one of the servers. They are both mapped in user sessions as two different drive letters.

Now I want to create a new shared folder that includes both of them, and I found out I can use a DFS namespace.

I need one shared folder that contains all the subfolders of both of the other folders, but when I create the DFS namespace and add both folders as targets of the same folder, it only shows the subfolders of one of them.

Am I missing something here? Is this really possible with a DFS namespace? If so, can you please provide the right steps?
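
Multiple folder targets on the same DFS folder do not merge their contents; a client is simply referred to one of the targets, which is why only one set of subfolders shows up. A hedged sketch of the usual approach instead, publishing each share as its own folder under one namespace (domain, server, and share names are placeholders):

New-DfsnRoot -Path '\\contoso.com\Files' -TargetPath '\\FS01\DFSRoot' -Type DomainV2
New-DfsnFolder -Path '\\contoso.com\Files\FolderA' -TargetPath '\\FS01\FolderA'
New-DfsnFolder -Path '\\contoso.com\Files\FolderB' -TargetPath '\\FS01\FolderB'

Users then see \\contoso.com\Files with both FolderA and FolderB beneath it, and the two existing drive mappings can stay in place during the transition.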



"There is not enough space available on the disk(s) to complete this operation"


When I try to expand the disk on my Windows 2008 server I get this error:

 

There is not enough space available on the disk(s) to complete this operation

 

Any suggestions are greatly appreciated.
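
A hedged sketch of the usual sequence when this error appears after growing a LUN or virtual disk on Windows Server 2008: rescan so Disk Management sees the new space, then extend ("select volume 2" is a placeholder; use the number that "list volume" shows for the volume being expanded):

@"
rescan
list volume
select volume 2
extend
"@ | Set-Content "$env:TEMP\extend.txt" -Encoding ASCII
diskpart /s "$env:TEMP\extend.txt"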

 

Thanks

 

AL

Intermittent file share issues Server 2012


Hi Technet people.

We have an intermittent file share issue happening around once a week (each weekend) at seemingly random times; it started around 3 weeks ago.

TIMELINE –

Week 1 – Sunday 07:00 - Issue resolved itself with no action taken, file shares were unavailable for around 15 mins

Week 2 – Saturday 7PM – Failed the cluster services over to passive node, issue cured. Monday 14:43 - Issue resolved itself with no action taken, file shares were unavailable for around 15 mins

Week 3 – Sunday 03:20ish – Failed the cluster services over to passive node, issue cured

The issue –

The file server is set up for only this purpose: a physical Server 2012 machine, clustered, with shared storage.

File shares are intermittently not available (even the admin shares), sometimes they come back on their own after around 15 – 20 minutes.

Troubleshooting during the issue –

During the issue:

  • Unable to UNC to any share (C$ etc.)
  • The server is pingable
  • RDP works
  • All shares show online in Failover Cluster Manager
  • The issue has happened on both nodes; one node is up to date with Windows updates and one is not (2 months behind). During the issue the passive node is not affected.
  • No specific errors in the Windows event log
  • We have another file cluster at a different site with exactly the same OS / roles / hardware / firmware / storage etc., and it is not experiencing any issues.

As I said above sometimes after 15 – 20 minutes the issue resolves itself, or a failover to the passive node resolves the issue. We became aware of this problem as the server hosts a folder redirection share. When the file shares are unavailable this seems to crash computers that use folder redirection, and they are unusable during the outage.

The monitoring tool has not reported any issues with the server; it monitors disks, CPU, memory, and cluster services every 5 minutes.

Google is of no help, there are no errors in the logs, and no other servers show any issues. The current suspect is the antivirus, but the same version is in use on all other servers without presenting any issues.

NetBackup has been running during the issues but again this runs on all other servers and nothing has changed.

Any help or suggestions would be much appreciated

Windows Server 2016 Datacenter Storage Replica Question


Hello,

I have 2 physical servers (with Hyper-V) running Windows Server 2012 Standard in a failover cluster, "Cluster A". Those servers are connected to a SAN storage array, "Storage 1", through a SAN switch.

My plan is to buy two new servers, "Server 3" and "Server 4", and one new storage array, "Storage 2".

I will get the new servers racked and patched to the same SAN switch, then:

1. Install Windows Server 2016 Datacenter on "Server 3" and Windows Server 2016 Standard on "Server 4".
2. Install Hyper-V and Failover Cluster Manager on the new servers.
3. Add the two new servers to the current cluster "Cluster A".
4. Move the VMs and services to the new servers "Server 3" and "Server 4" (live migration).
5. Use Cluster Manager to pause and then evict the old servers "Server 1" and "Server 2" from the cluster.
6. Upgrade the cluster functional level from 2012 to 2016.
7. Connect the new storage "Storage 2" to the cluster "Cluster A".
8. Move the Hyper-V machines from "Storage 1" to "Storage 2".
9. Install Windows Server 2016 Standard on the old servers "Server 1" and "Server 2".
10. Create a new cluster "Cluster B" for "Server 1" and "Server 2".
11. Attach "Storage 1" to Cluster B, which hosts "Server 1" and "Server 2".
12. Enable Storage Replica on the Windows Server 2016 machine "Server 3".



Will my plan work? Should I upgrade all the servers to Windows Server 2016 Datacenter, or can "Server 3" alone be responsible for the Storage Replica?
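
A hedged pre-check before committing to the design: my understanding is that Storage Replica in Windows Server 2016 is a Datacenter-edition feature, so it is worth verifying on every node that will hold a replicated volume, not just "Server 3" (the cmdlets below only report what is installed; server names are the ones from the plan):

'Server3', 'Server4' | ForEach-Object { Get-WindowsFeature -Name Storage-Replica -ComputerName $_ }
Get-CimInstance Win32_OperatingSystem -ComputerName Server3, Server4 | Select-Object CSName, Caption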

Simple and safe way to backup files using Robocopy


Hi

I have about 8 TB of data to back up onto an external drive that has a capacity of 7.3 TB.

What is a simple and safe command to back up all the older files ("/Minage:20180101")?

I want to avoid the problems I have had with the Windows 7 copy (lost date stamps and access issues, for example).

To me it looks like

Robocopy S:\ E:\Backup /E /dcopy:T /minage:20180101

where I want all folders on S: (with files older than 20180101) copied to the folder I have created on the external drive, E:\Backup.

Is that good?

I also want to avoid it stopping mid-process because files are open or for some other reason; it must just skip them. Do I add /R:0 at the end of the command line so it doesn't keep retrying or hang?
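
A hedged sketch of the full command with those concerns folded in (/R:0 /W:0 makes it skip locked files immediately instead of retrying; /COPY:DAT plus /DCOPY:T keeps file and folder timestamps; /XJ avoids looping through junction points; the paths are the ones from the question):

robocopy S:\ E:\Backup /E /COPY:DAT /DCOPY:T /MINAGE:20180101 /R:0 /W:0 /XJ /LOG:E:\robocopy.log /TEE

The same syntax works on both Windows 7 and Windows 10 for these options.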

I am in Windows 7 but would like to know if Windows 10 would be any different.

Thanks

Justin

Windows server 2012 access denied to shared folders even for administrator


Hi

I have a PDC running Windows Server 2012 R2 Datacenter. I have shared some folders and files and gave some users from another domain network access to them, but from clients, when I try to open the share by typing the server IP or computer name in Run, it says access denied even with an administrator account. Sometimes, though, it opens the files without any problem.

Also, I can open the files from systems that have not joined any domain.

Thanks


Cannot access shared folders by IP

On Windows Server 2003, I can't use \\192.168.1.12 to access shared folders; it shows "Windows cannot find file 192.168.1.12". But I can access the shared folders using \\computername. Can somebody help me?

Updates blocking access to File Shares.


Hey all.

I have a server running Windows Server 2012 R2. It recently installed 2 security patches, KB4462926 and KB4462941. Each patch seemed to block access to the shares on the drive for all Windows 10 machines. All other Windows Server OSs seemed fine, and I wasn't able to test older versions of Windows as nothing is below Windows 10 in this office. Once I removed the update the share was accessible again; it then happened again this week with the second update.

Any idea what might be missing on the Windows 10 machines or the server to allow the connections? SMB1 is turned off on everything. I tested turning it on to see if the connection was restored, but it wasn't.

Any suggestions would be useful.

Thanks.

Richard

Identical duplicate found on file share that doesn't actually exist


Hello all,

I've encountered a very interesting problem on one of my clients' file servers. A department folder suddenly has a duplicate folder with the same name. I did discover that there is a trailing white space in the name, so although they appear identical they are not.

The interesting part is that they seem to be the same object. They share metadata (creation date, etc.) and changing attributes on the duplicate actually makes changes on the original. We cannot delete, rename, or move this duplicate folder. Rebooting the server has not resolved the issue.

Example scenario:
1. Open properties on duplicate folder > Set "Hidden" and click Apply.
2. The Original folder will become hidden.

One more piece of information: We had our backup team mount the latest backup of the disk that was taken, and the duplicate folder does not exist. 

Has anyone ever encountered a problem like this?
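
A hedged note on cleanup: names that end in a space normally can't be renamed or removed through Explorer because the Win32 layer strips trailing spaces, but the \\?\ prefix bypasses that normalization. Since the two entries appear to reference the same data, a rename is safer to try than a delete, and a chkdsk of the volume is worth scheduling, as a duplicate directory entry can indicate index corruption (the path below is a placeholder; keep the trailing space inside the quotes):

cmd /c ren "\\?\D:\Shares\Department " "Department-dup"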

MSMQ cleanup Interval


Hi,
I would like to reduce the interval of the MSMQ cleanup from 6 to 2 hours.
I'm reading here that the default cleanup interval is 6 hours:
https://msdn.microsoft.com/en-us/library/ms704178(v=vs.85).aspx

However, I can't find that registry key. Should I just go ahead and create it as a new value?
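
If creating it is the route taken, a hedged sketch (the value name, location, and millisecond units below are assumptions to verify against the linked page; 2 hours = 7,200,000 ms):

New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\MSMQ\Parameters' `
    -Name 'MessageCleanupInterval' -PropertyType DWord -Value 7200000
Restart-Service MSMQ   # the value is read when the service starts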
Thanks!

Can't take ownership of files with takeown; getting "Access is denied"


Good afternoon all,

I have some files I can't take ownership of. I've tried everything and searched a lot of TechNet and the rest of Google.

When I use:

Takeown /f *.* /r /a /d y

I get about 4 success messages and 16 "INFO: Access is denied." messages.

I need to copy/move the files to new storage; can someone help me please?

I am a member of the Administrators group.
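
A hedged fallback when takeown alone returns "Access is denied" (run from an elevated prompt; D:\Data is a placeholder for the folder holding the files): let icacls set the owner and then grant Administrators full control so the copy can proceed.

takeown /f D:\Data /r /a /d y
icacls D:\Data /setowner "Administrators" /t /c
icacls D:\Data /grant "Administrators:(OI)(CI)F" /t /c

If the goal is only to copy the data to the new storage, robocopy's backup mode (/B, run elevated) can also copy the files without changing ownership at all.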

