Channel: File Services and Storage forum
Viewing all 13580 articles
Browse latest View live

DFS replication not working


I am trying to replicate one drive to another location.

When I check the event logs I get the error below. I have another replication setup on this server that points to the same remote replication server, and it is working just fine. I have tried to recreate the DFS replication group many times but keep getting the same error. Both drives hold hard-drive image files; one replicates fine, while the other produces the error below. Please advise what I can do to get this working, as I do not want to have to replicate one folder at a time off the root.

Server OS is Windows Server 2012 R2

The DFS Replication service failed to replicate the replicated folder at local path H:\ because the local path is not the fully qualified path name of an existing, accessible local folder.
 
Additional Information:
Replicated Folder Name: Z
Replicated Folder ID: E3086529-AA91-4414-823C-B4368876541Z
Replication Group Name: CPM Drive Replication
Replication Group ID: AEDAA9C6-13B7-41D2-B8C8-988603F3E758
Member ID: F20BACB8-3B53-4C0F-9031-6C8DDAB3A1AB
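A quick sanity check of what DFSR thinks the content path is, using the DFSR PowerShell module that ships with 2012 R2 (a sketch; the group name is taken from the event above):

```powershell
# Show the configured local path for each membership in the group
Get-DfsrMembership -GroupName "CPM Drive Replication" |
    Format-List FolderName, ContentPath, Enabled

# Verify the path the event complains about actually resolves
Test-Path 'H:\'
```

If `ContentPath` and the real, accessible folder disagree, that mismatch is what the event is reporting.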


Problem: Windows treats some files on C: as network assemblies


Hi,

I am facing a strange problem for which I am looking for a solution (a workaround is known).

A customer installs our application and complains about some anomalies in the GUI. The application is mostly .NET/C# and consists of 500 DLL files, all linked together via manifests. Very recently this customer rolled out our software on Windows Server 2016, and it turns out that Windows suddenly treats some files as if they were in an untrusted network location.

It keeps telling me "An attempt was made to load an assembly from a network location which was supported in earlier versions of .NET ... HRESULT 0x80131515".

But how can this happen? It's .NET 4.7.2, and on all other servers I have seen or have under my control I can install this application; it also passed a QA test for Windows Server 2016 without findings.

The affected files are located below C:\Program Files (x86)\, alongside all of the other files, but two of them trigger this error in the trace listener log.

I was not able to reproduce this in an untouched Windows Server 2016 environment, and I also wasn't able to reproduce it in an environment with our own, very strict corporate GPOs. I suspect these two files have something in them that makes .NET believe they are somewhere other than a local drive.

The log file contains a link to documentation at Microsoft, which basically tells me that I have to change a setting in the .exe.config file from

<!-- <loadFromRemoteSources enabled="true"/> -->

to

<loadFromRemoteSources enabled="true"/>
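For context, the element lives under `<runtime>` in the application's .exe.config (the surrounding structure below is the standard placement, not taken from the post):

```xml
<configuration>
  <runtime>
    <!-- load assemblies from locations .NET considers remote with full trust -->
    <loadFromRemoteSources enabled="true"/>
  </runtime>
</configuration>
```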

But what is the root cause that triggers this behavior?
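One thing worth ruling out (an assumption on my part, not a confirmed diagnosis): a Zone.Identifier alternate data stream ("mark of the web") on the two DLLs, which makes .NET treat a local file as coming from an Internet/intranet zone. A sketch to check and clear it; the path is a placeholder:

```powershell
# List alternate data streams; a Zone.Identifier stream marks the file as remote
Get-Item 'C:\Program Files (x86)\OurApp\Suspect.dll' -Stream *

# Remove the mark so .NET loads the assembly as local
Unblock-File 'C:\Program Files (x86)\OurApp\Suspect.dll'
```

Files copied from a browser download, an extracted ZIP, or certain installers can carry this stream while files alongside them do not, which would match two files out of 500 misbehaving.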


IT architect - terminal servers, virtualization, SQL servers, file servers, WAN networks, and closely related to software development (8+ years of experience in VB, C++ and scripting languages), MCP for SQL Server and CCAA for XenApp 6.5


Windows 2012 DFS server has one drive's DFS database in state 2 (Initial Sync) for more than a week and a half


We have a Windows Server 2012 R2 server, and it stopped replicating data from the primary server to the secondary server.

Initially, when I checked, the database status was 4 (Normal), but it still was not replicating. On restarting the DFS Replication service, the database went into state 2 (Initial Sync). It has been sitting in this state for more than 8-10 days.

The drive is about 5 TB in size.

How can I check what DFSR is doing, why the initial sync is taking so long, and how can I fix it?
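The replication state and backlog can be inspected with the DFSR WMI namespace and dfsrdiag, both present on 2012 R2 (group/folder/member names below are placeholders):

```powershell
# State: 0=Uninitialized 1=Initialized 2=Initial Sync 3=Auto Recovery 4=Normal 5=In Error
Get-WmiObject -Namespace root\MicrosoftDFS -Class DfsrReplicatedFolderInfo |
    Select-Object ReplicatedFolderName, State

# Backlog between two members of the replication group
dfsrdiag backlog /rgname:"RG" /rfname:"RF" /smem:PRIMARY /rmem:SECONDARY
```

A shrinking backlog means initial sync is progressing, just slowly; a static backlog points at a stalled service or staging problem.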

net use to a shared folder with only read permissions gives an access denied error


Hello everybody,

A customer uses commands like "net use \\fileserver\share\folder1\folder2" to map drives to folders where users frequently need to work. This worked fine on Windows Server 2008 R2, but on 2019 we find that net use fails when the user has only read permissions on the target folder. net use asks for credentials, and even if we enter the user name and password again, it still ends with an access denied error.
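For reference, this is the shape of the command in question; mapping a drive letter to a subfolder below the share root is supported, and the names here are placeholders:

```powershell
# Map X: to a folder two levels below the share root
net use X: \\fileserver\share\folder1\folder2

# Same mapping with explicit credentials (* prompts for the password)
net use X: \\fileserver\share\folder1\folder2 /user:DOMAIN\someuser *
```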

Has this changed by design? If so, why? Or is there something we can do to re-enable net use to a read-only folder?


Best Regards, Stefan Falk

SMB 3.1.1 connection slow for file shares on Server 2019


Hi,

I have a problem with shares on a Windows Server 2019 file server.

When I open a shared file and make changes, saving takes too long and sometimes the application even shows "Not Responding".

I managed to find out that the problem occurs when I make an SMB connection to the file server with dialect 3.1.1.

When I disable SMB2 on the file server, so that the connection falls back to SMB1 (dialect 1.5), I have no issues.

But the problem is that Windows 10 and newer server versions do not have SMB1 enabled by default.
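Rather than falling back to SMB1 (which is deprecated for good reasons), it is worth checking what the 3.1.1 connection actually negotiates; signing and encryption requirements are common causes of slow saves. A read-only sketch:

```powershell
# On a client: dialect of the current connections to the server
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect

# On the file server: signing/encryption requirements and SMB1 state
Get-SmbServerConfiguration |
    Select-Object RequireSecuritySignature, EncryptData, EnableSMB1Protocol
```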

BR,

Aleksandar

Work Folders can't encrypt files on Windows 10 1903


Hi,

Problems with WF encryption have been discussed here many times before, but I think this one is related to Windows 10 1903, because Windows 10 1809 works fine.

Error message is still the same:

Sync failed. Work Folders path: C:\Users\MYUSER\Work Folders; Error: (0x80c80314) The Work Folders path has to be encrypted. You might have an application holding the Work Folders folder open or the folder might be compressed. Close File Explorer and any open files or apps, and check the folder's Advanced properties to ensure that the folder is not compressed. EventID 2100

I have tried Windows 10 1903 both domain-joined and not domain-joined, and followed the tips at https://techcommunity.microsoft.com/t5/Storage-at-Microsoft/Troubleshooting-Work-Folders-on-Windows-client/ba-p/425627, but WF encryption is still failing.

Does anyone have WF working on Windows 10 1903 with encryption enabled?

Best regards,

Jiri

Mapped network drives disconnect after some time


Hello everybody,

A customer is running Windows Server 2019 terminal and file servers. The terminal servers map network drives to a file share on the file server. We have problems with two applications because the network drives end up in a disconnected state after some time:

- One has an Access database residing on the file server, accessed by several users. Access crashes on disconnect because its file handles to the database file no longer work.

- A .NET desktop application whose .exe and .dll files reside on the file server. Users start the .exe from the mapped drive. When the mapped drive goes into the disconnected state and the application needs to load something from one of its DLLs, it crashes with an unhandled exception. The event log says the .NET runtime could not continue because the network connection was disconnected.

We have followed https://support.microsoft.com/en-us/help/297684/mapped-drive-connection-to-network-share-may-be-lost and set the LanManServer parameters to disconnect after 5000 minutes instead of 15 (on the file server), and the LanManWorkstation parameters to disconnect after 65535 seconds instead of 600 (on the terminal servers). But this did _not_ help.
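For reference, the two values from that KB translate to the following registry changes (run elevated; a value of 0xffffffff for autodisconnect disables the server-side idle disconnect entirely):

```powershell
# File server: raise the idle autodisconnect timeout (minutes)
reg add HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters /v autodisconnect /t REG_DWORD /d 5000 /f

# Terminal servers: keep client-side connections alive (seconds)
reg add HKLM\SYSTEM\CurrentControlSet\Services\lanmanworkstation\parameters /v KeepConn /t REG_DWORD /d 65535 /f
```

Both changes require a restart of the respective service (or a reboot) to take effect.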

I guess we do not need to discuss whether Access databases, file servers, or running .exes from file shares are fantastic ;-) The question is simple: what can we do to keep the network drives connected, please?


Best Regards, Stefan Falk

Tricky DFS Upgrade


Hi All,

I have inherited two 2008 DFS VM servers (SVR1 and SVR2) and need to upgrade them to 2016. End users access some shares through the DFS namespace and some by \\server\sharename; not all shares are accessible through DFS. There is no replication between the servers: each server participates in DFS to host its own shares, but no failover or replication is in place. I built two new 2016 servers called SVR1n and SVR2n.

Management has set the following requirements:

1. Replace the 2008 servers with new 2016 servers

2. New 2016 servers should have the same name and IP as the old servers

3. New 2016 servers should continue to host DFS and Shares identically to legacy servers to minimize user interruption

I was thinking the following process might make sense.

  1. Export DFS Namespace information to XML from SVR1
  2. Export shares and permissions from the SVR1 registry to a .reg file
  3. Remove SVR1 from DFS
  4. Rename SVR1 to SVR1x and change IP to new IP
  5. Rename SVR1n to SVR1 and change IP to match old server
  6. Import Share and permissions from reg file to SVR1
  7. Move virtual disks from SVR1x to SVR1
  8. Add SVR1 into DFS
  9. Import DFS namespace information from XML

Then repeat the same process with SVR2.
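Steps 1, 2 and 9 above can be sketched with the standard tools; the namespace path and file names are placeholders:

```powershell
# Step 1: export the namespace configuration
dfsutil root export \\contoso.com\dfsroot C:\backup\dfsroot.xml

# Step 2: export share definitions and their permissions
reg export HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares C:\backup\shares.reg

# Step 9: import the namespace on the renamed server
dfsutil root import set C:\backup\dfsroot.xml \\contoso.com\dfsroot
```

Note the Shares registry key holds share-level permissions only; NTFS permissions travel with the virtual disks you move in step 7.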

Does this make sense, or is there a better way to do this? Are there any gotchas I should be thinking about?


Storage spaces confusion


Hello.

I'm testing some Storage Spaces (not S2D) scenarios and I've encountered some strange behavior that I hope someone can explain to me.

I have a pool of 6x 10 GB drives - displayed as 56.9 GB in size with 1.5 GB allocated and no virtual disks created (which amounts to 6x 256 MB of metadata).

FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly    Size AllocatedSize
------------ ----------------- ------------ ------------ ----------    ---- -------------
T            OK                Healthy      False        False      56.9 GB        1.5 GB

I created 3 storage tiers (templates) for 2-way mirrored spaces with 1-3 columns.

FriendlyName  ResiliencySettingName NumberOfColumns NumberOfDataCopies PhysicalDiskRedundancy
------------  --------------------- --------------- ------------------ ----------------------
Mirror_SSD_1C Mirror                              1                  2                     1
Mirror_SSD_2C Mirror                              2                  2                     1
Mirror_SSD_3C Mirror                              3                  2                     1

1) I create a 2-way mirrored 5GB volume using this command:

New-Volume -StoragePoolFriendlyName t -FriendlyName test -FileSystem ReFS -DriveLetter D -ResiliencySettingName Mirror -StorageTierFriendlyNames Mirror_SSD_1C -StorageTierSizes 5GB -NumberOfColumns 1

Here comes the confusion:

- why does Get-VirtualDisk display the footprint on pool (and hence storage efficiency) as 12.5 GB (40%) instead of the expected 10 GB (50%) that the corresponding storage tier instance displays?

2) In the next example, I deleted the virtual disk and created a new one, this time using 3 columns:

PS C:\Users\Administrator> New-Volume -StoragePoolFriendlyName t -FriendlyName test -FileSystem ReFS -DriveLetter D -ResiliencySettingName Mirror -StorageTierFriendlyNames Mirror_SSD_3C -StorageTierSizes 5GB -NumberOfColumns 3

Here's the output:

- again, different footprints are reported by Get-VirtualDisk and Get-StorageTier

- this time the size is also 5.25 GB instead of the 5 GB I specified when creating the volume - how/why?
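On the 5.25 GB question: a plausible explanation (an assumption based on Storage Spaces' default 256 MB slab allocation, not something the output above confirms) is that allocations round up to a whole number of column-width units:

```powershell
$slabMB      = 256            # default Storage Spaces allocation slab size
$columns     = 3
$requestedMB = 5 * 1024

# Each allocation stripes across all columns, so sizes round up
# to multiples of (columns x 256 MB) per data copy
$unitMB = $columns * $slabMB                        # 768 MB
$slabs  = [math]::Ceiling($requestedMB / $unitMB)   # 7 units
"{0} GB" -f ($slabs * $unitMB / 1024)               # 5.25 GB
```

With 1 column the unit is just 256 MB, and 5 GB divides evenly into it, which would explain why example 1 showed no rounding.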

3) Also, from the documentation, a 2-way mirrored space guarantees tolerance of (at minimum) 1 disk failure. Is there a way to raise this to multiple disks (without resorting to a 3-way mirror)? Or is there no way to guarantee it, since Storage Spaces rotates the disks to which it writes the stripes?

To rephrase the last question: 1 (or 2) disk fault tolerance with a 2-way (3-way) mirror makes sense for a low disk count (say 4-8), but when I have, for example, 24 or more drives, am I still only protected from 1 drive failure? I'm comparing this to traditional RAID 1, where in the best scenario half of the drives can fail without data loss.

File Servers running very slow


Hi,

We have two file servers, both part of a cluster.

Both servers have 10-core CPUs with 12 GB RAM, running Server 2012 R2.

We keep getting calls from users saying they can't access any of the shares on this file cluster, but when we fail over from server 1 to server 2 it works fine - until the next day, when we get the same issue again.

  • Is there any way I can check what is causing this issue?
  • Why is this happening on the file servers?
  • Are there any logs I can view to get information on what is causing the issue?

We have around 400 users who might be connected to these two servers.

The active node is always at around 80-90% CPU; memory is at around 60-70%.
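When the active node is pegged, a few read-only commands run on that node can show where the load sits (a starting-point sketch, not a full diagnosis):

```powershell
# How many SMB sessions and open file handles the node is serving
Get-SmbSession  | Measure-Object
Get-SmbOpenFile | Measure-Object

# The five busiest processes by accumulated CPU time
Get-Process | Sort-Object CPU -Descending |
    Select-Object -First 5 Name, CPU, WS
```

If the CPU is consumed by the System process rather than a user-mode service, that usually points at kernel-side work (SMB, filter drivers, antivirus) rather than a misbehaving application.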


2012R2 & 2016 Host Cluster


I am new to Cluster technology.

Have: a single 2012R2 Hyper-V Host connected to an iSCSI SAN

Would like to connect a second host server for clustering and to facilitate the upgrade of the existing host from 2012 R2 to 2016.

Looking for confirmation on my research to-date:

1. I can cluster a 2016 Hyper-V host with a 2012R2 host.

2. Once the cluster is established I can move the VMs from the 2012R2 server to the 2016 host and then install 2016 on the old machine.

3. To accomplish #2 above I would first remove the old 2012R2 server from the cluster and then reconnect after it is upgraded to 2016.

Thanks >> Joe

Dropped Connections on File Server


It appears that we are having frequent disconnects from our Windows server in its file server role. The client uses a software program that allows multiple users to access the same file(s) on the server (e.g. collaborating on an Excel document), as well as running QuickBooks.

Both apps become unresponsive. Is this due to some throttling, security feature, timeout, etc.? Any tweaks I can consider?

We are pretty certain there is a reliable connection to the server. We are also using this server as an Active Directory domain controller and Exchange server, running a dual-Xeon setup with plenty of RAM.

Any ideas or tweaks to try would be appreciated.

iSCSI Space Reclamation Issue on a Unity Storage LUN with a ReFS Volume


There are two Unity Storage LUNs connected via iSCSI to a server running Windows Server 2019 Standard.

When the customer deletes files or folders, on the OS side we can see the free space on the volume immediately; however, on the Unity Storage end we cannot see the space being reclaimed.

The Unity Storage team claims they do not see any iSCSI UNMAP requests sent from the OS.


We tried running defrag and TRIM commands on the volumes; the commands complete successfully, but on the storage end there is no change in reclaimed space.

We captured a NetMon trace but could not see iSCSI UNMAP being sent.


So this seems to confirm that it is a limitation of the ReFS file system, as described in the MS article below:

https://docs.microsoft.com/en-us/windows-server/storage/refs/refs-overview

"For SANs, if features such as thin provisioning, TRIM/UNMAP, or Offloaded Data Transfer (ODX) are required, NTFS must be used."
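As a cross-check, the following shows whether the OS is configured to issue delete notifications (TRIM/UNMAP) at all - a sketch; on 2019 the output reports NTFS and ReFS separately:

```powershell
# 0 = delete notifications (TRIM/UNMAP) are issued; 1 = disabled
fsutil behavior query DisableDeleteNotify
```

Even with notifications enabled, if the file system itself never generates UNMAP for the LUN (as the quoted ReFS limitation describes), nothing reaches the array.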


Can someone please confirm this?


FSRM shows file screen alert coming from NT AUTHORITY\System


Hi,

We have a DFS file server in our environment configured with file screens using File Server Resource Manager (FSRM). We have set up email notifications to alert us when a user violates a file screen rule that disallows saving certain file types (e.g. .exe, .mp3, .avi) on the root drives (C:\, D:\, E:\, etc.). The problem is that we receive email alerts from FSRM saying "user NT AUTHORITY\System attempted to save <filename> to D:\ on the <file server name>. This file is in the "Backup Files" file group, which is not permitted on the server."

Our expectation is that FSRM should say that user <user id/username> attempted to save the file on the D:\ drive, not NT AUTHORITY\System. Does anyone have experience with this? If so, can you share the cause of this problem and ways to resolve it? We have played around with modifying the email template, but it still doesn't work.

Server service doesn't start - Error: 127 The specified procedure could not be found


Hi Guys

We've migrated a Windows 2008 R2 SP1 VM from on-premises to Azure using the Double-Take Move tool. The server is a file server and was working perfectly on-premises (VMware environment). However, after moving to Azure, the "Server" service will not start on the VM.

This is the error I'm getting when trying to start it:

This is causing the file shares to be inaccessible. We're not able to access the server via \\ServerFQDN or \\ServerIPAddress.
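Error 127 ("The specified procedure could not be found") generally indicates a DLL or driver the service depends on is missing or mismatched after the migration. A read-only starting point (service names are the standard ones for the Server service; the driver dependency may differ on your build):

```powershell
# Show the Server service's configuration and its dependencies
sc.exe qc lanmanserver

# Check the state of the kernel driver it depends on (Srv on 2008 R2)
sc.exe query srv
```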

Please help.

Thanks

Taranjeet Singh





Advanced folder permissions on an NTFS file system


Dear all,
I'm looking for a way to manage folders on a shared NTFS file system while avoiding data loss from users accidentally deleting the main folders.

I have this kind of structure:

\SHARE ROOT (read only)

--FOLDER1 (read only)
---SUB1 (read/write)
---SUB2 (read/write)

--FOLDER2 (read only)
---SUB1 (read/write)
---SUB2 (read/write)


Is there a way to allow users to create files and folders in the subdirectories (SUB1, SUB2) without giving them permission to move/delete the parent folders (FOLDER1, FOLDER2)?
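The structure above maps to the classic split between the parent (read-only, no Delete) and the subfolders (Modify, inherited). A sketch with icacls; the account name and paths are placeholders:

```powershell
# Parent: read & execute only, applied to this folder only (no inheritance flags)
icacls 'D:\SHARE\FOLDER1' /grant 'DOMAIN\Users:RX'

# Subfolder: Modify, inherited by all files and folders below it
icacls 'D:\SHARE\FOLDER1\SUB1' /grant 'DOMAIN\Users:(OI)(CI)M'
```

Note that Modify includes Delete on SUB1 itself; to protect the SUB folders as well, grant Modify only to their contents using the inherit-only flag, e.g. `(OI)(CI)(IO)M`.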

Thanks for your support.

failing replication due to 'staging folder for replicated folder has exceeded its configured size'


2012 R2 replicating to 2012 R2. I clearly missed setting the staging quotas on this server, and now it's so far behind that I'm wondering if it would be smarter to just start the replication over. The destination is full, and the source has gobs of data in the DfsrPrivate directory waiting to replicate. I've increased the staging quotas, but it doesn't seem to be functional at this point; I've not witnessed any changes and it's been 5 days. Please advise.
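For sizing, Microsoft's guideline is that the staging quota should be at least the combined size of the 32 largest files in the replicated folder. A sketch to compute that and raise the quota with the DFSR module (the path and group/folder names are placeholders):

```powershell
# Combined size of the 32 largest files, in GB
$top32 = Get-ChildItem 'D:\ReplicatedFolder' -Recurse -File |
         Sort-Object Length -Descending | Select-Object -First 32
[math]::Round(($top32 | Measure-Object Length -Sum).Sum / 1GB, 2)

# Raise the staging quota to match (value in MB)
Set-DfsrMembership -GroupName 'RG' -FolderName 'RF' `
    -ComputerName $env:COMPUTERNAME -StagingPathQuotaInMB 32768
```

After raising the quota, watch for DFSR events 4202/4204 (staging high-watermark reached/cleaned) to confirm staging is no longer the bottleneck.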

eric

WSE 2016 VM - Windows Search problems

$
0
0

Hi all - 

Brand-new Windows Server Essentials 2016 VM on Server 2019 Hyper-V, on a brand-new Dell T340 with all-SSD storage - very nice.

Windows Search keeps crapping out. It is indexing about 600,000 items on two volumes; it works for a minute, and then searches return an unclickable list of files with white generic icons and no data. Restarting the service works for a while. I've re-indexed a bunch of times.

Any thoughts or ideas? Thanks!

File Server - how to speed up


After migrating from Novell NetWare 6.5 to Windows Server 2019, the time to open/save files has increased dramatically. The file server is installed with default settings; the disk is mapped via DFS by Group Policy. The Windows Server file server/DFS, on the same network where the Novell NetWare server ran, works many times slower.

Below are test times for working with Word/Excel files.

In Novell NetWare 6.5:
1) Open first file: < 1 second
2) Open second file: < 1 second
3) Save file after working with it for 30 seconds: < 1 second
4) Save file after working with it for 15 minutes: < 1 second
5) Open file after a 15-minute delay: < 1 second

On the same computer, with a share from Windows Server 2019 - DFS:
1) Open first file: 20 seconds
2) Open second file: 3 seconds
3) Save file after working with it for 30 seconds: 2 seconds
4) Save file after working with it for 15 minutes: 20 seconds
5) Open file after a 15-minute delay: 20 seconds


How can I speed up the file server on Windows Server Standard 2019?
How can I increase the connection timeout to the file server to 1 hour?

Configuration:
2x Windows Server Standard 2019 file servers with DFS
100 computers in the domain
100 users in the domain

Thank you

failover cluster environment with dedicated disks instead of shared disks


Hey guys,

I'm in a bit of a pickle. For various reasons we are scripting our own PowerShell module with which we can do all LUN management on our servers, instead of using SnapDrive for Windows (no longer supported on Windows Server 2019); we use the Data ONTAP PowerShell Toolkit for this. The protocol used is iSCSI.

I have run into a problem when this module is used in a clustered environment (for use with availability groups). When we use the scripted module to create LUNs at the storage level and then initialize them on the server, they get seen as shared for some reason. The thing is, in this scenario we don't need them to be shared; it's fine for these servers to have their own dedicated storage.

Because they are marked as shared, we run into the problem that disks created after the cluster is formed won't automatically be brought online after a reboot, because of the SAN policy:

https://support.microsoft.com/en-us/help/2849097/how-to-set-the-partmgr-attributes-registry-value-using-powershell

So there are a number of things I can do:

- I can apparently bypass the problem by adding all the storage (LUNs) first and then forming the cluster (disks added beforehand don't have the issue).

- Maybe change the SAN policy on the servers in question (though for clustered environments the guidance is to leave it on OfflineShared).

- Or maybe tweak the disks somehow.

Does anyone know how I can check whether a disk is seen as shared (preferably using PowerShell)? And maybe how I can change that? What would your recommendations be?
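A read-only way to inspect both the per-disk cluster state and the SAN policy from PowerShell (built-in Storage module cmdlets on 2016/2019):

```powershell
# Per-disk: bus type, whether the cluster has claimed it, and its status
Get-Disk | Select-Object Number, FriendlyName, BusType, IsClustered, OperationalStatus

# The machine-wide SAN policy that decides whether new disks come online
Get-StorageSetting | Select-Object NewDiskPolicy
```

`IsClustered` reflects whether the disk has been claimed as a cluster resource; for the offline-after-reboot symptom, `NewDiskPolicy` (the SAN policy, e.g. OfflineShared vs. OnlineAll) is the setting that matters.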

thanks in advance guys!



