Channel: File Services and Storage forum
Viewing all 13580 articles

Large number of open file handles on file server (Server 2012 R2) after user's RDS session has long been terminated through logoff


Hi,

We have an RDS farm with roaming profiles (not profile disks) consisting of 6 terminal servers ("srvts001" to "srvts006", 2012 R2), 2 domain controllers ("srvdc01", "srvdc02", 2012 R2), and 1 SQL/file server ("srvsql01", 2012 R2).

Roaming profiles are stored on srvsql01 in share RDS. Oplocks are disabled, directory caching is disabled.

Everything was running without problems for 1 or 2 years. Recently (about 2 weeks ago) users began complaining about being logged on with a temporary profile. Investigation showed a lot of open file handles from previous (no longer existing) sessions on the file server.

E.g. user1 logged on to RDS one day and logged off in the evening (not just disconnected, really logged off), and when he logs in the next day on the RDS farm he gets a temporary profile because the profile service is unable to access the roaming profile, as it is (per the event log) "in use by another process".

Looking at the fileserver's open files we can see that a lot of files of that user profile (and of other users, too!) are shown as open - all with read option and no apparent locks.

When user1 logs off, these locks stay! Manually closing the open files on the file server allows the user to log on to the RDS servers normally. Subsequent logoffs may or may not create those stale open file handles.

We have no clue as to when this enormous number of open file handles (we are talking about hundreds of files per user) appears. It seems to be random and does not hit every user.
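To make the symptom concrete, here is a toy sketch (in Python, for illustration only) of the check we do by hand: given an export of the file server's open files and the set of currently active RDS sessions, list the handles that belong to users who have already logged off. The data shapes and paths are assumptions, not an SMB API.

```python
# Toy model of the manual check: flag open file handles whose owning
# user no longer has an active session.
def find_stale_handles(open_files, active_users):
    """open_files: iterable of (username, path) pairs from the file
    server's open-files view; active_users: set of usernames that
    currently have a live RDS session."""
    return [(user, path) for user, path in open_files
            if user not in active_users]

# Hypothetical export: user1 has logged off, user2 is still active.
open_files = [
    ("user1", r"\\srvsql01\RDS\user1\NTUSER.DAT"),
    ("user2", r"\\srvsql01\RDS\user2\AppData\settings.ini"),
]
active_users = {"user2"}

stale = find_stale_handles(open_files, active_users)
# stale now lists only user1's leftover handle, which is exactly the
# kind of entry we end up closing manually on the file server.
```

In practice the "close" step is what the post describes: manually closing the listed files on the file server so the next logon gets the roaming profile instead of a temporary one.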

Has anyone ever seen a similar problem? Or does anyone at least have an idea on how to prevent logged-off users from still having open files on the file server?

Any answer pointing in the direction of a possible solution or source of this problem is greatly appreciated!


DFS issues


Hi, 

We have 2 Windows Server 2008 R2 file servers with DFS enabled. We have had issues lately with replication, and we managed to fix them last week; however, now we seem to have another problem. The users access the namespace \\domain.local\fs\ and have no issues accessing and saving files, but I cannot find the data they're saving on FS1 or FS2.

Any ideas of what this issue might be and how to troubleshoot it? 

Thanks in advance. 

Server 2016 - Single host Multi-resiliency on storage tiers


I'm doing some testing with Server 2016 and Storage Spaces. I would like to use different resiliency across storage tiers but do not see that as an option when creating a new volume. I have 2 SSDs and 6 HDDs in a pool. When I create a virtual disk with tiers, I can only select between simple and mirror. I would like to use mirror for the SSD tier and parity for the HDD tier.

Thanks for any tips.

Modify permission for all user drives


Hi, 

We have a file share repository for user drives which are created using the Home Folder field in AD under the Profile tab in the user object. 

By design, the Documents and Downloads folders in the user's home drive cannot be read by any other user. For this particular client, a number of reports are required on data which requires access to these 2 repositories. 

Is there a way of taking ownership of, and gaining full access to, all the home drive folders inside each home drive repository without ruining the user permissions? I have tried the following takeown command, but it doesn't seem to do the job on its own: takeown /f "H:\Users\name.surname" /a /r /d Y
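One thing worth noting: takeown only changes the owner; it does not grant read access by itself, so a follow-up icacls /grant is usually needed before reports can read the folders. A minimal sketch (in Python, only building the two command lines; the group name and paths are placeholders, not from the original post):

```python
def build_access_commands(home_root, folder, group="Administrators"):
    """Build takeown + icacls command lines for one home folder.
    takeown /a assigns ownership to the Administrators group and /r
    recurses; icacls /grant then appends a full-control ACE for the
    group recursively (/T) without resetting the user's existing ACEs."""
    path = f"{home_root}\\{folder}"
    return [
        f'takeown /f "{path}" /a /r /d Y',
        f'icacls "{path}" /grant {group}:(OI)(CI)F /T /C',
    ]

cmds = build_access_commands(r"H:\Users", "name.surname")
# cmds[0] is the takeown line from the post; cmds[1] is the follow-up
# grant that actually makes the Documents/Downloads data readable.
```

The loop over all home folders would just call this once per directory name under the repository root; test carefully on one folder first, since ACL changes are hard to undo in bulk.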

Thanks in advance.

Workstation service won't start - error 2


I'm running Windows Server 2008 R2 Enterprise, and I got hit by a nasty virus that modified a bunch of my services and their dependencies. Since then I'm having a problem getting the Workstation service to start. When I try to start it, it reports:

Error 2: The system cannot find the file specified.

I confirmed that c:\windows\system32\svchost.exe and wkssvc.dll exist and have the proper dates (they match another 2008 install I have).

For DependOnService I have:

Browser, MRxSmb10, MRxSmb20, NSI

Any suggestions?

 

Offline Files


Hello,

Can anyone tell me how the CSC cache is populated? What happens when it is full? Is there a round-robin strategy for populating the cache (that is, when the cache is full, the oldest files are dropped and replaced by newer ones)?

Is it a good strategy to purge the cache at logoff time?
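Whether CSC actually behaves this way is exactly what the question asks, but the policy it describes (when the cache is full, drop the oldest entry to make room for the newest) is classic least-recently-used eviction, which can be sketched as:

```python
from collections import OrderedDict

class LruFileCache:
    """Toy LRU cache: when capacity is exceeded, the least recently
    used entry is evicted to make room for the newest one."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # path -> data, oldest first

    def access(self, path, data):
        if path in self.entries:
            self.entries.move_to_end(path)  # mark as recently used
        self.entries[path] = data
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # drop the oldest entry

cache = LruFileCache(capacity=2)
cache.access("a.doc", b"...")
cache.access("b.doc", b"...")
cache.access("a.doc", b"...")  # a.doc becomes most recently used
cache.access("c.doc", b"...")  # cache full: evicts b.doc, the LRU entry
```

This is only a conceptual model of the strategy being asked about, not a description of how the Offline Files service is actually implemented.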


aldo.domini@libero.it

Monitor/Modify Kernel files


Hello all,

Is there a process for getting our own application (agent) signed by MS so it can monitor/modify file activity (like an AV product does)? If so, what is the process and how much does it cost?

Thanks in advance!

ADFS/WorkFolders keeps requiring all client machines to enter password


I configured ADFS with a WAP reverse proxy and a back-end Work Folders server. I configured the system with proper third-party certificates, and everything connects fine both internally and externally. However, after some amount of time, usually less than a couple of days, client machines will ask for the current password. If I open the Work Folders control panel manually on domain-joined machines and click Credential Manager, I am prompted to enter the password; the user name is already saved. Entering the password allows the machine to sync. For workplace-joined machines, I am prompted with the ADFS login page, where I have to enter the username and password; that sync works as well once the password is entered.

The problem is that neither the domain-joined internal machines nor the workplace-joined external machines will keep the password for more than a day. I even had one machine prompt for the password within about 20 minutes of re-entering the credentials.

All machines, including the ADFS server, WAP server, Work Folders server, and client machines, are fully updated. Client machines are Windows 7 Pro and Windows 10. I need the Work Folders configuration on the client machines to never prompt for a password; otherwise this utility is pointless.



Can't take ownership of files with takeown; getting Access Denied


Good afternoon all,

I have some files I can't take ownership of. I've tried everything and searched a lot of TechNet and the rest of Google.

When I use:

Takeown /f *.* /r /a /d y

I get about 4 success messages and 16 "INFO: Access is denied" messages.

I need to copy/move the files to new storage; can someone help me, please?

I am a member of the Administrators group.

The DFS namespaces service Failed


Good afternoon,

I have a few users who get this error randomly; it can be multiple times a day, or every other day:

"The DFS Namespaces service failed to initialise the shared folder that hosts the namespace root. Shared Folder: DFS"

These are drives mapped by Group Policy.

Restarting does bring them back.

I had a constant ping test running from one of the machines to the DFS server for 2 days, against both the IP and the hostname. It showed no disconnection and no packet loss, yet the problem persisted. After it happened, I flushed the DNS cache and did a DNS lookup on the DFS server, which cached the new DNS records, but this still didn't resolve the issue; the drive was still inaccessible.

The actual error the user gets is:

Windows cannot access -

Check the spelling of the name. Otherwise, there might be a problem with your network. To try to identify and resolve network problems, click Diagnose.

Any ideas?

The users' machines run Windows 10.

The DFS server is Windows Server 2012 R2.

Slow mapped drive windows server 2012R2 over vpn


Hello,

We have a Windows Server 2012 R2 machine, v6.3 build 9600 (fully patched as of 23/07/2018), with a mapped network drive to a remote location over VPN. About 5 desktops have this mapping over VPN to the remote server. All the desktops are in the same workgroup; the remote server is in a different domain.

Everything worked fine until updating to the latest build on 23/07/2018.
Since then, the mapped network drive is extremely slow on the client PCs (Windows 10 Pro, fully patched).

When I copy a file over an RDP connection to a remote computer, I get around 10 Mbit/s.

When I copy the same file from the mapped network drive to the same computer, I get a speed of 500 kbit/s, so there is a huge speed difference when accessing the mapped network drive.

How come this is so slow? It was way faster before updating the remote server.

best regards

wouter

DFS Writer VSS restore operation fails at pre-restore


I am attempting to perform a VSS backup and restore of the DFS writer on a Server 2012 R2 box.

When performing a VSS restore on the same VM where the backup was created, it restores successfully. The problem comes in when restoring to another VM. The only difference between the two VMs is the Active Directory domain name. The operation fails at pre-restore, before even attempting to replace any files.

Taking a look at the writer metadata, I can see there is a difference in the componentName attribute, which appears to be blocking the restore. If I modify the component name to match, the restore works as expected.

<WRITER_METADATA xmlns="x-schema:#VssWriterMetadataInfo" version="1.2" backupSchema="323">
  <IDENTIFICATION writerId="2707761b-2324-473d-88eb-eb007a359533" instanceId="49c17128-495e-4db0-b73b-ea7fac30092b" friendlyName="DFS Replication service writer" usage="BOOTABLE_SYSTEM_STATE" dataSource="OTHER" majorVersion="0" minorVersion="0"/>
  <RESTORE_METHOD method="RESTORE_IF_CAN_BE_REPLACED" writerRestore="always" rebootRequired="no"/>
  <BACKUP_LOCATIONS>
    <FILE_GROUP logicalPath="SYSVOL" componentName="FF6DE134-771E-49FF-9FF6-044ACB84A028-4BFA2380-DDB4-4B6D-8B42-8383DCC527D9" caption="SYSVOL Share" restoreMetadata="no" notifyOnBackupComplete="yes" selectable="yes" selectableForRestore="yes" componentFlags="0">
      <FILE_LIST path="C:\Windows\SYSVOL" filespec="*" recursive="yes" filespecBackupType="3855"/>
    </FILE_GROUP>
    <EXCLUDE_FILES path="C:\Windows\SYSVOL\domain\DfsrPrivate" filespec="*" recursive="yes"/>
    <EXCLUDE_FILES path="C:\Windows\SYSVOL\staging areas\steve.2012r2ad.com" filespec="*" recursive="yes"/>
    <EXCLUDE_FILES path="C:\Windows\SYSVOL\domain\DfsrPrivate\ConflictAndDeleted" filespec="*" recursive="yes"/>
  </BACKUP_LOCATIONS>
</WRITER_METADATA>

What would be the best approach to prepare the writer for a restore operation where there is a difference in the component name?


Windows Error Reporting - Event ID1001


Hello All,

On one of the servers I am getting "Windows Error Reporting" events (mostly 4 alerts per hour). Can anyone confirm whether these alerts are safe to ignore, or is there any way to suppress/resolve them?

Please find the detailed information about the event below.

Fault bucket , type 0
Event Name: WindowsUpdateFailure3
Response: Not available
Cab Id: 0

Problem signature:
P1: 7.9.9600.18970
P2: 80072ef3
P3: 00000000-0000-0000-0000-000000000000
P4: Scan
P5: 0
P6: 1
P7: 0
P8: AutomaticUpdates
P9: 
P10: 0

Attached files:
C:\Windows\WindowsUpdate.log
C:\Windows\SoftwareDistribution\ReportingEvents.log

These files may be available here:

Analysis symbol: 
Rechecking for solution: 0



vicky


File Server Resource Manager PowerShell module missing in Windows 8 RSAT


It appears that the File Server Resource Manager PowerShell module does not get installed with the final version of RSAT for Windows 8 (not sure whether this was the case with the preview versions or not). Everything else appears to be there as far as I can tell; dirquota.exe and the FSRM MMC snap-ins get installed. I tried toggling the feature in "Windows Features", but it did not help.

Is anyone else seeing this? Is this by design or a mistake in the packaging? There is no FSRM PowerShell module in system32\WindowsPowerShell\v1.0\Modules.

Thanks,

Doug

2012 R2 Deduplication No Longer Optimizing Files


We have a 2012 R2 server with a 16TB iSCSI drive mapped that has deduplication enabled. We are using it for long term archives of backups of Hyper-V virtual machines. Veeam is the application doing the backups.

It was working normally for about 2 months, but for the past 3 weeks it hasn't optimized any new files; it has been stuck at 233 files optimized. During troubleshooting I saw there were about a dozen ddpcli.exe processes running. I couldn't end them, so I stopped the deduplication service; it was stuck at stopping, but I let it run overnight, and when I checked in the morning it was up to 238 of 450 files optimized. That was two days ago, and no more files have been optimized since. I tried Stop-DedupJob to stop all jobs, then ran Start-DedupJob -Type Optimization -Full and let it run overnight, but still no more progress. I also ran garbage collection and scrubbing; that freed up some space, but no more files were optimized.

Any ideas on what else I can do?


Volume Deduplication


I have a virtual file server in my environment that has a drive filling up.  My usual process is to run WinDirStat to see what can be cleaned up before provisioning more storage. 

To my dismay, I found out that data dedupe is turned on for this volume! I can't actually see which users (this is a home directory share) are consuming the most data. Is there any way to tell when this was turned on and by whom? Is this ever auto-enabled?

When I look in the server manager I see that there are "Deduplication Savings"; however, file explorer shows the disk at 80% capacity with "chunk store" taking up the majority of the actual space within a hidden System Volume Information folder.

I cloned the file server. Next I disabled dedupe on the drive within Server Manager. Then I ran the PowerShell command:

Start-DedupJob -Volume "E:" -Type Unoptimization

This command causes the server to blue screen.

Any suggestions?


JLC

Timeout whilst running Get-StorageSubSystem Cluster* | Get-StorageHealthReport


When we run the Get-StorageSubSystem Cluster* | Get-StorageHealthReport command, I get the following error on one of our clusters.

Invoke-CimMethod : Timeout
Activity ID: {6c63d452-e4e6-4270-a8ca-49514d62e5c7}
At C:\Windows\system32\WindowsPowerShell\v1.0\Modules\Storage\StorageScripts.psm1:3223 char:13
+             Invoke-CimMethod -CimSession $session -InputObject $sh -M ...
+             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OperationTimeout: (StorageWMI:) [Invoke-CimMethod], CimException
    + FullyQualifiedErrorId : StorageWMI 3,Microsoft.Management.Infrastructure.CimCmdlets.InvokeCimMethodCommand

This command runs fine on our other cluster, but for some reason we get the above error on this one.

Cheers,

Liam

Windows 2016 workfolder and file locks "adxloader.log"


Hello
a brief request for best practices.
We use Windows Server 2016 Std with Work Folders; the clients are Windows 7 or Windows 10 Pro with Work Folders and Outlook 2010 or 2016.
Outlook always writes adxloader.log into the directory structure that is synced by Work Folders. As a result, there are constant problems with failed synchronization.

My question would be:
Is it possible to maintain an exclude list in Work Folders so that adxloader.log is not synced? (Preferred variant, because it would also be usable for other similar cases.)
Or should I reroute the path of adxloader.log? (That would be suboptimal for me, because it would bend something in MS Office and might not be update-safe.)

Could anyone give me a hint?

Thanks for all.


Thanks and best regards, Oliver Richter

DFS namespace - Clients connect to random server


Hi,

I have created a namespace in my domain, \\domain\files, which points to a replicated share on 3 file servers (in our 3 sites). Now, when a client executes net use x: \\domain\files, it connects to what seems to me like a "random" server.

The 3 locations are recognized correctly in the DFS management for the 3 servers.

How can I diagnose further why that happens? What additional data do you need from me?

I want all clients to connect to their local file server (unless it's unavailable; then it's OK to use another one).

Thanks!

VJP IT

Command line or Powershell cmdlet to replace ownership on folder


Hi,

Is there a cmd or PowerShell cmdlet for the option below? I used takeown /R, but it does not take ownership of all subfolders. However, if I try with the GUI, I get no error message and it completes successfully. I have hundreds of user folders to delete. Please help.

 "Replace owner on subcontainers and objects"
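One command-line candidate for that checkbox is icacls with /setowner and /T, which sets the owner recursively on a folder and everything beneath it. A minimal sketch (in Python, only building the command string; the path and owner are placeholders, not from the original post):

```python
def build_setowner_command(path, owner="Administrators"):
    """Build an icacls command line that mirrors the GUI's
    "Replace owner on subcontainers and objects" option:
    /setowner changes the owner, /T recurses into all subfolders
    and files, and /C continues past per-file errors."""
    return f'icacls "{path}" /setowner "{owner}" /T /C'

# Hypothetical user folder about to be deleted:
cmd = build_setowner_command(r"D:\Users\jdoe")
```

For hundreds of user folders, the same builder can be called in a loop over the directory names and the resulting commands reviewed before running them.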

Thanks,

Umesh.S.K



