File Services and Storage forum

Server 2012 - Volume Shadow Copy Service Errors - ID 12292?


Hi,

I am getting this error:

"Volume Shadow Copy Service information: The COM Server with CLSID {463948d2-035d-4d1d-9bfc-473fece07dab} and name HWPRV cannot be started. [0x80070005, Access is denied.
]

Operation:
   Creating instance of hardware provider
   Obtain a callable interface for this provider
   List interfaces for all providers supporting this context
   Query Shadow Copies

Context:
   Provider ID: {3f900f90-00e9-440e-873a-96ca5eb079e5}
   Provider ID: {3f900f90-00e9-440e-873a-96ca5eb079e5}
   Class ID: {463948d2-035d-4d1d-9bfc-473fece07dab}
   Snapshot Context: -1
   Snapshot Context: -1
   Execution Context: Coordinator"

errors. The CLSID {463948d2-035d-4d1d-9bfc-473fece07dab} resolves to AppID {9D884A48-0FB0-4833-AB70-A19405D58616}

=> WTSnapshotProvider

The Server 2012 box has deduplication enabled, with McAfee AV and IBM Tivoli TSM backup software running. Are the last two likely to use shadow copies, as well as the OS?

=> Any ideas what the problem is, or which process is requesting this operation?

 

Under

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VSS\Providers

there are three Microsoft providers listed:

- Microsoft iSCSI Target VSS Hardware Provider
- Microsoft File Share Shadow Copy provider
- Microsoft Software Shadow Copy provider 1.0

NB: This is a VMware vCenter v5 VM, but NO migration services etc. are being requested; only the latest VMware Tools are installed.
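
Since 0x80070005 is a DCOM activation failure, one way to narrow down the caller is to enumerate what VSS itself sees and reproduce the failing query; a minimal diagnostic sketch using the built-in vssadmin tool (run elevated; the DCOM launch permissions for the AppID above can then be reviewed in dcomcnfg):

    # List all registered VSS providers (name, type, provider ID, version)
    vssadmin list providers

    # Reproduce the "Query Shadow Copies" operation from the error context
    vssadmin list shadows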

Thanks

 

 


How can I enable the Windows Recycle Bin for other drives when a file is deleted?


Hello.

I use a Windows 2008 R2 file server and share my drive D:, but when a user deletes a file from the shared drive, it doesn't move to the Recycle Bin for recovery.

How can I fix it?
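
There is no server-side Recycle Bin for files deleted over a network share; the usual substitute is Shadow Copies (Previous Versions) on the shared volume, so users can restore deleted files themselves. A minimal sketch, assuming D: is the shared volume and a 10% storage cap is acceptable:

    # Reserve shadow copy storage on D: (use "resize" if an association already exists)
    vssadmin add shadowstorage /for=D: /on=D: /maxsize=10%

    # Take a snapshot now; schedule this command in Task Scheduler for regular coverage
    vssadmin create shadow /for=D: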

Thank you.

FSRM does not show large files in report


I have recently set up a new Windows 2012 R2 file server and am now running some reports for the first time. I know there are large files present (a manual search for large files finds them), but they are not presented in the FSRM reports. Where should I start troubleshooting, and are there any specific logs I can look at?

I have set up a report for a specific folder containing large files via Storage Reports Management, but get nothing in the report, no matter what size I specify.
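
As a cross-check, the same report can be defined and run on demand from PowerShell with the FileServerResourceManager module that ships with FSRM on 2012 R2; the path and 100 MB threshold below are placeholders, not values from the post:

    # Module ships with the FSRM role services
    Import-Module FileServerResourceManager

    # Define a Large Files report on the target folder and run it immediately
    New-FsrmStorageReport -Name "LargeFilesCheck" -Namespace @("D:\Data") `
        -ReportType LargeFiles -LargeFileMinimum 100MB -Interactive

    # Check the job; finished reports land in C:\StorageReports\Interactive by default
    Get-FsrmStorageReport -Name "LargeFilesCheck"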

Any suggestions or ideas are welcome.

Thanks,

/M

Lost my home directory files


I have a server that stored the home directory files; I have since replaced it. It's now offline and sitting idle.

I had redirected users' My Documents to their home folders.

The strange thing was that when I browsed the folder, I saw a directory full of "My Documents" folders; the only way to know which user each belonged to was to look at its security properties.

Anyhow, I copied these files to the new home folders on the new server, and they now appear under their users' names (I notice mine has changed to "My Documents"), so all except mine are under the correct username.

The problem now is that my users are claiming much of their documents are out of date; this may have something to do with "roaming" user profiles, even though they are local.

I now have a big problem with lost files. I've searched and searched and can't find them, so I'm trying to revert to backup in the meantime.

Would anyone know what may have happened, or what is happening? I need to recover these files and prevent this from happening again.
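
One thing that may help when re-copying from the old server or from backup: a plain Explorer copy re-stamps ownership (which is likely why the folders lost their association with each user), whereas robocopy can preserve it. A hedged sketch; the share paths are placeholders:

    # /E copies all subfolders; /COPYALL preserves ACLs, owner and timestamps
    # (requires Backup Operators/Administrators rights on both ends);
    # /XO skips source files that are older than the destination copy
    robocopy \\oldserver\home$ \\newserver\home$ /E /COPYALL /XO /R:1 /W:1 /LOG:C:\home-copy.log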

Thank You,


nambi

Copy to Mapped DFS Parent Folder Fails


Hi,

I am running clustered 2012 R2 file servers using DFS Namespaces for simplified, consistent paths. The namespaces are domain-based, 2008-level. I would like to map a drive to \\domain\namespace\folder on users' computers, then use access-based enumeration to show them which folders they have access to. Mapping the drive works fine, and read access works normally, but when users drag and drop or copy-paste files onto the mapped drive, they receive the message:

"X:\ is unavailable. If the location is on this PC..."

I've replicated this on Windows 8, 8.1, and 10 and would probably have given up, except that the copies work fine when using the network path in Explorer (not the drive letter), a network location, or the copy command from CMD or PowerShell running as the same user. Users can also perform regular read-writes to files on the mapped drive. This makes me think it is not a permissions issue but maybe something at the Explorer level.

Any thoughts or insights would be appreciated. Accomplishing this another way is certainly on the table.

Thanks!
Matthew



DFS target folder removed from namespace, but users still accessing share


Hello,

I had two servers participating in a DFS namespace and recently added a third server to it. I'm now ready to decommission the shares on the first two servers, but I noticed that users are still accessing the files in those shares. I deleted those two servers' folder target information last week, so only the third server has been defined in the namespace since then.

The files in these shares get pushed to workstations via Group Policy, and I've verified that the Group Policy references them by namespace path, not by \\servername\sharename. I also restarted the DFS services on one of the offending servers, with no change.

Any ideas why these files are still being opened on servers that no longer participate in the namespace?

Servers are Windows 2008 R2.
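
One avenue worth checking on an affected workstation: the DFS client caches referrals for a while and can keep pointing at a removed target until the cache expires. A quick sketch with dfsutil (from the RSAT DFS tools):

    # Show the cached DFS referrals on this client
    dfsutil /pktinfo

    # Flush the referral cache so the client re-queries the namespace servers
    dfsutil /pktflush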

Thanks!

Dynamic disks in a storage pool


I need to examine a source machine in order to build a matching target disk configuration. I have no problems until I create a dynamic disk allocated within a storage pool, e.g.:

    '100 GB raw disk' -> Storage Spaces -> 10 GB dynamic disk -> Disk 5 Partition 1 -> F:

I need an API-based way to map all 'real' disks to their logical volumes in detail. For basic disks in a storage pool, I can map all the way from the 'raw' disk to the volume and can replicate it. For dynamic disks created within a storage pool, I can get to a MSFT_VirtualDisk easily enough, but the trail ends there. MSFT_VirtualDiskToDisk only lists basic disks; the dynamic disks are missing. There is not enough information within MSFT_VirtualDisk to relate it to either MSFT_Disk or Win32_DiskDrive. A query of MSFT_VirtualDiskToVirtualDisk is empty (no result rows). There must be a connection somewhere, as Disk Management sees the 'disk' come into existence.
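
For reference, the association walk described above looks roughly like this in PowerShell; as noted, the MSFT_VirtualDiskToDisk association only materializes for basic disks, which is exactly where the trail ends in the dynamic case:

    # The Storage Management API classes live in this CIM namespace
    $ns = "root/Microsoft/Windows/Storage"

    foreach ($vd in Get-CimInstance -Namespace $ns -ClassName MSFT_VirtualDisk) {
        # Follows MSFT_VirtualDiskToDisk; returns nothing for dynamic disks
        $disk = Get-CimAssociatedInstance -InputObject $vd -ResultClassName MSFT_Disk
        "{0} -> disk {1}" -f $vd.FriendlyName, ($disk.Number -join ", ")
    }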

Any ideas on how to make the last two jumps?  I cannot find anything useful in the Storage Management API (https://msdn.microsoft.com/en-us/library/windows/desktop/hh830612(v=vs.85).aspx). 

Thanks in advance,

Ed

PS - Yes, I know this is not the brightest disk configuration, but that is not my call to enforce, only to replicate.

Reputable JBOD vendors


With Server 2016 coming down the pipe (most notably Storage Replica), I'm going to attempt a Storage Spaces deployment. I've gone through the "supported" manufacturers in the MS catalogue and am drawn toward DataON. The major vendors (HP, Dell, Lenovo) are priced far too high; we may as well stick with SAN technology.

I'm just interested in who people are using and having good success with. Thanks.


Unable to browse DFSR shares / namespace


Hello

I have an interesting situation that I am trying to get to the bottom of

I have a DFS namespace set up, with several folder targets configured as part of that namespace. Let's call it \\contoso\dfs\shares.

I have two accounts: my standard domain user account and an administrative account.

If I browse to the DFS share via my admin account, I can see all shares:

\\contoso\dfs\shares\
 - Data 1
 - Data 2
 - Data 3
 - Data 4

If I browse to the DFS share via my user account, I can only see two of the four shares:

\\contoso\dfs\shares\
 - Data 1
 - Data 4

If I browse to the share directly using my user account, it works fine:

\\contoso\dfs\shares\data 3

 - Some files

I have confirmed that my user account has the following effective permissions on \\contoso\dfs\shares\:

Traverse folder / execute file
List folder / read data
Read extended attributes
Read permissions

I have confirmed that my user account has the same effective permissions on the Data 1, Data 2, Data 3, and Data 4 subfolders:

Traverse folder / execute file
List folder / read data
Read extended attributes
Read permissions

I don't understand what is stopping my user account from seeing the missing folders in the namespace when the permissions are the same. The folder targets are not hidden shares.

The folder targets are on different servers. I have checked the folder targets on each server and the permissions are set the same

I can confirm that if I browse to \\contoso\dfs\shares or \\contoso.microsoft.com\dfs\shares via my user account, the result is the same.
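
One possibility worth ruling out: when access-based enumeration is enabled on the namespace itself (as opposed to on the file shares), each DFS folder carries its own enumeration ACL, separate from the NTFS permissions on the folder targets. A sketch with the DFSN module; the account name is a placeholder:

    # Show who is allowed to *see* this DFS folder under namespace ABE
    Get-DfsnAccess -Path "\\contoso\dfs\shares\Data 3"

    # Grant the user (or better, a group) enumeration access to the DFS folder
    Grant-DfsnAccess -Path "\\contoso\dfs\shares\Data 3" -AccountName "CONTOSO\standard.user"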

Any assistance in relation to this would be appreciated

Convert Read-Only member on DFSR to read-write on Windows 2012


What happens when we convert a read-only DFSR member to read-write on Windows 2012?

Example:

- Server01 is read/write, and the DFS target points to that server.

- Server02 is a read-only member server which receives replication from Server01, but its target is disabled.

If we switch them (make Server01 read-only with its target disabled, and Server02 read/write with its target enabled), what happens? Which one will be the primary?
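
For reference, the read-only flag is a per-member setting that can be flipped with the DFSR PowerShell module (available from 2012 R2 / RSAT); a hedged sketch, with the group and folder names as placeholders. As far as I know, the "primary member" concept only applies to the initial sync of a replicated folder, so flipping the flags afterwards does not elect a new primary:

    # Make Server02 a read-write member of the replicated folder
    Set-DfsrMembership -GroupName "RG01" -FolderName "Data" -ComputerName "Server02" -ReadOnly $false

    # Make Server01 read-only
    Set-DfsrMembership -GroupName "RG01" -FolderName "Data" -ComputerName "Server01" -ReadOnly $true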

"The sync server needs the user's current user name and password"


Hi,

I'm trying to publish Work Folders using Web Application Proxy, based on this guide.

When trying to set up Work Folders on Windows 8.1 clients (one domain-joined, one in a workgroup), I get the following error:

 [Window Title]
Work Folders

[Main Instruction]
There was a problem finding your Work Folders server

[Content]
Make sure that you have the correct Work Folders URL. If this problem persists, email your organization's tech support.

[^] Hide details  [Close]

[Expanded Information]
The sync server needs the user's current user name and password.
 (0x80c80300)

In the Web Application Proxy/Admin event log on the Web Application Proxy server I see the following logged at the time of the failed Work Folders configuration attempt:

Web Application Proxy received a request with a nonvalid edge token. The token is not valid because it could not be parsed.
Error: Edge Token validation failed. Failed to serialize JSON object. Exception: The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters. . Token: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6IlBsLU52VDhVRmNVaExURlNaWDhDdjh4UlRMbyJ9.eyJhdWQiOiJodHRwczovL1dpbmRvd3MtU2VydmVyLVdvcmstRm9sZGVycy9WMSIsImlzcyI6Imh0dHA6Ly9zdHMubGFiLmNyYXlvbi5jb20vYWRmcy9zZXJ2aWNlcy90cnVzdCIsImlhdCI6MTM4ODYyODgyNywiZXhwIjoxMzg4NjMyNDI3LCJ1cG4iOiJhZG1famFucmluZ0BjcmF5b25sYWIubm8iLCJ1bmlxdWVfbmFtZSI6ImFkbV9qYW5yaW5nIiwiZ2l2ZW5fbmFtZSI6ImFkbV9qYW5yaW5nIiwiaXNyZWdpc3RlcmVkdXNlciI6InRydWUiLCJkZXZpY2VpZCI6ImVmMTdiNmY5LTZkODctNDMwNS1iNjEwLTU0MzQwZTlmMWZkYSIsImF1dGhtZXRob2QiOiJodHRwOi8vc2NoZW1hcy5taWNyb3NvZnQuY29tL3dzLzIwMDgvMDYvaWRlbnRpdHkvYXV0aGVudGljYXRpb25tZXRob2Qvd2luZG93cyIsImF1dGhfdGltZSI6IjIwMTQtMDEtMDJUMDE6NTA6NDQuMzQyWiIsInZlciI6IjEuMCIsImFwcGlkIjoiMTY4RjNFRTQtNjNGQy00NzIzLUE2MUEtNjQ3M0Y2Q0I1MTVDIn0.ksBY9ESYGdOvtciexoM_0ow4Rds3bCer0wuJZmtP-c2JXjXIgVzWGu-F1Tg8zNuB1k9b9FzDq9ulUNjw6_KKsnuWKNpYl4HF3ahlLbjppoe_vIg48_l6Gkswk4NQUQ4xqZjlA_huB0FOPkrUfx_-oeZ5jYy7kk26TwyqGAx63Fa5EowJozlC3hkQrfEdVU_7hTM0cZnHcI9jcj5Ga5rjoA472aM82ZYA5JvzdYIzEPGOEVsP9DvPm_z-PUMMbnAsdKk77ZJtoz1q6_IKDOAiQhEIONS2d11m6lhcqQNt8aX43yi74ToCDhEuM2I3Ij97fJbh669kWQmOQqZ5xHxnDQ
Received token: eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6IlBsLU52VDhVRmNVaExURlNaWDhDdjh4UlRMbyJ9.eyJhdWQiOiJodHRwczovL1dpbmRvd3MtU2VydmVyLVdvcmstRm9sZGVycy9WMSIsImlzcyI6Imh0dHA6Ly9zdHMubGFiLmNyYXlvbi5jb20vYWRmcy9zZXJ2aWNlcy90cnVzdCIsImlhdCI6MTM4ODYyODgyNywiZXhwIjoxMzg4NjMyNDI3LCJ1cG4iOiJhZG1famFucmluZ0BjcmF5b25sYWIubm8iLCJ1bmlxdWVfbmFtZSI6ImFkbV9qYW5yaW5nIiwiZ2l2ZW5fbmFtZSI6ImFkbV9qYW5yaW5nIiwiaXNyZWdpc3RlcmVkdXNlciI6InRydWUiLCJkZXZpY2VpZCI6ImVmMTdiNmY5LTZkODctNDMwNS1iNjEwLTU0MzQwZTlmMWZkYSIsImF1dGhtZXRob2QiOiJodHRwOi8vc2NoZW1hcy5taWNyb3NvZnQuY29tL3dzLzIwMDgvMDYvaWRlbnRpdHkvYXV0aGVudGljYXRpb25tZXRob2Qvd2luZG93cyIsImF1dGhfdGltZSI6IjIwMTQtMDEtMDJUMDE6NTA6NDQuMzQyWiIsInZlciI6IjEuMCIsImFwcGlkIjoiMTY4RjNFRTQtNjNGQy00NzIzLUE2MUEtNjQ3M0Y2Q0I1MTVDIn0.ksBY9ESYGdOvtciexoM_0ow4Rds3bCer0wuJZmtP-c2JXjXIgVzWGu-F1Tg8zNuB1k9b9FzDq9ulUNjw6_KKsnuWKNpYl4HF3ahlLbjppoe_vIg48_l6Gkswk4NQUQ4xqZjlA_huB0FOPkrUfx_-oeZ5jYy7kk26TwyqGAx63Fa5EowJozlC3hkQrfEdVU_7hTM0cZnHcI9jcj5Ga5rjoA472aM82ZYA5JvzdYIzEPGOEVsP9DvPm_z-PUMMbnAsdKk77ZJtoz1q6_IKDOAiQhEIONS2d11m6lhcqQNt8aX43yi74ToCDhEuM2I3Ij97fJbh669kWQmOQqZ5xHxnDQ

Details:
Transaction ID: {5a8eb154-06f4-0001-b1c3-8e5af406cf01}
Session ID: {5a8eb154-06f4-0001-b1c3-8e5af406cf01}
Published Application Name: WorkFoldersDiscovery
Published Application ID: EC64138D-1A34-CCF6-D02B-274FDB20D556
Published Application External URL: https://workfolders.contoso.com/
Published Backend URL: https://workfolders.contoso.com/
User: <Unknown>
User-Agent: MS_WorkFoldersClient
Device ID: <Not Applicable>
Token State: Invalid
Cookie State: NotFound
Client Request URL: https://workfolders.contoso.com/sync/1.0/discover/serverurl
Backend Request URL: <Not Applicable>
Preauthentication Flow: PreAuthWindowsStoreApp
Backend Server Authentication Mode:
State Machine State: Idle
Response Code to Client: <Not Applicable>
Response Message to Client: <Not Applicable>
Client Certificate Issuer: <Not Found>

For testing purposes I put workfolders.contoso.com in the hosts file of the test clients, pointing to the IP of the Web Application Proxy. If I remove the hosts file entry, I can configure Work Folders without any issues, so Work Folders on the file server and authentication on the AD FS server are working fine.
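
Since the log shows the edge token failing to parse during the AD FS preauthentication flow, one experiment (a sketch, not a confirmed fix) is to republish the Work Folders application with pass-through preauthentication so that edge token handling is bypassed; the URLs are the ones from the log, and the certificate thumbprint is a placeholder:

    # On the Web Application Proxy server: remove the existing publication...
    Get-WebApplicationProxyApplication -Name "WorkFolders" | Remove-WebApplicationProxyApplication

    # ...and republish it without ADFS edge preauthentication
    Add-WebApplicationProxyApplication -Name "WorkFolders" `
        -ExternalPreauthentication PassThrough `
        -ExternalUrl "https://workfolders.contoso.com/" `
        -BackendServerUrl "https://workfolders.contoso.com/" `
        -ExternalCertificateThumbprint "<thumbprint>"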

Any idea what might be the problem?


Storage Tier - "There are no groups of available disks"


So I've just physically inserted four brand-new disks into a Dell server. I can see them as ready in Dell OpenManage, but when I go to create a storage pool, it's not picking up my primordial disks.

Do I need to go into the hardware SATA/RAID controller and configure them, maybe as a one-disk RAID 1 for each disk or something?
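
Before touching the controller, it may be worth checking what the storage subsystem thinks of the new disks; Storage Spaces only offers disks that report as poolable, which on most RAID controllers means pass-through/non-RAID (or single-disk virtual disk) presentation with no existing partitions. A quick check:

    # CanPool tells you if the disk is eligible; CannotPoolReason explains why not
    Get-PhysicalDisk | Select-Object FriendlyName, Size, CanPool, CannotPoolReason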

Publish file server to the Internet


Hi,

I have two file servers with DFS (Namespaces and Replication).

I want to give my users access to their files via the Internet.

How can I do it?

Can I use another server with only the Namespace role and publish it to the Internet?

Deduplication error regarding Bloom filter: "The index is filled to capacity"


I started getting this error:

Error: Bloom filter initialized for 15427113 elements is full, 0x80565325, The index is filled to capacity.

It then went away over the weekend; I'm a little concerned as to what it means. I have logged a call with PS, but wanted anyone's thoughts on it.

It's running on a 40 TB Dell NX PowerVault with Storage Server 2012. We fire our monthly file server backups at it (2-day restore jobs from DPM), and dedup then compacts it all very efficiently.
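
Since the message concerns an internal dedup index, a full garbage collection pass over the chunk store may be worth trying while waiting on support; a sketch, assuming D: is the dedup volume:

    # Kick off a full dedup garbage collection and watch its progress
    Start-DedupJob -Volume "D:" -Type GarbageCollection -Full
    Get-DedupJob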


Mark

File server share permission management


Hello!

We have a server based on Windows Server 2012 with the File Server role installed.

We have a shared folder containing many other folders. Users see only those folders that are available to them, because of a setting (File and Storage Services -> Shares -> Settings -> Enable access-based enumeration).

So when I give a user permission to some folder in this share, he only starts to see the folder after a log off/log on or a reboot.

I want to do this without a log off or reboot. How can I make it take effect faster, without interrupting the user?
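
If the permission is granted via a group, the user's logon token genuinely has to be rebuilt, which is what the log off achieves. When the permission is granted to the user account directly, forcing the SMB session closed on the server (and purging Kerberos tickets on the client) sometimes makes the new view appear without a full logoff; a hedged sketch, with "jsmith" as a placeholder user:

    # On the server: drop the user's SMB sessions; the client reconnects transparently
    Get-SmbSession | Where-Object ClientUserName -Like "*jsmith*" | Close-SmbSession -Force

    # On the client (in the user's session): discard cached Kerberos tickets
    klist purge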

Thank you!


Two 2012 R2 servers can't "see" a share from each other


Hello,

We have a strange problem with 2012 R2 servers (located in the same VLAN, normally without restriction between them).

- Server A can see all the shares from server C, server C can see the shares from server A

- Server B can see all the shares from server C, server C can see the shares from server B

- Server A CANNOT see the shares from server B, AND server B cannot see the shares from server A!

I have tried to access them by IP address; same symptom.
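
For a two-directional failure like this between one specific pair, it helps to first confirm raw SMB reachability both ways before digging into SMB signing or security settings; a quick sketch to run from server A (and then the reverse from server B):

    # Is server B reachable on TCP 445 at all?
    Test-NetConnection -ComputerName "ServerB" -CommonTCPPort SMB

    # Can an SMB session be established and shares enumerated?
    net view \\ServerB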

Any idea or suggestion?

Thanks already to everybody.
Serge



Large number of open file handles on file server (Server 2012 R2) after user's RDS session has long been terminated through log off


Hi,

we have an RDS farm with roaming profiles (not profile disks) consisting of six terminal servers ("srvts001" to "srv006", 2012 R2), two domain controllers ("srvdc01", "srvdc02", 2012 R2), and one SQL/file server ("srvsql01", 2012 R2).

Roaming profiles are stored on srvsql01 in the share RDS. Oplocks are disabled, and directory caching is disabled.

Everything ran without problems for one or two years. Recently (about two weeks ago), users began complaining about being logged on with a temporary profile. Investigation showed a lot of open file handles on the file server from previous (no longer existing) sessions.

E.g., user1 logged on to RDS one day and logged off in the evening (not just disconnected, really logged off), and when he logs on to the RDS farm the next day, he gets a temporary profile because the profile service is unable to access the roaming profile, as it is (per the event log) "in use by another process".

Looking at the file server's open files, we can see that a lot of files from that user's profile (and from other users, too!) are shown as open, all with the read option and no apparent locks.

When user1 logs off, these locks stay! Manually closing the open files on the file server allows the user to log on to the RDS servers normally. Subsequent log offs may or may not create those stale open file handles.
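
For what it's worth, that manual cleanup can be scripted so affected users can at least log on while the root cause is hunted down; a sketch using the SmbShare cmdlets, with the profile share name from above and "user1" as the example user:

    # Find and force-close stale handles under the user's roaming profile
    Get-SmbOpenFile | Where-Object Path -Like "*\RDS\user1*" | Close-SmbOpenFile -Force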

We have no clue as to when these enormous numbers of open file handles (we're talking hundreds of files per user) appear. It seems to be random and does not hit every user.

Has anyone ever met a similar problem? Or at least an idea on how to prevent logged-off users from still having open files on the file server?

Any answer pointing in the direction of a possible solution or source of this problem is greatly appreciated!

Windows Search not working with Data Deduplication?


Hi,

I noticed that many files were missing when searching for them on my Server 2012 file server.

After some troubleshooting, I noticed that the missing ones had a Size on Disk of 4 KB and the "SparseFile" and "ReparsePoint" (PL) flags set.

So it looks like they were processed by the enabled Data Deduplication.
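
A quick way to confirm the correlation is to list which files carry the reparse-point attribute and compare that with the volume's dedup status; a sketch, assuming the data lives on D::

    # Overall dedup state of the volume (optimized file count, saved space)
    Get-DedupStatus -Volume "D:"

    # Deduplicated files show up with the ReparsePoint attribute set
    Get-ChildItem "D:\Shares" -Recurse -File |
        Where-Object { $_.Attributes -band [IO.FileAttributes]::ReparsePoint }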

Am I missing something here, or is it really the case that deduplicated files cannot be indexed by Windows Search?

DFS - The namespace cannot be queried. Element not found.


I recently ran dcpromo on a DC to demote the server. The DC had been replicating with a second DC, which could pull up and view the DFS shares in DFS Management, as could a file server that also hosted the shares configured in DFS.

Since demoting the first DC, whenever I try to pull up the DFS manager on any of these servers, I am able to see the namespace, but when I select it, I get the error - “The namespace cannot be queried. Element not found.”

How can I restore the namespace? It’s on a test network that does not have backups.

I only have a few shares configured within DFS; would it make more sense to just rebuild DFS? I've already tried rebuilding with the same name, but can't, as the name already exists. Could rebuilding cause a problem with my SYSVOL and NETLOGON shares?
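
If rebuilding does turn out to be the only option, the stale namespace registration normally has to be removed before the name can be reused; a cautious sketch with dfsutil (\\contoso\dfs is a placeholder path). SYSVOL and NETLOGON are managed by AD replication rather than by DFS Namespaces, so removing a user namespace should not affect them:

    # Export whatever namespace metadata is still readable, for reference
    dfsutil root export \\contoso\dfs C:\dfs-backup.xml

    # Remove the damaged domain-based namespace so the name can be recreated
    dfsutil root remove \\contoso\dfs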

Mount Points on Non-Clustered Server


A common volume design for servers running the Failover Clustering component is to use mount points instead of exhausting the 26 available drive letters. The clustering software allows you to define volume dependencies to ensure root volumes are online before their mount points. I am curious about the boot order on non-clustered servers with regard to drives and mount points, as there doesn't seem to be a way to define volume dependencies for volumes assigned as mount points.

Drives:

C:  OS
D:  Data
E:  pagefile
F:  mount root #1
G:  mount root #2

Mounts:

F:\mounts\volume5
F:\mounts\volume6
F:\mounts\volume7
F:\mounts\volume8

G:\mounts\volume9
G:\mounts\volume10
G:\mounts\volume11
G:\mounts\volume12

The overall goal is to provide a drive letter for each SQL instance installed on a consolidated server, allowing for secondary volumes (data1, data2, logs, tempdb, backup, etc.). Since this is a consolidated SQL server hosting multiple development instances, performance isn't a major concern, but I'd like to configure individual volumes to cut down on disk queuing and the related latency that would occur if we combined the SQL data, log, and tempdb files.
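
For reference, on a standalone server a mount point is just an extra access path on a partition, and as far as I can tell Windows simply brings all local volumes online at boot with no dependency ordering outside of clustering; mounting volume5 from the layout above looks like this (disk and partition numbers are placeholders):

    # The mount folder must exist (and be empty) on the root volume
    New-Item -ItemType Directory -Path "F:\mounts\volume5" -Force | Out-Null

    # Attach the volume to the folder instead of a drive letter
    Add-PartitionAccessPath -DiskNumber 5 -PartitionNumber 1 -AccessPath "F:\mounts\volume5"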

Any and all thoughts are appreciated!

Thanks,

-chadwic
