Channel: File Services and Storage forum

File in use when disconnected from wireless/wired network - Network Shared Drive


Hi,

We're having an issue where we are getting a "file in use" message when disconnected from the network. To replicate the issue, what we have been doing is: 

1) Connect to the wired network with wireless disabled

2) Create a test document

3) Enter some text and save to network share

4) Remove network cable

5) Add additional text

6) Reconnect network cable

7) Try to save file back to network share

Initially, clicking the X brings up the "Do you want to save?" message. Click Yes, and the message comes up again the next time you hit the X; hit No, and it loses the last change.

From here we go back into the network share and open the file, and it comes up with "File in Use. Open in Read-Only". The file is shown as open by us (the administrator account), and there is NO temp file in the shared folder.

Tried so far:

- Running net config server /autodisconnect:-1 to try and stop it disconnecting the share 

- Checked details pane, preview pane and turned off "show popup description for folder and desktop items"

- Updated Windows

We're reproducing this on a wired connection as it's easier to trigger than wireless disconnections. It seems to affect multiple OS versions. The folder is shared from a Windows Server 2008 R2 x64 machine.

Is this a known issue, or is there something we can configure to prevent the file being locked after the disconnect? Ideally we want the drop to go unnoticed and cause no disruption to the user unless they try to save during the drop.
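
For reference, a sketch of the kind of check that can be run against the file server to find and clear the stale handle once this happens (FILESERVER and the ID are placeholders; openfiles.exe is built into Server 2008 R2, run from an elevated prompt):

# List files held open through the server's shares, to spot the stuck document
openfiles /query /s FILESERVER /fo table /v
# Force-close the stale handle using the ID reported by the query above
openfiles /disconnect /s FILESERVER /id 1234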

Thank you in advance for comments :)


Is it Normal to See a Backlog from Non-Authoritative to Authoritative Partner During Initial Synchronization?


I added SERVER3 as the third member of an existing DFSR replication group; the other two members are SERVER1 and SERVER2. While adding SERVER3, I indicated that I only wanted it to replicate with SERVER2, and the initial synchronization started. There is a backlog from sending member SERVER2 to receiving member SERVER3 that started at over 600,000 three days ago and has since decreased to just under 300,000. As that backlog continues to decrease, a backlog is growing in the opposite direction (smem SERVER3, rmem SERVER2); it stands at just over 17,000 and continues to grow slowly.
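
For reference, the backlog in each direction can be checked with dfsrdiag like this (the group and folder names are placeholders):

dfsrdiag backlog /rgname:"My Replication Group" /rfname:"My Folder" /smem:SERVER2 /rmem:SERVER3
dfsrdiag backlog /rgname:"My Replication Group" /rfname:"My Folder" /smem:SERVER3 /rmem:SERVER2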

It seems strange that there would be any backlog at all from SERVER3 to SERVER2, considering there was no pre-existing data on SERVER3.

Items at the top of the 17,000 count backlog aren't changing, and this seems to correspond with an error I'm seeing in the debug log:

[Error:9051(0x235b) OutConnection::TransportEstablishSession outconnection.cpp:449 7512 C The content set is not ready]

My interpretation of this is that SERVER3 is waiting for the initial sync to complete before sending anything back to SERVER2.

Any ideas?

Idea: Storage Spaces Tier & Resiliency Settings


Is anyone else experimenting with Storage Spaces and finding that the tier should be where the resiliency settings are defined?

It would make sense to have tiers with various resiliency settings:
(Write-Intensive SSD) - Raid 1/10
(Write-Intensive SSD) - Raid 5/6/50/60
(Read-SSD) - Raid 1/10
(Read-SSD) - Raid 5/6/50/60
(7K HDD) - Raid 1/10
(7K HDD) - Raid 5/6/50/60

Command might look like this:
New-StorageTier -StoragePoolFriendlyName "Archive Storage Pool" -FriendlyName "(Write-Intensive SSD) - Raid 5/6/50/60" -MediaType SSD -ResiliencySettingName Parity -PhysicalDiskRedundancy 2

Maybe one would also need to specify which disks in the pool to use for this tier, similar to adding disks to the storage pool, though they would have to be a subset of the disks already in the pool.

Then you could mix multiple tiers within a virtual disk to meet various performance or reliability/capacity/usage requirements. Data that is not being used could move down through the tiers to ultimately land on the slower, capacity-oriented tier.
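
Something like this sketch is what I have in mind, reusing the existing cmdlets (the tier, pool and disk names are just examples):

# Grab two tiers already defined in the pool and carve a tiered virtual disk across them
$ssdTier = Get-StorageTier -FriendlyName "(Write-Intensive SSD) - Raid 1/10"
$hddTier = Get-StorageTier -FriendlyName "(7K HDD) - Raid 1/10"
New-VirtualDisk -StoragePoolFriendlyName "Archive Storage Pool" -FriendlyName "Dept-Data" -StorageTiers $ssdTier,$hddTier -StorageTierSizes 200GB,2TB -WriteCacheSize 1GB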

If Tier 1 fills up, data could be written to HDDs with RAID 1/10 and still get better performance than a standard HDD with RAID 5 or RAID 6, and then later, as the data ages, it could be moved to RAID 5/6/50/60.

I have also heard that parity is not supported with tiering, but when I tried it via the command line there was no objection to the configuration, so perhaps that is either old information floating around or it is just the GUI that does not offer parity with tiering?

Just some thoughts.

Missing updated information in a file


Windows 7 workstation, Windows Server 2012

Issue:

Workstation A updates a file and saves it to a shared folder on the server; Workstation B opens the file from the server and does not see the updates that Workstation A made. This is intermittent. Other workstations on the network do not have this issue.

Testing:

1) Verified both workstations and user logins are pointing to the same location.

   a) Created a file in shared location from each workstation and each workstation can see the new file and access the file

2) Workstation A copies the file to its desktop and deletes the file from the server, and Workstation B can't see the file.

    Workstation A then tries to copy the file back to the shared location and gets a message asking whether to overwrite the existing file. Workstation A searches the shared folder and cannot find the file. Workstation B now sees the file that Workstation A deleted; however, Workstation A still cannot find it.
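
One thing worth checking, in case client-side caching on the share is behind the stale reads, is something like this on the 2012 server (the share name is a placeholder):

# See whether offline-files caching is enabled for the share
Get-SmbShare -Name "Data" | Select-Object Name, Path, CachingMode
# If it is, caching can be turned off for that share while troubleshooting
Set-SmbShare -Name "Data" -CachingMode None -Force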

Any help or direction to look at would be appreciated.

Work Folders instead of DFS-R


I was interested to see whether I could use Work Folders instead of DFS-R to sync folders between three locations.

Site A (datacenter) would have a "Site B" folder and a "Site C" folder.

Site B would have the "Site B" folder, which replicates with Site A.

Site C would have the "Site C" folder, which replicates with Site A.

Users connecting to Site A would access files only from Site A, via the namespace.

Users in Site B would access the Site B folder locally and the Site C folder from Site A, both via the namespace path.

Users in Site C would access the Site C folder locally and the Site B folder from Site A, both via the namespace path.

I am curious whether there would be issues with file locking, as we have with DFS-R, which has no file-locking option. Not to mention that if a file change is saved and the information is sent to the Jet database to sync, and someone else modifies the file in the meantime, it can create problems that require IT to get involved.
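
For context, creating a sync share per site looks simple enough; a minimal sketch on Server 2012 R2 (the name, path and security group are placeholders):

# Work Folders sync share for the Site B data, limited to that site's security group
New-SyncShare -Name "SiteB" -Path "D:\SyncShares\SiteB" -User "CONTOSO\SiteB-Users"
Get-SyncShare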

Work Folders - How many dedicated servers do I really need for internet based access

I want to deploy Work Folders with external access. I have deployed it internally, which only requires one DC and one Work Folders/file server. Based on what I have been reading, I need to deploy four servers for internet-based access, which seems crazy for small environments. Is there any way to combine these functions onto fewer servers?

When attempting to connect to network shares, why am I being told to "Check the spelling of the name." with Error code: 0x80004005?


Last night I upgraded my machine to Windows 8 Pro RTM.

Everything seemed to be going well, but then I tried to access a network share.

I get a window that says:

____________
Network Error

Windows cannot access \\{server redacted}\{path redacted}\

Check the spelling of the name. Otherwise, there might be a problem with your network. To try to identify and resolve network problems, click Diagnose.

Error code: 0x80004005
Unspecified error

____________

 

Clicking "Diagnose" doesn't find any issues.

One of our networking guys had the same problem when he signed in to my machine. When I signed into his Windows 7 box I was able to access the network resources just fine - so it doesn't seem to be a user account issue.

I can access some network resources but not others. For example, I can connect to Exchange and to our TFS server, and I can even resolve the server name when I ping it. If I enter the IP of the server instead, I get the same error.
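
In case it helps, a couple of checks that can be run from the Windows 8 box to see whether an SMB session is being set up at all (the server name is a placeholder):

# Check for an existing SMB connection to the problem server
Get-SmbConnection -ServerName "FILESERVER"
# And the client-side SMB signing settings, in case they differ from the working Windows 7 box
Get-SmbClientConfiguration | Select-Object RequireSecuritySignature, EnableSecuritySignature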

My networking guys are stumped.


"We're all in it together, kid." - Harry Tuttle, Heating Engineer

Event ID 4502 -- DFSR


Hi,

We have customer servers with DFSR configured. Two folders, D:\Home and D:\Users, are under DFS Replication, and we were seeing event ID 4502 daily for both of them. We changed the staging folder paths to a different location (off the Home and Users folders); the new paths are D:\DFS\Home\Dfsrprivate\Staging and D:\DFS\Users\Dfsrprivate\Staging, and we set the staging quotas to 50 GB and 150 GB respectively. After this change we no longer see event ID 4502 for the D:\Home folder, but it continues for D:\Users. The DFSR diagnostic report shows the same thing, pointing to D:\Users.

On these servers FSRM is configured, with a 1 GB hard quota set on both D:\Home\* and D:\Users\*. Since we moved the staging folders off the default locations (under D:\Home and D:\Users), we thought that would resolve the issue, but no luck for D:\Users. Following the link below, I tried to run the commands to get the sizes of the 32 largest files, but they fail with "Access is denied", so I am not sure how to get that information. Please help me resolve this issue, and let me know what information is required.

https://blogs.technet.microsoft.com/askds/2011/07/13/how-to-determine-the-minimum-staging-area-dfsr-needs-for-a-replicated-folder/
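
This is roughly the pipeline from the linked article, run from an elevated PowerShell prompt on the member hosting D:\Users (elevation plus skipping unreadable folders should get around the Access denied failures):

# Sum of the 32 largest files under the replicated folder approximates the minimum staging quota DFSR needs
Get-ChildItem D:\Users -Recurse -Force -ErrorAction SilentlyContinue |
    Where-Object { -not $_.PSIsContainer } |
    Sort-Object Length -Descending |
    Select-Object -First 32 |
    Measure-Object -Property Length -Sum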

Event id details below:

Log Name:      DFS Replication
Source:        DFSR
Date:          2016-09-03 06:00:05
Event ID:      4502
Task Category: None
Level:         Warning
Keywords:      Classic
User:          N/A
Computer:      xxxxx.sna.wm.net
Description:
The DFS Replication service encountered errors replicating one or more files because adequate free space was not available on volume D:. This volume contains the replicated folder, the staging folder, or both. Please make sure that enough free space is available on this volume for replication to proceed. The service will retry replication periodically.
 
Additional Information:
Staging Folder: d:\DFS\Users\DfsrPrivate\Staging
Replicated Folder Root: d:\Users
Replicated Folder Name: Users
Replicated Folder ID: 4513DC92-29BB-4408-B785-D9A528EBA7F8
Replication Group Name: xxxx\shares\users
Replication Group ID: 6E2B8CFC-902D-4676-99FD-9C1C94E30386
Member ID: 81DA8BD9-2309-48EC-88BD-830E4CE3E1BD
Volume: DF7BBC15-BFD4-11DE-B6B2-005056914A38

-Umesh.S.K



Storage Spaces and 2 Samsung 840 SSDs


Hello Everyone,

I have an issue I need assistance with. On Server 2012 R2, Storage Spaces seems unable to differentiate between two SSDs of the same model. It sees them both, but if I set one to "Offline" it treats the other as offline as well while still listing it as online, and it only shows one of them in the pool, not both. If I put a different SSD in (an 830 series) it can differentiate the disks and create a pool. I have tried everything I could find online about this, but still no solution. For now I have switched to running them in a hardware RAID 1 off the LSI 2308 controller (performance is not as good).
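
A check that might show what is going on, comparing what the two 840s report to the storage subsystem (a duplicated UniqueId or serial number would at least explain the behaviour):

# If both drives report the same UniqueId, Storage Spaces cannot tell them apart
Get-PhysicalDisk | Select-Object FriendlyName, SerialNumber, UniqueId, CanPool, OperationalStatus | Format-Table -AutoSize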

Thanks

Increase in DEDUP RATE after migration. Why?


I had an old file server, Windows Server 2012 R2, with a 3 TB volume with dedup enabled; the average dedup rate was 49%-52% over the last three years. In 2016 the volume's free space kept going down: in January we had 10% free space, and as of last week it was down to less than 1% on the old file server.

We bought a new file server, Windows Server 2012 R2 with a 10 TB volume, and I've migrated all 5.5 million files this week.

Now the dedup savings rate is 72%!

If the files are the same on both volumes, how could the savings rate have grown so much?

Maybe because there is more disk space available?

What I was expecting was the same savings rate on both servers, because the set of files and folders is the same. But after a week preparing the migration, copying the files in full two weeks ago and doing incremental copies over the last five days, why is the savings rate so much higher than before?
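
For reference, the savings figures on each server can be pulled with the dedup cmdlets (a quick sketch, run locally on each box):

# Savings and file counts for each deduplicated volume
Get-DedupVolume | Format-List Volume, Capacity, FreeSpace, SavedSpace, SavingsRate
Get-DedupStatus | Format-List Volume, OptimizedFilesCount, InPolicyFilesCount, LastOptimizationTime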


Security permissions and ownership of the files are getting automatically removed for shared folder


Hi,

I have an issue with one of the folders: when creating a new file under the shared folder, or updating an existing one, a permission problem intermittently appears and the file becomes inaccessible because the ownership gets changed. Security permissions and ownership of the files are being removed automatically on the file share server.

Kindly suggest any pointers.

Regards,

Deepak S


Regards, Deepak Sharma

Duplicate a lun and present to new server=loss of NTFS permissions?


I have a disk that I am presenting to a 2008 R2 cluster. I duplicate that disk on the SAN, so I am basically presenting that exact same disk to a 2012 R2 server. I then lose access to all the folders on that disk, even though it shows I have the correct permissions at the root of the disk. Is there some kind of disk signature problem when you move a disk from a 2008 R2 server to a 2012 R2 server? If I seize ownership, it then shows the correct permissions from the old 2008 R2 server and I have access without having to make any changes to the ACL. Retaking ownership would not work, as it would take days because these are very large LUNs. Any ideas?

Please do not guess on this if you do not know; I am not looking for threads about move vs. copy NTFS permissions. Usually, even if you are using the default 500 account for ownership or for administration, I thought that would carry across to the new server, which would also use its default 500 account.

Thanks,


Dave




Moving shares with all permissions

Hello everybody. I have Server 2008 R2, which we use as a file server only. There is no Active Directory setup and no domain; the network is set up as a workgroup. I need to move all shares, with their permissions, from a small hard drive to a much bigger one, but leave the OS on the small drive. Is there a safe way to do it? Thank you for any help.
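
One possible approach (a sketch, assuming robocopy for the data and re-creating the shares afterwards; the paths are placeholders):

# Copy the data with NTFS permissions, owners and auditing information intact (run elevated)
robocopy D:\Data E:\Data /MIR /COPYALL /R:1 /W:1 /LOG:C:\Temp\share-move.log
# Export the share definitions so they can be recreated against the new drive (edit the paths in the .reg file before importing)
reg export "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares" C:\Temp\shares.reg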

Sam Goykhman

Windows 2008 R2 DFS replication


Hello All,

Which DFS Replication layout is the best solution to manage, back up, and provide redundancy with a namespace and multiple folder targets?

1. Replicate a single root folder (with all department folders and files), about 1 TB of data, to another server: a single replication group with multiple namespace folder targets.

2. Replicate multiple folders, each with its own folder target and replication; this way there will be multiple replication groups (see the sketch below).
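
For option 2, a rough sketch of one replication group per department folder using the DFSR PowerShell module (this assumes management from a 2012 R2 or Windows 8.1 machine with the module installed; the group, server and path names are placeholders):

# One group per department folder, replicated between the two file servers
New-DfsReplicationGroup -GroupName "RG-Finance"
New-DfsReplicatedFolder -GroupName "RG-Finance" -FolderName "Finance"
Add-DfsrMember -GroupName "RG-Finance" -ComputerName "FS01","FS02"
Add-DfsrConnection -GroupName "RG-Finance" -SourceComputerName "FS01" -DestinationComputerName "FS02"
Set-DfsrMembership -GroupName "RG-Finance" -FolderName "Finance" -ComputerName "FS01" -ContentPath "D:\Shares\Finance" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "RG-Finance" -FolderName "Finance" -ComputerName "FS02" -ContentPath "D:\Shares\Finance" -Force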

I faced issues earlier with single-root-folder replication: after the replication completed, all users started complaining that files were opening read-only. I had not added the second server into the namespace at that time.

Any advice?

Thanks

Prabodha


Network location - Ransomware


We access shared folders through mapped drives, but they were infected by ransomware. We want to use network locations as an alternative. Can those also be infected by ransomware?

Do we have other options? We cannot pay for a NAS.

Thanks in advance. 


Folder for Each domain user on a hard drive


Hello, 

I was wondering whether I can have a 6 TB HDD on my server, with each domain user getting around 500 GB and a folder that only they can use.
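
A rough sketch of how that could be scripted on Server 2012 or later, assuming the Active Directory and FSRM PowerShell modules are available (the OU, drive letter and paths are placeholders):

Import-Module ActiveDirectory
foreach ($user in Get-ADUser -Filter * -SearchBase "OU=Staff,DC=contoso,DC=local") {
    $path = "E:\UserData\$($user.SamAccountName)"
    New-Item -ItemType Directory -Path $path -Force | Out-Null
    # Strip inherited permissions and grant only this user (plus Administrators and SYSTEM) access
    icacls $path /inheritance:r /grant "$($user.SamAccountName):(OI)(CI)M" "Administrators:(OI)(CI)F" "SYSTEM:(OI)(CI)F" | Out-Null
    # Cap the folder at 500 GB with an FSRM hard quota
    New-FsrmQuota -Path $path -Size 500GB
}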

Thanks!

Server 2012 Filename Too Long


Hi

Recently I replaced a branch office server migrating the services from 2008 R2 to 2012 R2. One of the services is the File Server role. The server is a Hyper-V guest with a file storage area on its own .vhdx. When I migrated the files from the old 2008 R2 server, rather than reconnect the old .vhd, I decided to create the new one as a .vhdx and use DFSR to replicate the folders and files.

The migration occurred without a hitch and the old server has been decommissioned. However, it has come to light that files on long file paths that could be opened on the 2008 R2 server can no longer be worked with on the 2012 R2 server.

No matter whether we try to open, copy, move or even delete the file, we get a "Filename too long" error.

Now, the file paths are admittedly too long. For example, the file the office is particularly interested in accessing is 283 characters in length including the file path. I've advised them to rename folders further up the tree to bring it under 255 characters, but I'm interested in finding out why the files were accessible on the 2008 R2 server and not on the 2012 R2 server.
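
For reference, something like this will enumerate the paths that exceed the limit (cmd's dir handles long paths even where Explorer struggles; the drive and folder are placeholders):

# Full paths on the data volume that exceed the classic 260-character MAX_PATH limit
cmd /c "dir D:\Data /s /b" | Where-Object { $_.Length -ge 260 } | Sort-Object Length -Descending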

Any insight would be greatly appreciated.

David

Limit SMB for domain logged user only


Hi.

We want to deploy an SMB share on one server that only domain-logged-on users can access.

If a user who is not logged on with a domain account tries to access it, the server should deny access without prompting for a domain username and password. Any user who did not log on to their desktop with a domain account, whether or not they have a domain username and password, should be unable to access this share.
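
If the server is 2012 or later, a sketch of restricting the share ACL to domain accounts only might look like this (the share and group names are placeholders; this controls authorization, it does not by itself suppress the credential prompt for non-domain users):

# Remove Everyone from the share ACL and grant access to domain users only
Revoke-SmbShareAccess -Name "DeptShare" -AccountName "Everyone" -Force
Grant-SmbShareAccess -Name "DeptShare" -AccountName "CONTOSO\Domain Users" -AccessRight Change -Force
Get-SmbShareAccess -Name "DeptShare"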

Is that possible?

Thanks.

Shared Folders Session Super Slow, even Freezing, Server 2012R2

Hello, I am using Windows Server 2012 R2 as a scale-out file server. When I navigate to Computer Management / System Tools / Shared Folders / Sessions, the listing populates extremely slowly and freezes, and I can barely interact with the console at all because of the crawling speed. What could be causing that? I have over 100 sessions on this Hyper-V-enabled SOFS server, but I would assume that should not be a problem for Windows Server 2012 R2 Datacenter edition.
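
As a possible workaround while this gets figured out, the same session list can be pulled from PowerShell on the SOFS node, which may respond better than the MMC snap-in:

# Sessions sorted by number of open handles (run elevated on the file server node)
Get-SmbSession | Select-Object ClientComputerName, ClientUserName, NumOpens, SecondsIdle | Sort-Object NumOpens -Descending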

Modify rights on user folders

We built a new server for our network and moved our user folders over. Everyone can access their folders, but nobody can modify anything. Each folder has the Modify, Read & Execute, List Folder Contents, Read and Write permissions checked, but when opening a document to modify it, the changes will not save and a read-only error box is displayed. Is this a problem inherited from the top-level folder?
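
A quick sketch of what could be checked, since the effective access is the more restrictive of the share and NTFS permissions (the folder path and share name are placeholders):

# NTFS ACL actually applied on one user folder, including inherited entries
icacls "D:\Users\jsmith"
# Share-level permissions on the new server (a Read-only share permission would explain the symptom)
net share Users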

