Channel: File Services and Storage forum

Do I need a server OS? Do I need Active Directory?


Hello all! I just moved into a new place and am preparing to set up my network. I have around 2-3 desktops, 2-3 laptops, 1 tablet and 1 phone. Then of course there will be guests as well.

I am trying to decide on a couple of things: should I use a Windows Server OS, or just a normal Windows 7 PC with file sharing and so on set up?

If I use Windows Server should I use Active Directory?

Which would be the better option for Windows Server if I do use it? I have access to Server 2008 and Server 2012. Personally I don't like Windows 8, and from what I understand Server 2012 is similar in its design, but if it's really much better I'll learn it.

Sorry for so many questions but this is the first time that I've lived on my own and am able to do anything I want instead of having to worry about parents/room-mates and their PCs.

Thanks in advance!

-Mike



index $0 for file 25 during every chkdsk


Every time chkdsk runs I get the error "index $0 for file 25". This is a large file server, so we really cannot bring the server down often. A couple of weeks ago the power company scheduled a power outage to replace equipment, so we brought everything down except this file server. I connected it to a generator and let chkdsk run. It ran for about 5 hours: it showed "deleting file 25" for about an hour, then "writing file 25" for about 2 hours, then continued to run with no more errors. After it finished, I shut down the server and waited for the power to be restored. I brought the server back online and everything seemed fine. Since then I have been receiving several errors: "Ntfs (55) - The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume Internal Storage Array."

Server 2008 R2 Enterprise
dual 2.27 GHz Xeon processors
Dell R510 server 
PERC RAID array 18.1 TB 
RAID 6
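Since the Ntfs (55) events say corruption came back after the repair pass, it may also be worth checking the RAID 6 array itself (a PERC consistency check) before running another repair. A minimal sketch of the file-system side, assuming the "Internal Storage Array" volume is E: (a placeholder) and an elevated prompt:

rem Has NTFS flagged the volume dirty?
fsutil dirty query E:
rem Read-only check first; this changes nothing on disk:
chkdsk E:
rem Full repair only inside a maintenance window, with a verified backup:
chkdsk E: /f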

A timeout (30000 milliseconds) was reached while waiting for a transaction response from the VSS service.


When I run "vshadow -ap d: f: e: h:", it displays the error VSS_E_MISSING_DISK:
Returned HRESULT = 0x800423f8

In the Windows logs there are two errors:
1. Event 12362, VSS
   An expected hidden volume arrival did not complete because this LUN was not detected.
    LUN ID   {1ad243aa-d6d9-4537-82f6-a1e2884e1b7a}
    Bus Type  0x0000000000000009
    Version  16
    Identifier Count 1
 Identifier  0
 CodeSet  "VDSStorageIdCodeSetBinary" (1)
 Type  "VDSStorageIdTypeFCPHName" (3)
 Byte Count 16
 
   Operation:
   Exposing Volumes
   Locating shadow-copy LUNs
   PostSnapshot Event
   Executing Asynchronous Operation

Context:
   Current State: DoSnapshotSet


2. Event 7011, Service Control Manager
   A timeout (30000 milliseconds) was reached while waiting for a transaction response from the VSS service.


Questions: 1. What does the timeout (30000 milliseconds) mean? From which interface to which interface does this timeout apply?
           2. In what scenario does Event 12362 ("an expected hidden volume arrival did not complete") occur?
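Not an authoritative answer, but two things may help narrow this down. The 30000 ms in event 7011 is the Service Control Manager's default wait for the VSS service to answer a control/transaction request; event 12362 appears to say that the shadow-copy LUN the provider was expected to expose never arrived, which points at the storage/provider side rather than the timeout itself. A sketch of basic checks, plus the documented Service Control Manager timeout workaround (the registry value name is real; the 60000 ms value is only an example):

rem Check writer and provider health first:
vssadmin list writers
vssadmin list providers
vssadmin list shadows
rem Optional workaround for event 7011 only (raise the SCM timeout to 60 s; a reboot is required):
reg add HKLM\SYSTEM\CurrentControlSet\Control /v ServicesPipeTimeout /t REG_DWORD /d 60000 /f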

Trouble with File Auditing events in shared folder


Hi Everybody!

I am trying to audit access to a specific folder inside a shared folder.

On the machine, file system auditing has been enabled as shown below:

The folder itself has the following permissions ('List folder / read data' is the important one).

This generates event 4663 if the folder is opened from the local machine, which is great.

The problem is, if the folder is opened from a remote machine, no event is logged.

Does anyone know why this could be occurring?  Is there some setting that also needs to be enabled?
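A hedged sketch of things worth confirming on the file server itself (not the client). Object-access events are only logged by the machine that hosts the files, and access over SMB also falls under the separate "File Share" / "Detailed File Share" subcategories (events 5140/5145), which can be enabled independently of "File System":

rem Show the effective policy on the file server:
auditpol /get /category:"Object Access"
rem Enable the subcategories relevant to local and network access:
auditpol /set /subcategory:"File System" /success:enable /failure:enable
auditpol /set /subcategory:"Detailed File Share" /success:enable /failure:enable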

Thanks


formatting a volume within disk management causes instability and host crash


We have a 3-node 2012 Cluster that is currently non-production.  We are using MPIO, the default Microsoft DSM, CSVs, Failover Clustering, an EMC VNX5400, and FCoE. 

The LUN is instantly recognized by the Windows Hosts.  Initialization seems to be fine.  Formatting (to create a CSV) causes disk management to freeze and eventually the host will crash.
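A possible sketch of pre-format checks on one node, since the cluster is non-production (it assumes the Microsoft DSM has claimed the VNX LUN; nothing here is specific to the VNX5400):

rem List MPIO-claimed disks and their path counts:
mpclaim -s -d
rem Run only the storage portion of cluster validation against the new LUN:
powershell -Command "Test-Cluster -Include 'Storage'"
rem Review recent System log entries around the time of the hang:
wevtutil qe System /c:50 /rd:true /f:text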

Storage Spaces - volume with deduplication always flagged as "online filesystem check needed"


Hi,

I am using Storage Pools, and we have quite a few volumes on them.

I am seeing a warning to run an online check on two volumes where dedup is enabled.

If I run the check, all is fine for about 10-30 minutes, and then the volume is flagged to be checked again. It never says it needs any repair.

The odd thing is, other volumes without DEDUP don't show this behaviour. 
What could cause this error?

- There is enough free disk space.

- The logs don't tell much. We have quite a few low-disk-space entries, but since there are DPM volumes on this machine, those warnings reference the DPM volumes. (Note: dedup is not enabled on the DPM volumes, as it is not supported.)

So basically: why are the deduplicated volumes (successfully deduped, with over 40% savings) always in a state that needs a check?
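A minimal sketch of what one might check, assuming the flagged volume is E: (a placeholder) on Server 2012 R2; the idea is to see whether NTFS keeps re-raising the corruption flag or whether the dedup chunk store itself wants scrubbing:

rem Is the volume marked dirty at the NTFS level?
fsutil dirty query E:
rem Online scan without taking the volume offline:
powershell -Command "Repair-Volume -DriveLetter E -Scan"
rem Ask Dedup to validate its chunk store and report any corruption:
powershell -Command "Start-DedupJob -Volume E: -Type Scrubbing"
powershell -Command "Get-DedupStatus -Volume E: | Format-List *"

If the flag only returns after the scrubbing job runs, the Deduplication and Chkdsk event logs would be the next place to look.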

Thanks

Patrick

DFS is not working anymore.


DFS was configured and working fine on Server 2008 STD, but now it's no longer replicating.  Here are some of the errors I am seeing:

---------------------------------------------------------------------------------------
- Due to the following error, the DFS Replication reporting mechanism cannot access the WMI (Windows Management Instrumentation) namespace to retrieve certain reporting information. Error ID: 0x80041002.

- DFS Replication cannot replicate with partner <server name> for replication group <domain>\<name space>\<share>. The partner did not recognize the connection or the replication group configuration. The DFS Replication service used partner DNS name <server name>, IP address <the server ip>, and WINS address<server name> but failed with error ID: 9026 (The connection is invalid). Event ID: 5012

AND

- The DFS Replication service is stopping communication with partner <server name> for replication group Domain System Volume due to an error. The service will retry the connection periodically.
 
Additional Information:
Error: 1726 (The remote procedure call failed.)
Connection ID: 580D7FC3-873F-48CC-AFC1-73E96DFADCE2
Replication Group ID: ACA5FC8A-AA2E-4D40-8ECC-3A0A8F45E5F8
---------------------------------------------------------------------------------------

I have also noticed that there is a disabled sysvol in the replication area which belongs to a decommissioned server. I'm not sure whether that would mess anything up, since it is disabled (and not needed), but I can't get rid of it.
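A hedged sketch of first checks for the errors quoted above (the server names are placeholders, matching the <server name> markers in the events):

rem 0x80041002 (WMI namespace): confirm the DFSR WMI classes still respond on this server:
wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo get replicatedfoldername,state
rem 9026 / 1726 (connection invalid / RPC failed): make both members re-read the AD configuration:
dfsrdiag pollad /member:THIS-SERVER
dfsrdiag pollad /member:PARTNER-SERVER

The stale, disabled SYSVOL membership left over from the decommissioned server is probably also worth cleaning out of AD, since the service re-reads that configuration on every poll.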

Deduplication setting on volume keeps reverting to 'Disabled'


Hi,

I have a 2012 R2 Standard server with 3 dedup volumes, and one of them has reverted back to a dedup setting of Disabled, twice now. This also clears all of the folders in the dedup exclusion list.

Any ideas why this could be happening?
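Not a known cause, but a sketch of how one might re-check the setting, re-apply it, and look for whatever is turning it off (the volume letter D: and the exclusion folder are placeholders):

rem Current state of the deduplication settings on the volume:
powershell -Command "Get-DedupVolume -Volume D: | Format-List Enabled,ExcludeFolder,MinimumFileAgeDays"
rem Re-enable and re-add the exclusions:
powershell -Command "Enable-DedupVolume -Volume D:"
powershell -Command "Set-DedupVolume -Volume D: -ExcludeFolder 'D:\NoDedup'"
rem See what last touched dedup on this server (most recent events first):
wevtutil qe Microsoft-Windows-Deduplication/Operational /c:20 /rd:true /f:text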


DFS Issue - [ERROR] Failed to execute GetOutboundBacklogFileCount method. Err: -2147217361 (


Hi There,

Can someone help me resolve the error above? I am getting it while trying to check the backlog.

C:\>dfsrdiag backlog /ReceivingMember:server1/SendingMember:server2 /RGName:folder/RFName:folder.
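One thing that stands out (an observation, not a confirmed diagnosis): the command as pasted has no spaces between /ReceivingMember:, /SendingMember:, /RGName: and /RFName:, so dfsrdiag ends up building its WMI query from mangled values, which could explain the -2147217361 failure. A re-run with the switches separated, keeping your own server/group/folder names:

dfsrdiag backlog /ReceivingMember:server1 /SendingMember:server2 /RGName:folder /RFName:folder

The /RGName and /RFName values must match the replication group and replicated folder names exactly as shown in DFS Management.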

Thanks

Backup / restore DFSN access


Hi everybody,

I'm looking for a way to back up and restore DFSN access permissions. I used to export/import an XML file of the namespace, but I've found that the explicit permissions set on a folder get dropped and reset to the default permissions.
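A possible sketch, on the assumption that what gets lost are the NTFS permissions on the folder targets, which the namespace export/import does not carry (the namespace name and paths below are placeholders):

rem Export the namespace definition itself:
dfsutil root export \\contoso.com\public C:\backup\public-ns.xml
rem Capture NTFS permissions on the folder targets separately:
icacls D:\Shares\Public /save C:\backup\public-acls.txt /t /c
rem Later, re-import the namespace and re-apply the ACLs:
dfsutil root import merge C:\backup\public-ns.xml \\contoso.com\public
icacls D:\Shares\Public /restore C:\backup\public-acls.txt

(On restore, icacls expects to be pointed at the directory the saved paths are relative to, so check the path names recorded in the ACL file first.)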

Can you help?

Regards,

ROBOCOPY Scripting


Hello Guys,

I have a backup scenario I want to handle with Robocopy.

Source Folder

destination folder

I want to write a Robocopy script; here are my conditions:

* Back up from the source folder to the destination folder.

* Run the command every day with the Windows Task Scheduler (I think I can do this part).

* After the first full backup, each subsequent run should copy only the changes to the destination folder.

* The main problem I have: if a file is deleted in the source folder, it should not be deleted at the destination,

but any new file created in the source folder should still be copied to the destination (see the sketch below).
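A minimal sketch that matches those conditions; the paths, task name and log location are placeholders:

rem Copy new/changed files only; without /MIR or /PURGE, files deleted at the source stay at the destination.
robocopy "D:\Source" "E:\Destination" /E /COPY:DAT /R:2 /W:5 /NP /LOG+:"C:\Logs\robocopy-backup.log"
rem Register the script as a daily 02:00 task (put the robocopy line above in C:\Scripts\backup.cmd):
schtasks /create /tn "RobocopyBackup" /tr "C:\Scripts\backup.cmd" /sc daily /st 02:00 /ru SYSTEM

Robocopy skips unchanged files by default, so after the first run only new and modified files are transferred.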

thanks

Ricoh Aficio MP C2051 Scan to Folder - Windows Server 2012 Error: Authentication with the destination has failed check settings


I have recently upgraded a client's servers to Windows Server 2012, and since doing so have lost the ability to scan to folder.

Both servers are domain controllers and previously on a 2008 domain controller I would have had to make the following change to allow scan to folder:
 Administrative Tools
 Server Manager
 Features
 Group Policy Manager
 Forest: ...
 Default Domain Policy
Computer configuration
 Policies
 Windows Settings
 Security Settings
 Local Policies
 Security Options
 Microsoft Network Server: Digitally Sign Communications (Always)
 - Define This Policy
 - Disabled

However, I have applied this to the Windows 2012 server but am still unable to scan, possibly due to added layers of security in Server 2012. The error on the scanner is "Authentication with the destination has failed. Check settings."
I have also tried the following at the server:
Policies -> Security Policies
Change Network Security: LAN Manager authentication level to: Send LM & NTLM - Use NTLMv2 session security if negotiated.
Network security: Minimum session security for NTLM SSP based (including secure RPC) clients and uncheck the require 128 bit.
Network security: Minimum session security for NTLM SSP based (including secure RPC) servers and uncheck the require 128 bit
I have created a user account on the server for the Ricoh, set this in the settings of the Ricoh, and verified everything is correct.
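A sketch of two more things worth checking on the 2012 DC, hedged because the exact cause varies by Ricoh firmware. On domain controllers the signing requirement is usually enforced by the Default Domain Controllers Policy rather than the Default Domain Policy, so it is worth confirming the effective setting after a policy refresh; older Ricoh firmware also commonly needs SMB1:

gpupdate /force
rem Effective SMB signing settings (both should be False if the scanner cannot sign):
powershell -Command "Get-SmbServerConfiguration | Select-Object EnableSecuritySignature,RequireSecuritySignature"
rem Is SMB1 still enabled? Many older scan-to-folder implementations only speak SMB1:
powershell -Command "Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol"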

Are there any other things I have missed?

Offline Folders cannot change the letter of a file name from capital to lowercase


Dear Expert,

We deploy Offline Folders to end users, and a strange issue is happening now: if a user tries to change a letter of a file name from capital to lowercase, the system shows "You need permission to perform this action". Other actions are fine, such as create, delete and rename.

DFSR doesn't start replication


Hello. I've got a strange problem with DFSR. We have two servers with Windows 2012 R2 and a disk of about 12 TB for replication. Until about a week ago the initial replication was working fine, but then it suddenly stopped with event IDs 4010 and 2010. A reboot didn't help. I've tried to recreate the replication, but now replication doesn't start at all. On the secondary member I see event 4102 saying it is ready for initial replication, but there is nothing on the primary member. In the debug log I see:

20140807 10:32:53.966 8700 SRTR   971 [WARN] SERVER_EstablishSession Failed to establish a replicated folder session. connId:{605DEC59-914E-4FE0-BE2F-4D7365187AC6} csId:{7191B1EC-050D-4DF7-97E3-EC35B5CD860D} Error:
+ [Error:9028(0x2344) UpstreamTransport::EstablishSession upstreamtransport.cpp:803 8700 C Content set was not found]
Error:9028(0x2344) OutConnection::TransportEstablishSession outconnection.cpp:510 8700 C Content set was not found]
Error:9028(0x2344) OutConnection::TransportEstablishSession outconnection.cpp:454 8700 C Content set was not found]
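A hedged sketch of first checks for error 9028 ("content set was not found"), which in this log appears to mean the upstream member does not yet know about the recreated replicated folder; the server names are placeholders:

rem Make both members re-read the replication group configuration from AD:
dfsrdiag pollad /member:PRIMARY-SERVER
dfsrdiag pollad /member:SECONDARY-SERVER
rem Check the replicated folder state on each member
rem (0 Uninitialized, 1 Initialized, 2 Initial Sync, 3 Auto Recovery, 4 Normal, 5 In Error):
wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo get replicatedfoldername,state

If the primary never leaves state 0, AD replication latency between the DCs the two members talk to is a common factor to rule out.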

DC time source is another DC instead of the PDC


Hi,

I checked the time source on 4-5 DCs, and it seems only some of them sync with the PDC. Most of them sync with another DC instead of the PDC.

I checked the registry setting on all of them: they have NT5DS, and only the PDC has NTP.

Why is that? Resyncing or restarting the Windows Time service doesn't help. What is the reason, or what process pushes them to pick another DC?
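A sketch of how one might confirm and reset the source on one of the affected DCs (run elevated). Worth noting, hedged: with NT5DS a non-PDC domain controller follows the domain hierarchy and does not always pick the PDC emulator (for example, it can use a DC in a parent domain or one flagged as a reliable time source), so another DC can be a legitimate choice:

w32tm /query /source
w32tm /query /configuration
rem Re-discover a time source from the domain hierarchy and resync:
w32tm /config /syncfromflags:domhier /update
net stop w32time & net start w32time
w32tm /resync /rediscover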


Problem with DFS when enumerating large folders


Hi,

In our organization we have DFS in place. We don't use DFS-R replication. The DFS roots are hosted on three 2012 R2 domain controllers, and we have two domain-based namespaces (Windows 2000 mode). We have a lot of issues when users/applications/servers try to open folders with lots of files in them (typically 10,000+). When this happens, DFS hangs organization-wide. This causes a lot of problems for back-end applications that are constantly moving files around to different front-end systems; most of them get exceptions that require a manual restart. Since our organization runs 24/7, this is very annoying, especially for the engineers who have to get up at night to restart those services.

All user folders/mapped network drives use DFS to access the file server. We have more than 2000 employees accessing a DFS namespace. When the issue occurs, they also experience problems accessing their data.

When the issues occur, we have no problems accessing the targets themselves, so we can exclude file server issues. We use a third-party appliance as DNS server, but this also works as it should at the time the issue occurs. The only way we have found so far to resolve the issue is to clean out those folders or, when that's not possible, exclude them from the DFS namespace.

We thought upgrading our DCs from 2003 to 2012 R2 would resolve the issue, but that didn't do it. Is this a known issue, and is there another workaround/fix? Also, is there a way to find out which target caused DFS to hang? We have already done a lot of troubleshooting with articles we found on the net, but nothing seems to be wrong with our DFS setup/AD/DNS. We also opened several cases with MSFT, but after the network trace analysis they always conclude there is nothing wrong.
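Not a fix, but a sketch of how one might narrow down which referral target a client was on when the hang occurs (run on an affected client or application server; the file-server side then shows where the open handles pile up):

rem Show the DFS referral cache on the machine experiencing the hang:
dfsutil cache referral
dfsutil /pktinfo
rem On the suspected file server, see which paths hold the most open handles:
powershell -Command "Get-SmbOpenFile | Group-Object Path | Sort-Object Count -Descending | Select-Object -First 20"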

Thanks in advance! 

File Server Resource Manager - create a customized File Management Task


Hello everybody!

I have experienced some trouble with FSRM and customizing File Management Tasks under Windows Server 2012. I hope someone can assist me with this.

I found this Technet blog some time ago and based my steps on it: http://blogs.technet.com/b/filecab/archive/2009/05/11/customizing-file-management-tasks.aspx

However, it isn't working out quite the way I want yet.

I would like to explain my situation first. We would like to archive our file server: every file that hasn't been accessed for over 4 years should be moved to a different location. The folder structure as well as the permissions should be kept as they were on the file server (the only exception is that the write permission on those files should be removed, because we don't want people to edit or delete archived files). We realized that this is not really achievable with a normal file management task (choosing "File expiration" in the action drop-down). That's why we decided to write a script ourselves and choose "Custom" in the action drop-down of the file management task in FSRM.

The problem we have now is that the script doesn't get executed; we basically did the same thing we were told to do in that Technet blog, without any success.

First of all, we created a simple script to make sure our customized file management task works as it should. For that we just wanted to write the Source File Path of each file about to be archived to a log file. The "%1" stands for the parameter [Source File Path]. The script looks like this:

echo %1 > C:\Users\Username\Desktop\log.txt
exit


I have attached a screenshot of the customized file management task as well:

Okay, so far so good. If I run the file management task now, it somehow ends up in an endless loop: it doesn't do anything, and it doesn't finish either. FSRM created a log file, though. If I look at it, I can see that every single file that matched the "last accessed 4 years ago" condition ended up in an error:

"Unable to run command: C:\Windows\System32\cmd.exe C:\Windows\System32\test.cmd [Source File Path]. 0x80045367, the file management action command timed out."

I have tried several ways of customizing the task already (absolute path, no absolute path, script file extensions .cmd or .bat), but nothing changed the result. Furthermore, the script I am providing to test this customized task is very simple and works on its own, so my assumption is that FSRM doesn't even execute the script?
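A hedged guess based on the quoted error text: when cmd.exe is given as the executable, the /c switch has to be the first argument, otherwise cmd.exe starts an interactive shell, never returns, and the action times out with 0x80045367. Under that assumption, the custom action fields would look something like this (the test.cmd path is taken from your error message):

rem Executable: C:\Windows\System32\cmd.exe
rem Arguments : /c C:\Windows\System32\test.cmd "[Source File Path]"

And a slightly more defensive version of the test script (the log path is a placeholder; the account the action runs under may not be able to write to a user's Desktop):

@echo off
echo %1 >> C:\Windows\Temp\fsrm-test.log
exit /b 0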

I don't know where the error could be, and I would really appreciate your help in the form of troubleshooting steps or other suggestions on how to achieve our goal.

Thank you!

Dario

The Date Accessed attribute is resetting on all the files in a folder


Hi all,

I have an issue where the "Date Accessed" attribute is often reset on all the files in a folder when I open a single file; the folder is on a file server that is mapped as a drive for me. This is happening to every user who connects to that file server, and I see the same issue over RDP as well. Is there any way to check what is resetting the "Date Accessed" attribute on all the files? After the reset, the Date Accessed value is similar to the Date Modified value. We need to fix this issue because it is a security concern for us. The file server is Windows 2008 R2 and the client machines are Windows 7.

Side note: I have checked that no software, antivirus or offline sync is causing this issue.
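A small sketch of what one could check on the 2008 R2 file server (elevated prompt). If NTFS last-access updates are enabled, anything that merely reads the files (preview handlers, search indexing, backup or AV scans) will touch the attribute, and object-access auditing is one way to see which account or process is doing the reading:

rem 1 = last-access updates disabled, 0 = enabled:
fsutil behavior query disablelastaccess
rem To identify what opens the files, enable file-system auditing and add a ReadData SACL on the folder:
auditpol /set /subcategory:"File System" /success:enable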

Thank you,

Sampath


P.Sampath

DFS, DFS-R and SYSVOL


Hello All,

I noticed recently that my SYSVOL is using FRS for replication, but DFS uses DFS-R. Is there any impact? Will it slow down the DFS replication process?
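To the best of my understanding the two engines run independently, so SYSVOL staying on FRS should not slow down DFS-R replication of your other replicated folders. A sketch of how to confirm what SYSVOL is using and whether a migration has been started (read-only commands):

dfsrmig /getglobalstate
dfsrmig /getmigrationstate
rem Migrating SYSVOL to DFS-R, if desired, is the staged dfsrmig /setglobalstate 1..3 process.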


Roopesh Raj

Doesn't namespace service=namespace server?


I'm confused by something.  This is Server 2008 R2.  I was of the understanding that a DC that ran the DFS Namespace service was considered a Namespace Server. 

I just went into DFS Management and noted the namespace servers for our two namespaces. I was then looking at a DC that was not in that list, and it is running the DFS Namespace service (the service is started).

Isn't a DC that runs the DFS Namespace service considered a Namespace server, in a domain-based namespace?
If it is running that service, what is a possible reason(s) that it is not in the list of namespace servers in DFS management?
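As far as I understand it, the service being started is not the same thing as being a namespace server: every DC runs the DFS Namespace service so it can answer domain and SYSVOL/NETLOGON referrals, but a server only becomes a namespace server for a given namespace when it is added as a root target. A sketch of how to list the root targets actually recorded for a namespace (the namespace name and export path are placeholders):

dfsutil root export \\contoso.com\public C:\temp\public-ns.xml
rem The target entries inside the exported XML are the namespace (root) servers for that namespace.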
