Channel: File Services and Storage forum
Viewing all 13580 articles

Windows Admin Center 1910 / Storage Migration Service Download of error protocol failed


Hi,

Last night, the first migration run from our old file server to the new one (Server 2019) finished.

During the run, several errors occurred with different files, so we wanted to download the error log, but we can't.

The download runs into a timeout: it did not finish within the configured one-minute timeout, as stated in this log message:

Make sure that TCP port 445 (SMB file and printer sharing) is open on the orchestrator server.
Error: This request operation sent to net.tcp://localhost:28940/sms/service/1/transfer did not receive a reply within the configured timeout (00:01:00). The time allotted to this operation may have been a portion of a longer timeout. This may be because the service is still processing the operation or because the service was unable to send a reply message. Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client.

I just want to change the timeout setting, as described in a note I found here:
https://social.msdn.microsoft.com/Forums/en-US/dc4a6bdc-4dd7-4e68-b24d-cd83a3bfece5/nettcp-operationtimeout-question?forum=wcf
but I couldn't find any config file for Admin Center where I could put it.

Has anyone here hit the same problem, or does anyone know about the config files of WAC/SMS?


Recovery after ReFS events 133 + 513 (apparent data loss on dual parity)


Hi,
I have a single-node Windows Server 2016 machine with a dual-parity storage space, on which a BitLocker-encrypted ReFS volume with file integrity enabled resides. Since its setup half a year ago, this ReFS volume has contained a ~17 TB VHDX file with archive data. This file has now suddenly been removed by ReFS! More precisely, I see the following two events in the System log:

  1. Microsoft-Windows-ReFS Event ID 133 (Error): The file system detected a checksum error and was not able to correct it. The name of the file or folder is "R:\Extended Data Archive@dParity.vhdx".
  2. Immediately followed by Microsoft-Windows-ReFS Event ID 513 (Warning): The file system detected a corruption on a file. The file has been removed from the file system namespace. The name of the file is "R:\Extended Data Archive@dParity.vhdx".


I have the following questions:

  1. As 26 TB are still used at the volume level (constant, not decreasing over time), but only 7 TB of files are visible, I assume that ReFS has not yet deleted the missing VHDX file. How can I get read access to the corrupt VHDX file again for manual recovery of its internal file system?
  2. If I understand dual parity correctly, at least two physical disks must have failed simultaneously for this to happen. I do not see any useful events in the system log regarding this. How can I get any clues, which of the physical disks in my array need to be replaced?
    (Their SMART level is 100%. I plan to run extended SMART self tests on each individual physical disk, but only after data recovery. Still, Windows or ReFS might have logged some clues about the physical disks involved in this checksum error somewhere?)

Thanks.
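For question 2, two places worth checking for per-disk clues — a sketch; `Get-StorageReliabilityCounter` needs the in-box Storage module (2012+), and which counters a given drive reports varies by hardware:

```powershell
# Per-disk error counters that Windows tracks independently of the drive's own SMART summary
Get-PhysicalDisk | ForEach-Object {
    $c = $_ | Get-StorageReliabilityCounter
    [pscustomobject]@{
        Disk                   = $_.FriendlyName
        ReadErrorsUncorrected  = $c.ReadErrorsUncorrected
        WriteErrorsUncorrected = $c.WriteErrorsUncorrected
        Temperature            = $c.Temperature
    }
}

# Disk / Storage Spaces driver events from the week around the ReFS 133 + 513 pair
Get-WinEvent -FilterHashtable @{
    LogName      = 'System'
    ProviderName = 'disk', 'Microsoft-Windows-StorageSpaces-Driver'
    StartTime    = (Get-Date).AddDays(-7)
} -ErrorAction SilentlyContinue |
    Format-Table TimeCreated, ProviderName, Id, Message -Wrap
```

An uncorrected-error count above zero on exactly the disks that failed during the checksum repair would be the clue the SMART percentage is hiding.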


DFS replication causes missing files


Site A                                                           Site B
File Server A  <--- two-way synchronization using DFS Replication --->  File Server B
Both file servers are in the same domain: abc.com
Both file servers run Windows Server 2008

I only set up File Server A's folders as referral targets; no referral was enabled for File Server B. By rights, all data on File Server A should synchronize to, and overwrite, the data on File Server B.

Unfortunately, from time to time some users report that their files go missing. Worst of all, some users just modify a file, press Save, and close it; when they go back to the file a few minutes later, it is missing. After further troubleshooting, I found that those missing files were automatically moved to a hidden folder called DfsrPrivate\ConflictAndDeleted, and I have to manually move them back to their original place. I noticed that at least 2 to 3 files are moved into DfsrPrivate\ConflictAndDeleted every day.

Why are files suddenly moved to DfsrPrivate\ConflictAndDeleted? No one uses the File Server B folder to make changes to any files, so why are they considered conflicting or deleted? Can anyone help? I don't want to keep moving files back, which is not practical in the long run.

Sample detail in the ConflictAndDeletedManifest.xml
<Resource>
  <Path>\\.\X:\Share\PC\grpshare\MktShare\GMSHARE\- Park Place\01 Price Record Status\STATUS_TWO_PARK_PLACE.xlsx</Path>
  <Uid>{11B01F99-5838-497A-956C-90BDA9ACD6ED}-v357978</Uid>
  <Gvsn>{11B01F99-5838-497A-956C-90BDA9ACD6ED}-v358015</Gvsn>
  <PartnerGuid>{C05F61BD-FBAD-4E3F-B567-FFE7638BCF38}</PartnerGuid>
  <Attributes>20</Attributes>
  <NewName>STATUS_TWO_PARK_PLAC-{11B01F99-5838-497A-956C-90BDA9ACD6ED}-v358015.xlsx</NewName>
  <Time>GMT 2012:1:22-21:22:20</Time>
  <Type>
    <NameConflict />
  </Type>
  <Files>1</Files>
  <Size>95497</Size>
</Resource>
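Rather than eyeballing the XML, the whole manifest can be summarized with a few lines of PowerShell — a sketch; the manifest path below is hypothetical, so point it at your replicated folder's DfsrPrivate directory:

```powershell
# Summarize ConflictAndDeletedManifest.xml: what was moved, when, and why
$manifest = 'X:\Share\PC\grpshare\DfsrPrivate\ConflictAndDeletedManifest.xml'
[xml]$doc = Get-Content -Path $manifest

$doc.SelectNodes('//Resource') | ForEach-Object {
    [pscustomobject]@{
        Time    = $_.SelectSingleNode('Time').InnerText
        Reason  = $_.SelectSingleNode('Type/*').Name   # e.g. NameConflict
        Path    = $_.SelectSingleNode('Path').InnerText
        NewName = $_.SelectSingleNode('NewName').InnerText
    }
} | Sort-Object Time | Format-Table -Wrap
```

A steady stream of NameConflict entries on a member that "nobody edits" is often a sign that something on that member is still touching the files (backup agents, antivirus, or users with a direct share mapping), so both members believe they authored a change.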



Error - All disks holding extents for a given volume must have the same sector size, and the sector size must be valid.


I have looked up the error and found the error code below, but I cannot find a solution for it.

0x80042530 is the Error Code.

I have a server using mirrored drives, and I am trying to add newer, larger hard drives to the system. After swapping out one of the drives and adding the new hard drive, I get this error message: "All disks holding extents for a given volume must have the same sector size, and the sector size must be valid."

Are the new drives (2 TB) just too large for the OS to handle, or is there something I can do to use these large drives I purchased to upgrade this server's storage?

Thanks

Ray
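Not an answer, but this particular message usually points at the old and new disks reporting different logical sector sizes (for example a 512-byte-native original versus a 4K "Advanced Format" replacement) rather than at the 2 TB capacity itself. A quick comparison, assuming a Windows version with the Storage module (2012 / Windows 8 or later; on older builds, `fsutil fsinfo ntfsinfo C:` shows "Bytes Per Sector"):

```powershell
# Mirror members must report the same logical sector size
Get-Disk | Select-Object Number, FriendlyName, Size,
    LogicalSectorSize, PhysicalSectorSize | Format-Table -AutoSize
```

If the numbers differ between the old and new disks, the fix is sourcing replacement drives with a matching sector size rather than anything on the software side.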

 

Event ID 10000 - Unable to start a DCOM Server.


Installed a fresh 2019 Standard server with Microsoft server backup (a VM on a 2019 Hyper-V host). The server was clean at boot, with only the normal warnings. Installed Active Directory and promoted it to a domain controller. Since the promotion, we get the following error once during boot and at the start of each Windows Server Backup run. The backup actually completes and has been tested to restore successfully.

Event ID: 10000
Source: DistributedCOM
Event Data: Unable to start a DCOM Server: {9C38ED61-D565-4728-AEEE-C80952F0ECDE}. The error: "0" Happened while starting this command: C:\Windows\System32\vdsldr.exe -Embedding

Any ideas on what is going on? I have dealt with DCOM 10016 issues before and had to set permissions, but this one is a little different. I was headed down the path that, since the local Administrator account is gone after promotion, it is a permissions/account issue somewhere, but I haven't found anything there.

Also, what is error: "0"?
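One way to confirm which component the CLSID in the event resolves to — a sketch; reading the class registration is harmless, and it should line up with the vdsldr.exe (Virtual Disk Service loader) already named in the event text:

```powershell
# Resolve the CLSID from the event to its registered class name and server binary
$clsid = '{9C38ED61-D565-4728-AEEE-C80952F0ECDE}'
Get-ItemProperty -Path "Registry::HKEY_CLASSES_ROOT\CLSID\$clsid" |
    Select-Object '(default)', AppID
Get-ItemProperty -Path "Registry::HKEY_CLASSES_ROOT\CLSID\$clsid\LocalServer32"
```

Error "0" in a DCOM 10000 event is just the Win32 status the launcher got back (0 = ERROR_SUCCESS), which is part of what makes this event look cosmetic rather than like a real launch failure.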

WSE 2016 VM - Windows Search problems


Hi all - 

Brand new Windows Server Essentials 2016 VM on Server 2019 Hyper-V, brand new Dell T340 - all SSD storage, very nice.

Windows Search is crapping out. Indexing about 600,000 items on two volumes, it works for a minute, and then searches return an unclickable list of files with white generic icons and no data. Restarting the service works for a while. I've reindexed a bunch.

Any thoughts or ideas? Thanks!

After Extending Volume Volume Capacity not showing correctly


I have a Server 08 Standard SP2 virtual server running on VMWare. 

It has one hard drive that was running out of space. I increased the size from 75 GB to 95 GB in VMware.

I then went to server manager and extended the C:\ drive to use all the available space. 

Now the volume capacity shows 75 GB while the volume size shows 95 GB.

I am not sure why the capacity didn't increase along with the disk size.

Below is my DiskPart output.

VMware Virtual disk SCSI Disk Device
Disk ID: 81BA8769
Type   : SAS
Bus    : 0
Target : 0
LUN ID : 0
Read-only  : No
Boot Disk  : Yes
Pagefile Disk  : Yes
Hibernation File Disk  : No
Crashdump Disk  : Yes

  Volume ###  Ltr  Label        Fs     Type        Size     Sta
  ----------  ---  -----------  -----  ----------  -------  ---
* Volume 2     C                NTFS   Partition     95 GB  Hea

DISKPART> detail part

Partition 1
Type  : 07
Hidden: No
Active: Yes

  Volume ###  Ltr  Label        Fs     Type        Size     Sta
  ----------  ---  -----------  -----  ----------  -------  ---
* Volume 2     C                NTFS   Partition     95 GB  Hea

DISKPART> detail vol

  Disk ###  Status      Size     Free     Dyn  Gpt
  --------  ----------  -------  -------  ---  ---
* Disk 0    Online        95 GB      0 B

Read-only              : No
Hidden                 : No
No Default Drive Letter: No
Shadow Copy            : No
Dismounted             : No
BitLocker Encrypted    : No

Volume Capacity        :   75 GB
Volume Free Space      : 5101 MB

DISKPART>

 

 

I need to get this expanded out before it runs out of space. Any help would be appreciated. 
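A 95 GB partition reporting only 75 GB of volume capacity suggests the partition grew but the NTFS file system inside it did not. DiskPart has a command for exactly that situation — a sketch; as with any online resize, take a backup or VMware snapshot first:

```
DISKPART> select volume 2
DISKPART> extend filesystem
```

This extends only the file system to fill the already-enlarged partition, which is why it is a separate step from the "extend" you already ran.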

 


It's not the load that breaks you down it's the way you carry it. ~Lou Holtz~

New DeDup parameters


Hello,

Does anyone have any idea what the job type "DataPort" or the parameter "FastStart" does, and how they are used? There is nothing in the online help, and so far nothing to be found searching the internet.

Thanx & cheers

__Leo


Ransomware in Windows Server 2008 R2


Hi

Our server was hit by ransomware; a few folders on one shared drive are encrypted. We deleted the affected folders from the shared drive and copied them back from a previous version. A few folders were copied, but then all previous versions automatically disappeared.

I suspect a few ransomware executable files are still present on the server.

Is there any tool available to check for ransomware executable files or applications, so that we can remove them?

Note:- This server is domain controller.
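For hunting leftover executables, Microsoft Safety Scanner (msert.exe) or an offline Defender scan are the proper tools — on a compromised domain controller a rebuild is really the safe course. As a rough first sweep, recently written executable and script files can at least be listed; a sketch, where `D:\Shares` is a placeholder and the `-in` operator needs PowerShell 3.0+:

```powershell
# List recently modified executable/script files - candidates to inspect, not a verdict
$since = (Get-Date).AddDays(-14)
Get-ChildItem -Path 'D:\Shares' -Recurse -Force -ErrorAction SilentlyContinue |
    Where-Object { $_.Extension -in '.exe', '.dll', '.scr', '.js', '.vbs', '.ps1' -and
                   $_.LastWriteTime -gt $since } |
    Sort-Object LastWriteTime -Descending |
    Select-Object LastWriteTime, FullName
```

The disappearing previous versions are expected behavior in a sense: VSS snapshot space is a quota, and the mass rewrites from the encryption plus your restore copies can churn through it, evicting the oldest snapshots.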

    


Arvind

Server 2019 - Event ID 10000 - Unable to start a DCOM Server {9C38ED61-D565-4728-AEEE-C80952F0ECDE}


Problem:

This error is periodically recorded in the System event log:

Event ID: 10000
Source: DistributedCOM
Event Data: Unable to start a DCOM Server: {9C38ED61-D565-4728-AEEE-C80952F0ECDE}. The error: "0" Happened while starting this command: C:\Windows\System32\vdsldr.exe -Embedding

This error appears when DFS-R is installed, when the computer is rebooted, every time the DFS-R service is started, and whenever application aware checkpoint-based backups are taken.

Simple reproduction steps:

  1. On a clean computer or VM, boot from the Microsoft-provided Windows Server 2019 media. I used SW_DVD9_Win_Server_STD_CORE_2019_1809.1_64Bit_English_DC_STD_MLF_X22-02970.ISO from the Volume Licensing website.
  2. Install Windows Server 2019
  3. Install DFS-R using PowerShell: Install-WindowsFeature -Name FS-DFS-Replication
  4. Error will now be present in Event Log: Get-EventLog -LogName System -EntryType Error -Source DCOM
  5. Restarting the computer, or just restarting the DFS-R service, will cause another error to be added to the event log.
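The check in step 4 can be narrowed to exactly this event rather than every DCOM error — a sketch:

```powershell
# Only DCOM event 10000, newest first
Get-WinEvent -FilterHashtable @{
    LogName      = 'System'
    ProviderName = 'Microsoft-Windows-DistributedCOM'
    Id           = 10000
} -ErrorAction SilentlyContinue |
    Format-Table TimeCreated, Message -Wrap
```

Correlating the timestamps with DFSR service starts (events 1002/1004 in the DFS Replication log) confirms the one-error-per-start pattern described below.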

Notes:

  • Problem affects Core and Desktop Experience
  • Problem affects Standard and Datacenter
  • As of 2020-Feb-28, installing Windows Updates doesn't make a difference
  • Joining a domain, or not, makes no difference
  • Properly configuring DFS Replication does not make a difference

Request:

This issue is related to another TechNet question, but there's no answer there.

I think it's ignorable, but I don't really know. I would like to "fix" whatever is causing this error, or at least get an authoritative comment that it is safe to ignore, and preferably have someone at Microsoft correct whatever is producing this ignorable error.

Any and all help would be appreciated.  Thank you.

-Tony

Can't extend ReFS partition



I'm sorry, everybody from or for Microsoft, but I made a bit of a mistake using ReFS as a test file system for Z:. I should have used NTFS; on Windows Server 2012 R2 64-bit with all updates, resizing an NTFS partition is very easy.

Drive Z: is the only ReFS partition; the rest are NTFS. If drive Z: fills up, I will copy everything to an NTFS partition on an extra 4 TB drive.

  • Body text cannot contain images or links until we are able to verify your account.

so how can i post pictures? oh i know!

https://www.bildhochladen.de/images/2020/03/29/Unbenannt1.png
https://www.bildhochladen.de/images/2020/03/29/Unbenannt.png
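Before copying everything off, it may be worth trying the Storage cmdlets: ReFS volumes can generally be extended (only shrinking is unsupported), even when the Disk Management GUI refuses. A sketch, assuming Z: is the ReFS volume and there is unallocated space directly after its partition:

```powershell
# See how far the partition can grow, then grow it to the maximum
$max = (Get-PartitionSupportedSize -DriveLetter Z).SizeMax
Resize-Partition -DriveLetter Z -Size $max
```

If `SizeMax` comes back equal to the current size, there is no contiguous unallocated space behind the partition, and the copy-to-NTFS plan is the fallback.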


HCI Problem With StorageSpace-Driver


Hello everyone,
I'm hoping someone here can help me.
I have a two-node HCI cluster on Windows Server 2019.
For about a month now, one server has constantly logged errors from the Storage Spaces driver. It looks as if the storage space loses all its disks at once, but in the same second everything is connected again; then there is a short resync and everything runs again.
But this happens several times a day. 
The server is up to date and the latest firmware has been installed.
Does anyone have an idea?


Translated with www.DeepL.com/Translator (free version)
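A couple of starting points for this kind of drop-and-reconnect pattern — a sketch; the log name is the in-box Storage Spaces driver log, and `Debug-StorageSubSystem` is the built-in S2D diagnostic:

```powershell
# The Storage Spaces driver's own operational log around one of the bursts
Get-WinEvent -LogName 'Microsoft-Windows-StorageSpaces-Driver/Operational' -MaxEvents 200 |
    Format-Table TimeCreated, Id, Message -Wrap

# S2D's built-in cluster-wide diagnostics
Get-StorageSubSystem Cluster* | Debug-StorageSubSystem

# If whole groups of disks vanish at once, suspect the shared path, not the disks
Get-StorageFaultDomain -Type StorageEnclosure |
    Select-Object FriendlyName, HealthStatus, OperationalStatus
```

All disks disappearing simultaneously and returning within a second looks more like an HBA/backplane/cable reset than individual drive failures, so the controller firmware and driver on that one node are worth a close look even though "latest firmware" is installed.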

 


Problem expand volume S2D


Hello,

I have a problem expanding an S2D volume.

PS C:\Windows\system32> Get-StorageTier H1VD01_Capacity | Resize-StorageTier -Size 2.5TB

Resize-StorageTier : Not enough available capacity
Activity ID: {7e94472f-9d42-4dba-b580-a674234b4c78}
At line:1 char:35
+ Get-StorageTier H1VD01_Capacity | Resize-StorageTier -Size 2.5TB
+                                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (StorageWMI:ROOT/Microsoft/...SFT_StorageTier) [Resize-StorageTier], CimException

ObjectId                          : {1}\\CLUS2D\root/Microsoft/Windows/Storage/Providers_v2\SPACES_StoragePool.ObjectId
                                    ="{f4215b48-2440-4e5c-ac55-4d1b77633fb7}:SP:{37ce80ac-8522-4dd0-970d-0d9200be4ce2}"
PassThroughClass                  :
PassThroughIds                    :
PassThroughNamespace              :
PassThroughServer                 :
UniqueId                          : {37ce80ac-8522-4dd0-970d-0d9200be4ce2}
AllocatedSize                     : 20337207017472
ClearOnDeallocate                 : False
EnclosureAwareDefault             : False
FaultDomainAwarenessDefault       : StorageScaleUnit
FriendlyName                      : CLUS2D-StoragePool
HealthStatus                      : Healthy
IsClustered                       : True
IsPowerProtected                  : False
IsPrimordial                      : False
IsReadOnly                        : False
LogicalSectorSize                 : 4096
MediaTypeDefault                  : Unspecified
Name                              :
OperationalStatus                 : OK
OtherOperationalStatusDescription :
OtherUsageDescription             : Reserved for S2D
PhysicalSectorSize                : 4096
ProvisioningTypeDefault           : Fixed
ReadOnlyReason                    : None
RepairPolicy                      : Parallel
ResiliencySettingNameDefault      : Mirror

I have 14 TB free on the storage pool.
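"Not enough available capacity" with 14 TB free is usually a footprint problem rather than a free-space problem: each byte of tier size costs NumberOfDataCopies bytes of pool (two for a two-way mirror, three for three-way), and S2D also keeps reserve capacity aside that the raw free number doesn't show. A sketch of checking the real cost before resizing, using the names from the question:

```powershell
$tier = Get-StorageTier -FriendlyName 'H1VD01_Capacity'
$pool = Get-StoragePool -FriendlyName 'CLUS2D-StoragePool'

$growth   = 2.5TB - $tier.Size              # Resize-StorageTier -Size is an absolute target
$poolCost = $growth * $tier.NumberOfDataCopies

'{0:N2} TB free in pool; the resize needs {1:N2} TB of it' -f `
    (($pool.Size - $pool.AllocatedSize) / 1TB), ($poolCost / 1TB)
```

If the cost exceeds the usable free space, the options are a smaller target size, adding capacity, or freeing space elsewhere in the pool.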

CRITICAL ERRORS ON PRODUCTION SERVERS "THE PARAMETER IS INCORRECT"


We are facing the following errors on two servers after extending the existing drives from Computer Management.

1st Server:

Error-01:

The volume MSSQLDATA (D:) was not optimized because an error was encountered: Neither Slab Consolidation nor Slab Analysis will run if slabs are less than 8 MB. (0x8900002D).

Error-02:

The volume MSSQLDATA (D:) was not optimized because an error was encountered: the parameter is incorrect. 0x80070057

2nd Server:

Error-01:

The volume Data (D:) was not optimized because an error was encountered: Neither Slab Consolidation nor Slab Analysis will run if slabs are less than 8 MB. (0x8900002D)

Error-02:

The volume Log (E:) was not optimized because an error was encountered: Neither Slab Consolidation nor Slab Analysis will run if slabs are less than 8 MB. (0x8900002D)

Storage Spaces Direct S2D - Performance Storage Tier not created automatically after enabling S2D


Hello everyone.

I have two nodes configuration for Storage Spaces Direct.

The servers were built following the R730xd Dell Matrix Compatibility with S2D with certified components, firmwares and drivers.

Everything went fine with its configuration until I enabled S2D and checked the storage tiers that were created: only a Capacity storage tier was created. My current configuration per server (both identical in every component) is:

- Cache: 4 x SSD Dell Enterprise 1.98 TB

- Storage: 8 x HDD Dell Enterprise 8 TB

The cache:storage ratio is 1:2 - the one suggested by Microsoft in this scenario.

Enabling S2D shows in the final validation report that the SSDs are marked as "Disk Used for Cache memory" with an asterisk. Also, after creating the pool, all the disks are selected: the SSDs are marked as Journal, taking all of their free space for themselves, and the HDDs are marked as Auto-Select with almost all of their free space available.

But after this I have checked the Storage Tiers created and only Capacity is showing. No Performance Storage Tier was created for the cache SSD's. I tried creating it manually but no free space is available for this storage tier when trying to create a new virtual disk.

PS C:\Users\xx> get-storagetier -friendlyname capacity, performance | fl *


Usage                  : Data
ProvisioningType       : Fixed
AllocationUnitSize     : Auto
MediaType              : HDD
FaultDomainAwareness   : StorageScaleUnit
ColumnIsolation        : PhysicalDisk
NumberOfColumns        : Auto
NumberOfGroups         : 1
ParityLayout           :
ObjectId               : {1}\\S2D_CLUSTER\root/Microsoft/Windows/Storage/Providers_v2\SPACES_StorageTier.Object
                         806d0-73d2-4312-9a54-7238280672d4}:ST:{a39a3509-81e6-49f5-81a5-3e3228700390}{3da1103b-
                         4-8dc5-daab62bb5584}"
PassThroughClass       :
PassThroughIds         :
PassThroughNamespace   :
PassThroughServer      :
UniqueId               : {3da1103b-921a-4ef4-8dc5-daab62bb5584}
AllocatedSize          : 0
Description            :
FootprintOnPool        : 0
FriendlyName           : Capacity
Interleave             : 262144
NumberOfDataCopies     : 2
PhysicalDiskRedundancy : 1
ResiliencySettingName  : Mirror
Size                   : 0
PSComputerName         :
CimClass               : ROOT/Microsoft/Windows/Storage:MSFT_StorageTier
CimInstanceProperties  : {ObjectId, PassThroughClass, PassThroughIds, PassThroughNamespace...}
CimSystemProperties    : Microsoft.Management.Infrastructure.CimSystemProperties

Usage                  : Data
ProvisioningType       : Fixed
AllocationUnitSize     : Auto
MediaType              : SSD
FaultDomainAwareness   : StorageScaleUnit
ColumnIsolation        : PhysicalDisk
NumberOfColumns        : Auto
NumberOfGroups         : 1
ParityLayout           :
ObjectId               : {1}\\S2D_CLUSTER\root/Microsoft/Windows/Storage/Providers_v2\SPACES_StorageTier.Object
                         806d0-73d2-4312-9a54-7238280672d4}:ST:{a39a3509-81e6-49f5-81a5-3e3228700390}{c31c5599-
                         6-ae68-dbcb568704df}"
PassThroughClass       :
PassThroughIds         :
PassThroughNamespace   :
PassThroughServer      :
UniqueId               : {c31c5599-f6e4-4b76-ae68-dbcb568704df}
AllocatedSize          : 0
Description            :
FootprintOnPool        : 0
FriendlyName           : Performance
Interleave             : 262144
NumberOfDataCopies     : 2
PhysicalDiskRedundancy : 1
ResiliencySettingName  : Mirror
Size                   : 0
PSComputerName         :
CimClass               : ROOT/Microsoft/Windows/Storage:MSFT_StorageTier
CimInstanceProperties  : {ObjectId, PassThroughClass, PassThroughIds, PassThroughNamespace...}
CimSystemProperties    : Microsoft.Management.Infrastructure.CimSystemProperties

Could anyone help me out with this issue?

Thanks in advance.
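For what it's worth (hedged — based on how S2D cache binding generally behaves, not on inspecting this cluster): when every SSD is auto-claimed as cache ("Journal" usage), those disks are invisible to tiering, so only the HDD-backed Capacity tier can ever receive space; a Performance tier would only get capacity if some SSDs were left out of the cache. The Size 0 on both tiers above is normal — they are templates, sized only when a volume references them. A sketch, where the pool name and sizes are placeholders:

```powershell
# Tier templates; Size 0 just means nothing is allocated to them yet
Get-StorageTier | Select-Object FriendlyName, MediaType, ResiliencySettingName, Size

# With all SSDs bound as cache, volumes are carved from the Capacity tier only
New-Volume -StoragePoolFriendlyName 'S2D on S2D_CLUSTER' -FriendlyName 'Volume01' `
    -FileSystem CSVFS_ReFS -StorageTierFriendlyNames 'Capacity' -StorageTierSizes 10TB
```

In this design the SSDs still accelerate the volume — as a read/write cache in front of the HDDs — rather than as a separately addressable tier.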


Error with DFSR


Hi,

I have installed Windows 2008 R2 x64 on two servers and deployed DFS Replication. Errors 5014, 5008, and 5002 keep appearing every few days, and replication resumes afterwards. I'm worried about why these errors are occurring. Can anyone help?

EVENT 5014

The DFS Replication service is stopping communication with partner EKTW2K8FSRV2 for replication group Photos due to an error. The service will retry the connection periodically.
Additional Information:

Error: 1723 (The RPC server is too busy to complete this operation.)

Connection ID: 17ED06AD-C3FD-40E1-ABAB-73139A5C0097

Replication Group ID: E980F065-7465-4523-A899-293133BEFDAA

EVENT 5008

The DFS Replication service failed to communicate with partner EKTW2K8FSRV2 for replication group Photos. This error can occur if the host is unreachable, or if the DFS Replication service is not running on the server.

Partner DNS Address: EKTW2K8FSRV2.snpl.net.np

 Optional data if available:

Partner WINS Address: EKTW2K8FSRV2

Partner IP Address: 192.168.70.126

 The service will retry the connection periodically.

 Additional Information:

Error: 1722 (The RPC server is unavailable.)

Connection ID: 17ED06AD-C3FD-40E1-ABAB-73139A5C0097

Replication Group ID: E980F065-7465-4523-A899-293133BEFDAA

EVENT 5002

The DFS Replication service encountered an error communicating with partner EKTW2K8FSRV2 for replication group Photos.

Partner DNS address: EKTW2K8FSRV2.snpl.net.np

 Optional data if available:

Partner WINS Address: EKTW2K8FSRV2

Partner IP Address: 192.168.70.126

 The service will retry the connection periodically.

 Additional Information:

Error: 1753 (There are no more endpoints available from the endpoint mapper.)

Connection ID: 17ED06AD-C3FD-40E1-ABAB-73139A5C0097

Replication Group ID: E980F065-7465-4523-A899-293133BEFDAA

EVENT 5004

The DFS Replication service successfully established an inbound connection with partner EKTW2K8FSRV2 for replication group Information.

Additional Information:

Connection Address Used: EKTW2K8FSRV2.snpl.net.np

Connection ID: 455CB401-0DAF-4BA6-882C-8E0206C3A6A9

Replication Group ID: B4BA1C7A-378E-4DE0-8522-CB9BB9E0B192
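Errors 1722, 1723, and 1753 all point at RPC reachability between the members (firewall rules, a flaky WAN link, or a busy/exhausted endpoint mapper) rather than at DFSR itself — and the 5004 event shows the connection recovering each time. Some 2008 R2-era checks to run from each member when the errors appear; a sketch, noting that PortQry is a separate Microsoft download:

```
ping EKTW2K8FSRV2.snpl.net.np
portqry -n EKTW2K8FSRV2 -e 135
dfsrdiag pollad /member:EKTW2K8FSRV2
wmic /node:"EKTW2K8FSRV2" service where name="DFSR" get state
```

If the endpoint-mapper probe fails only at the times the 5002/5008 events fire, the cause sits on the network path, and intermittent RPC errors followed by automatic reconnection are generally tolerable.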

FRS to DFRS Migration Stuck in 'Eliminating' State


Hello,

I followed this guide to migrate from FRS to DFSR. Everything was going fine until the last step and now one of the DCs is stuck in an eliminating state.

It looks like DFSR is working fine, like maybe it just can't complete the final cleanup of FRS for some reason.

PS C:\Windows\system32> dfsrmig /getmigrationstate

The following Domain Controllers are not in sync with Global state ('Eliminated'):

Domain Controller (Local Migration State) - DC Type
===================================================

PDC ('Eliminating') - Primary DC

Migration has not yet reached a consistent state on all Domain Controllers.
State information might be stale due to AD latency.

I'm not sure how to proceed from here. Any ideas?
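"Eliminating" on a single DC usually just means that DC has not yet seen, or not yet acted on, the global state change. The usual nudges, in order — a sketch; both tools ship in-box on DCs:

```
rem Push the global 'Eliminated' state to every DC
repadmin /syncall /AdeP

rem On the stuck DC: make DFSR re-poll AD, which is what advances the local state
dfsrdiag pollad
net stop dfsr && net start dfsr

rem Re-check
dfsrmig /getmigrationstate
```

As the output itself notes, the state can also be stale due to AD latency, so re-running /getmigrationstate some minutes after the sync is worthwhile before doing anything more invasive.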

DFS Root Namespace Server


Hello,

I am attempting to replace some old servers, and when we power off one of our DFS servers we notice that we can no longer access the DFS namespace. We cannot browse it through the UNC path, nor can we see it in DFS Management. As soon as we power the server back on, DFS resumes operation.

This indicates to me that there must be some kind of master DFS root role that we need to transfer to another server.  Does anyone know where I can find some documentation on that?  When I search all I find is information on a primary dfs replication partner.

Thank you,

Matt
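There is no "master DFS root role" to transfer for a domain-based namespace — the behavior described sounds like a namespace with only a single root target, so the namespace dies with that server. The usual fix is to add the replacement server as an additional root target before decommissioning the old one; a sketch with the DFSN cmdlets (2012+; the namespace and server names are placeholders):

```powershell
# Add NEWSERVER as an additional root target for the domain-based namespace
New-DfsnRootTarget -Path '\\abc.com\Public' -TargetPath '\\NEWSERVER\Public'

# Confirm both targets are registered
Get-DfsnRootTarget -Path '\\abc.com\Public'
```

With two or more root targets, clients receive referrals to whichever targets are online, and any one server can be powered off without taking the namespace down. (On older servers, dfsutil or the DFS Management console can add the target instead.)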

DFS Namespace server: I have two named servers, one enabled and one disabled; the enabled one is down and I can't enable the other


My DFS namespace has two namespace servers, one enabled and one disabled. The one that was enabled is now down, and I can't enable the other.

1. How do I enable that other server?

2. How do I move the "roots" or "master" (whatever it is that makes the server in the remote office, where the namespace was first created, the holder of the keys) and hand those "keys" to my database DFS server, which was added to the namespace second?

3. Does my Datacenter-edition server being "disabled" prevent it from managing the namespace? If I had it "enabled", would the namespace still be manageable?
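Assuming "disabled" here means the root target's referral state in DFS Management, it can be flipped back with the DFSN cmdlets (2012+) — a sketch; the namespace and server names below are placeholders for yours:

```powershell
# Bring the second root target's referrals back online
Set-DfsnRootTarget -Path '\\abc.com\Root' -TargetPath '\\DatabaseDfsServer\Root' -State Online

# Verify both targets and their states
Get-DfsnRootTarget -Path '\\abc.com\Root' | Select-Object TargetPath, State
```

On question 2: for a domain-based namespace there are no "keys" to move — the namespace definition lives in Active Directory, and any server hosting an enabled root target can serve and manage it.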

How do I correctly size a disk drive with considerations for DFS-R and VSS?


I am trying to determine the total disk space needed for a drive that will be used for DFS replication and needs to have VSS enabled. Here are the details of my theoretical scenario:

  • I need a D: drive that will be used for data storage only
  • I will have a single file share on the D: drive that will hold 500 GB of existing files, and I would like to have 100 GB of free space for future files
  • This file share will be a DFS replicated folder and have a staging quota size of 50 GB
  • The D: drive will need VSS enabled, and I would like to keep a month's worth of previous versions (let's say 100 GB is needed for this)

 

Am I correct in thinking that the D: drive size, at the bare minimum, will need to be 500 GB (existing files) + 100 GB (free space for future files) + 50 GB (DFSR staging quota) + 100 GB (reserved for VSS), for a total size of 750 GB? Are there any other size considerations I need to account for when determining the total size?
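The arithmetic above checks out; as a sanity check:

```powershell
$existing = 500   # GB, current files
$future   = 100   # GB, headroom for growth
$staging  = 50    # GB, DFSR staging quota
$vss      = 100   # GB, reserved for previous versions
$existing + $future + $staging + $vss   # -> 750 GB
```

Two extra considerations worth noting: the DfsrPrivate\ConflictAndDeleted folder has its own quota on the same volume (typically far smaller than the staging quota, but nonzero), and the VSS diff-area size is a cap you configure, not a guarantee that a month of previous versions fits inside 100 GB — churn rate decides that.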


