Hello
I need to know whether it is safe to extend a partition on Windows (file) Server 2012 R2. I have a disk in a virtual server that I need to extend. Do I risk losing data when I resize a deduplicated partition?
Thank you
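For what it's worth, a minimal sketch of how the extension is typically done once the virtual disk has been enlarged in the hypervisor (assuming the deduplicated volume is E:; deduplication metadata lives on the volume itself, so growing the volume does not rewrite it):

# Rescan storage so Windows sees the enlarged virtual disk
Update-HostStorageCache

# Check how far the partition can grow, then extend it to the maximum
$max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
Resize-Partition -DriveLetter E -Size $max

# Sanity check: deduplication savings and status should be unchanged
Get-DedupStatus -Volume "E:"

As always, take a backup or hypervisor snapshot first; an interruption mid-resize is the bigger risk, not deduplication itself.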
Dear All,
While running the DDPEval tool, the report includes values shown against "no compression", and I would like help interpreting them.
Different forums give different explanations of this term, and they do not converge on one answer.
Can somebody provide the actual meaning of this term as reported by the DDPEval tool?
Thanks,
Amit Jogi
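For context, the tool is typically invoked like this (the target path below is hypothetical). One common reading, offered here as an assumption rather than a confirmed definition, is that the "no compression" figures show the savings from duplicate-chunk elimination alone, without chunk compression, as opposed to the combined totals:

# DDPEval.exe ships with the Data Deduplication feature
# (the share path is made up; point it at the folder to evaluate)
C:\Windows\System32\ddpeval.exe E:\Shares\Data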
I am trying to set up file-server auditing by following "https://blogs.technet.microsoft.com/mspfe/2013/08/26/auditing-file-access-on-file-servers/" and "https://mizitechinfo.wordpress.com/2014/07/01/folder-auditing-in-windows-server-2012-r2/",
but it is not working on my DC and file server. Please advise.
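In case it helps narrow things down, both guides boil down to two pieces: an audit policy and an auditing entry (SACL) on the folder itself. A minimal sketch of verifying the policy side from an elevated prompt:

# Enable success/failure auditing for the File System subcategory
auditpol /set /subcategory:"File System" /success:enable /failure:enable

# Confirm what is actually in effect (domain GPOs can override local settings)
auditpol /get /category:"Object Access"

Even with the policy in place, no 4663 file-access events are logged unless the folder also has an auditing entry for the relevant users (Properties > Security > Advanced > Auditing).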
I'm running Windows Server 2012 deployed with Active Directory authentication. Very simple setup:
1) Shared volume (Accounting); local users mount \\ip\share\ authenticating via AD
Does anyone have any experience with utilizing a mapped/mounted/network drive within the AD deployment (meaning the AD server is actually the client, mounting a \\ip\share from another server via SMB/NFS/CIFS)? For example, from an Ubuntu SMB server on the local network.
Thank you
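For reference, a sketch of what such a mount might look like from the Windows side, assuming an Ubuntu box at 192.168.1.50 exporting a Samba share (the address, share, and account names are made up). A mapping created this way is per-user and per-session, so services on the AD server would normally use the UNC path with explicit credentials instead:

# Map the remote Samba share as a drive on the AD server
New-SmbMapping -LocalPath "Z:" -RemotePath "\\192.168.1.50\accounting" `
    -UserName "DOMAIN\svc-files" -Password "..."

# Or reference it directly via UNC without a drive letter
Get-ChildItem "\\192.168.1.50\accounting"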
Dear all,
When I search for document content in an indexed location, no results are displayed, even though you can clearly see that the text exists in the Word document. This happens not for all, but for a lot of in-document text searches.
The server where the files are stored is a Server 2016.
The problem exists both when I search the local drive directly on the server and when I search from Windows 10 clients on the mapped network drive.
- Restarting the Windows Search service did not fix the issue
- The search troubleshooter could not identify the problem
- Rebuilding the search index on the server did not solve the problem
- Setting the registry value "SetupCompletedSuccessfully" to 0 and resetting the index did not solve the problem either (the usual procedure is sketched below)
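For completeness, the registry step above is the commonly documented rebuild procedure, roughly:

# Stop Windows Search, flag setup as incomplete so the index is rebuilt, restart
Stop-Service WSearch
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows Search" `
    -Name "SetupCompletedSuccessfully" -Value 0
Start-Service WSearch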
Do you have any idea how to fix this?
I would like to provide a screenshot, but I'm not allowed to.
Kind regards
Matthias
Hello,
For a couple of months we have had a problem with deduplication in our live production environment.
We have two servers (FS01 and FS02) and use DFS to replicate files between them.
One server is located at the main office and the other at a second office.
We had a DFS backlog of 1,000,000+ files, so we decided to move the servers onto the same network to let the files sync.
When we start garbage collection on FS02, it works well and finishes after 4 to 6 hours.
When we start garbage collection manually on FS01 with high priority, the job starts.
Now, after 3 days, when I check the status it still hangs at 0%.
How can we solve this problem?
We fully updated the server and restarted it before starting the garbage collection.
If you need any more information let me know.
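For reference, this is roughly how we start and watch the job (a sketch assuming the deduplicated volume is D:); the Progress field of Get-DedupJob is what sits at 0% on FS01:

# Kick off garbage collection with high priority on the deduplicated volume
Start-DedupJob -Volume "D:" -Type GarbageCollection -Priority High

# Watch the job queue and its Progress column
Get-DedupJob

# Overall savings and last-run timestamps for the volume
Get-DedupStatus -Volume "D:"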
Hi Team,
SMBv3.0.2 signing requests are failing on EMC VMAX eNAS storage that is joined to Active Directory (Windows Server 2012 R2).
Kindly share if any of you have an idea on this.
Regards,
Sanesh
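A sketch of how one might check the signing requirements from the Windows side (the eNAS has its own signing settings; these cmdlets only cover the Windows 2012 R2 machines involved):

# What the Windows SMB client offers/demands for signing
Get-SmbClientConfiguration | Select-Object RequireSecuritySignature, EnableSecuritySignature

# What a Windows SMB server would require, for comparison with the eNAS settings
Get-SmbServerConfiguration | Select-Object RequireSecuritySignature, EnableSecuritySignature

# Which SMB dialect established connections actually negotiated
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect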
Hello - sorry if this is the wrong forum, but since it deals with printers shared over SMB, I thought it might fit in the "file services" section.
We have 500 users printing via \\print2008 right now, so I can't just stand up the new machine, \\print2016, and immediately add a DNS CNAME pointing print2008 at the new print2016 machine until I'm sure everything works.
So I thought that perhaps I could use the hosts file on my office workstation to do a simple alias.
I put:
<ip address> print2008
<ip address> print2008.fqdn
into the hosts file and hoped it would work.
However, sadly, none of my printers previously mapped to print2008 work on my workstation after a reboot. Whenever I try to print, I get a GUI-based "access is denied" error, and Event Viewer logs the following:
The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server print2016$. The target name used was host/print2008. This indicates that the target server failed to decrypt the ticket provided by the client. This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using. Ensure that the target SPN is only registered on the account used by the server. This error can also happen if the target service account password is different than what is configured on the Kerberos Key Distribution Center for that target service. Ensure that the service on the server and the KDC are both configured to use the same password.
So I'm guessing that, for security's sake, print2016 is refusing to handle my requests because my workstation is presenting them as being for print2008. I followed what instructions I was able to from here, like turning off strict name checking (DisableStrictNameChecking) and turning on DnsOnWire. I also added the BackConnectionHostNames entry; basically everything that didn't give me an error message. But I couldn't add the dreaded SPN record for Kerberos authentication, because of course I get the "duplicate SPN found, aborting operation!" error.
Of course there's a duplicate SPN; print2008 is being used by 500 users!
If I were 100% sure that everything was working great on print2016, I'd go ahead and make the changes in DNS, like adding the CNAME record the link in the previous paragraph describes, but I can't risk anything happening to print2008 while it's in active use.
Can anyone recommend a good way to set up my testing environment so I can basically fool my workstation into being okay with sending "bad" kerberos tickets to print2016, and have print2016 be okay with accepting them?
Thanks.
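For anyone reproducing this, a sketch of the checks involved (print2008/print2016 as above; the registry values are the commonly documented ones for server aliases, so treat them as pointers rather than a verified recipe):

# See which account currently holds the SPNs for the old name (the "duplicate")
setspn -Q host/print2008

# Alias-related settings referenced above:
#   DisableStrictNameChecking (DWORD = 1) under
#     HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
#   DnsOnWire (DWORD = 1) under
#     HKLM\SYSTEM\CurrentControlSet\Control\Print
#   BackConnectionHostNames (MULTI_SZ, add print2008) under
#     HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0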
We are experiencing issues with server access that seem to be related to Shadow Copy. This is being seen on multiple servers with anywhere from 20 to 35 TB of storage and at least 10 TB of data on the data drive. I have been able to reproduce what I am seeing on Server 2008, Server 2012, and Server 2016.
Here is a description of what I am seeing. We have a shadow copy run at noon. A little after 12:05 we see access to the server appear to freeze. This goes on for about 3 to 5 minutes. During this time, copy jobs, processes running on the server, and even attempts to delete small files will halt or pause. Sometimes processes crash, and we see copy jobs fail as well. We often run processes that take days to complete, and copy jobs are often multiple terabytes. It looks to me like the actual shadow copy has completed by the time we see the freezing; I assume that because the current job in Computer Management shows a time of 12:05, which I take to be when the shadow copy itself completed.
From what I've read about shadow copies, write I/O is frozen during snapshot creation in order to quiesce the file system, but if I understand correctly that happens at the beginning of the process, should only last a few seconds, and is not allowed to last longer than 60 seconds; typically the write I/O freeze lasts only about 10 seconds.
So I am wondering whether there is an issue with the way Shadow Copy is completing its task, or whether this is to be expected given the quantity of data on the servers.
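For anyone else chasing this, a sketch of what we look at when the freeze hits (standard VSS tooling; the timestamps should confirm whether the snapshot itself finished before the stall began):

# Creation times of existing shadow copies on the data volume
vssadmin list shadows /for=D:

# State of the VSS writers; one stuck "Waiting for completion" is a red flag
vssadmin list writers

# volsnap events around noon often explain stalls (diff-area growth, held I/O)
Get-WinEvent -FilterHashtable @{LogName='System'; ProviderName='volsnap'} -MaxEvents 20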
Hi
We have a Windows 2008 domain. Our file server is a NetApp.
When a user tried to open a file on the file server, the user got a message along the lines of "The file is open by user A. You can only read the file; you cannot modify it."
But the Open Files window on the file server showed user B holding the file open. In fact, user A's account had been disabled, and user B was the person who actually had the file open at that moment.
How did the user get incorrect information about who had the file open?
Please help!
Thanks in advance!
Grace
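For comparison, on a Windows file server the authoritative answer comes from the SMB server itself, e.g. the sketch below (the file name is hypothetical, and this requires Server 2012 or later, so it does not apply to the NetApp filer directly; the equivalent list lives on the filer):

# Who actually holds the file open right now, per the SMB server
Get-SmbOpenFile | Where-Object Path -Like "*Budget.xlsx*" |
    Select-Object ClientUserName, ClientComputerName, Path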
On Windows Server 2012 R2 (and reproduced on 2016), we see significantly faster performance from our SANs (both Compellent and SolidFire) when accessing files through a drive letter, versus accessing the same volume without one.
Why would assigning a drive letter make file performance that much faster? We'd actually prefer NOT to have to use a drive letter, but we can't argue against the performance gains.
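For concreteness, one common pairing in comparisons like this is a drive letter versus the same volume mounted in an empty NTFS folder; a sketch (disk and partition numbers and the mount path are made up):

# Option A: expose the SAN volume via a drive letter (the fast case for us)
Add-PartitionAccessPath -DiskNumber 2 -PartitionNumber 1 -AccessPath "S:"

# Option B: mount the same volume into an empty NTFS folder instead
# (the folder must already exist and be empty)
Add-PartitionAccessPath -DiskNumber 2 -PartitionNumber 1 -AccessPath "C:\Mounts\SAN01"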
Hi Guys
I have two file servers, both running Windows Server 2008 R2 Standard. Currently, all Windows 7 client PCs can connect to both servers to access shared files and folders. The problem is that all Windows 10 client PCs can connect to only one of the two servers (File Server 1); they cannot connect to File Server 2.
The error message all Windows 10 client PCs receive when trying to access File Server 2 is:
"\\File-Server2 is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permission.
The trust relationship between this workstation and the primary domain failed"
I have tried re-joining the Win 10 PCs to the domain, but it didn't work. Why is this file server refusing connections only from Win 10 clients? I need the Win 10 clients to be able to access the shared folders on this server as well. Can anyone please assist? Much appreciated.
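Since the quoted error is about the machine's secure channel rather than share permissions, a sketch of what one might run on an affected Windows 10 client, and arguably on File-Server2 itself, before resorting to another domain re-join (domain, DC, and account names are hypothetical):

# Verify the secure channel with the domain; -Repair resets it in place
Test-ComputerSecureChannel -Verbose
Test-ComputerSecureChannel -Repair -Credential (Get-Credential CONTOSO\admin)

# Alternatively, reset the machine account password against a specific DC
Reset-ComputerMachinePassword -Server DC01 -Credential (Get-Credential CONTOSO\admin)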
Hi,
Testing out a new backup repository.
I have a QNAP NAS with 12 disks. It's currently connected to my Aruba switch via 2x 1 Gbit links. My backup server is physical, with 4x 10k disks in RAID 5, running Windows Server 2012 R2. The backup server is also connected to the same switch with 2x 1 Gbit links.
The QNAP is an iSCSI target, and MPIO has been enabled on the backup server.
When I copy a 50 GB test file from the backup server to the QNAP, the progress bar shows a speed of 350 MB/s, which is impossible. After the progress bar indicates the file has been copied (after ~3 minutes), a good portion of the file is still in the backup server's memory, and both NICs keep transferring ~1 Gbps for about two minutes after the file was supposedly copied. You can see memory consumption first rise by 20 GB, then slowly fall back to normal. On the receiving end, the QNAP shows a transfer speed of 200 MB/s the whole time.
Is there some kind of built-in caching in MPIO? Is there a way to disable it, or to significantly lower the amount of cache?
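For what it's worth, MPIO is a path-balancing driver for SCSI I/O and has no file-level cache of its own; the ballooning memory is more consistent with Windows' file-system cache holding dirty pages and flushing them after the copy dialog closes. A sketch of where MPIO's actual knobs live, for ruling it out:

# MPIO's tunables (path verification, retry counts; no data cache among them)
Get-MPIOSetting

# Per-disk MPIO status and load-balancing policy
mpclaim -s -d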
A while back I removed the Folder Redirection GPO because of numerous issues with Offline Files. Recently I noticed the user data folder still had one user's data in it, and I found that his Outlook archive, located at Data\Username\documents\.sync\Archive\Outlook Files, was full of incremented archive PST files totaling 375 GB, which is syncing multiple times a day. Any idea what's going on and how to correct it?
Also, is there a way to back up users' My Documents folders to the server without Folder Redirection? Maybe something that uses shadow copies stored on the server.
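On the second question, a sketch of one low-tech alternative: a scheduled robocopy mirror of each user's Documents folder to a per-user server share (server and share names are made up), with shadow copies enabled on the server volume to give point-in-time Previous Versions:

# Mirror local Documents to the user's area on the server
# (/MIR deletes server-side files removed locally; drop it to keep everything)
robocopy "$env:USERPROFILE\Documents" "\\FS01\UserBackups$\$env:USERNAME\Documents" /MIR /R:2 /W:5 /XJ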