The Surly Admin

Father, husband, IT Pro, cancer survivor

DFS Replication Monitoring

For the past two years I’ve been working, on and off, on a way to easily monitor my DFS replication tree. I just want to make sure backlogs aren’t getting stuck and so forth. The Microsoft tools are pretty unwieldy for this, by the way.  This sent me on the journey into DFS and how it works and how I can better monitor it.  Read on to see how it went!

Version 2.6 release: An update to this post: I’ve recently released version 2.6 of DFS Monitor with History, which now saves the data in XML format. Read more about it here.

My first attempt was the DFS Replication Monitor, written in VBScript. You can find it here. It’s a good script, recently updated based on some Spiceworks community feedback, but it’s just a snapshot view of DFS as it was at the time the script ran. It also shells out to DFSRDIAG.exe, so it requires some setup before it will run properly.

Up next was the DFS Monitoring Widget. This was originally going to be a widget you could put on your Spiceworks dashboard, though that functionality never worked too well (chalk that up to my not knowing Javascript in the slightest!) and has since been removed from Spiceworks. This script was written in PowerShell and did much the same as the DFS Replication Monitor, but it went a step further: it saved the data and built a nice annotated timeline using Google visualizations. This worked really well, but it had problems too: it still shelled out to DFSRDIAG.exe and used SQL Server to save the historical data. Getting all of that set up and working properly was a bear!

Time for a better solution, and I think this new script, DFS Monitor With History, is the answer. It was an interesting experience getting this script written! A lot of work, a lot of testing, then a lot more work and a lot more testing! I want to thank my friend ChristopherO for helping me out with the testing; it was his environment that kept breaking the script that had been working perfectly for me!

I’m calling this script version 2.0, but in reality there were a couple of completely different scripts in between the DFS Monitoring Widget and this one. Since they will never see the light of day, this one gets the clean slate that is 2.0! The new script had to accomplish several goals:

  1. Had to be easier to set up than DFS Monitoring Widget
  2. Eliminate using DFSRDIAG.exe.  I knew this was possible when I ran across this script:  Get-DFSRBacklog.ps1.  To be honest, my first rewrite borrowed heavily from it, and I still use the general structure, though with some pretty heavy rewriting.
  3. Increase the reliability–I’ll get into this later.

Goal #1: Make it easy

As I used PowerShell more and more, it became apparent that the original way I was saving data to SQL was perfectly fine, and believe it or not even has some performance benefits, but it made setup pretty difficult. I felt this was not only a barrier to using it but kind of defeats the purpose of a script, which is supposed to be a quick, easy way to get something done. So saving the data had to be easy, if not transparent, to the user. Then I discovered the Import-Csv cmdlet. With one simple command I could load an entire set of data into an array and manipulate it any way I wanted, and Export-Csv would then save the data back into a Comma-Separated Values file. Goal #1 accomplished. Of course, it’s never as easy as all that, but I’ll spare you the details!
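The CSV round-trip really is as simple as it sounds. A minimal sketch (the file path and property names here are illustrative, not the script's actual ones):

```powershell
# Load any prior runs into an array of objects (empty array on first run)
$HistoryPath = "C:\Scripts\DFSData.csv"        # illustrative path
$Data = @()
If (Test-Path $HistoryPath)
{   $Data = @(Import-Csv $HistoryPath)
}

# Append this run's results and save the whole set back out
$Data += New-Object PSObject -Property @{
    RunDate = Get-Date -Format "yyyyMMddHHmm"
    Group   = "ReplGroup1"
    Backlog = 42
}
$Data | Export-Csv $HistoryPath -NoTypeInformation
```

No SQL Server, no connection strings, nothing to install: the whole persistence layer is two cmdlets.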

On to Goal #2: Begone foul DFSRDIAG.exe

Reading and playing with Get-DFSRBacklog, I soon figured out how to get my script to read the backlog count, and even the backlogged files, directly out of WMI. I wrote the script, spent several hours working on a bug that turned out to be a PEBKAC, and it all looked good. I was getting good data over and over again, so I was pretty happy. Then I handed it over to ChrisO and bang! Several things happened. First, it was slow, slow, slow. Chris has over 25 remote sites, some of them over very slow links (DSL, baby), and the script crawled. What’s worse, several of the sites were failing to communicate at all, causing the script to return nothing!
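The WMI route, roughly as Get-DFSRBacklog does it: ask the receiving member for its version vector, then ask the sending member how many files it still owes against that vector. A hedged sketch (server and folder names are placeholders, and error handling is omitted; a real run would loop over every connection):

```powershell
# DFSR exposes its state in the root\MicrosoftDFS WMI namespace
$WmiNS  = "root\MicrosoftDFS"
$Folder = "Documents"                           # placeholder folder name

# 1. Get the receiving member's version vector for this replicated folder
$RecvInfo = Get-WmiObject -ComputerName "ServerB" -Namespace $WmiNS `
    -Query "SELECT * FROM DfsrReplicatedFolderInfo WHERE ReplicatedFolderName = '$Folder'"
$Vector = $RecvInfo.GetVersionVector().VersionVector

# 2. Ask the sending member how many files it still has to send against that vector
$SendInfo = Get-WmiObject -ComputerName "ServerA" -Namespace $WmiNS `
    -Query "SELECT * FROM DfsrReplicatedFolderInfo WHERE ReplicatedFolderName = '$Folder'"
$Backlog = $SendInfo.GetOutboundBacklogFileCount($Vector).BacklogFileCount
```

Two WMI calls per connection, no DFSRDIAG.exe anywhere in sight.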

How to handle this? I went back to the drawing board, really studied what I was trying to do, and started seeing some patterns. Instead of making multiple WMI calls, why not one big one to gather all the data? This helped a lot, and my sites ran much quicker, but poor ChrisO was still slow and still getting WMI failures. Then ChrisO asked about multi-threading, and even found a great link on how to do it! This was very interesting, but would require a massive re-write. A couple of days later I had a working version of the script using multi-threading, and what a difference! My own sites went from taking about 15 minutes to produce results to less than 5 minutes. I couldn’t wait to see how ChrisO’s sites did.
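The multi-threaded version, in rough outline (server names are placeholders, and the real script does quite a bit more bookkeeping): fire one background job per server, then collect everything once the slowest site finishes.

```powershell
# One background job per server: the slow DSL links now only cost
# as much as the single slowest site, not the sum of all of them
$Servers = "ServerA","ServerB","ServerC"       # placeholder names
ForEach ($Server in $Servers)
{   Start-Job -ScriptBlock {
        Param ($Server)
        Get-WmiObject -ComputerName $Server -Namespace "root\MicrosoftDFS" `
            -Class DfsrReplicatedFolderInfo
    } -ArgumentList $Server | Out-Null
}

# Wait for all jobs, harvest the objects they return, then clean up
$Results = Get-Job | Wait-Job | Receive-Job
Get-Job | Remove-Job
```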

Massive improvement for Chris too!  Success!  Time to get a cold drink, put my feet up and bask in my brilliance.  Or maybe not.  The script now ran all of his sites in less than 10 minutes, but when it built the Google visualization it was taking hours!  What?!  I had re-written this part too to be more reliable, and it had been working fine on my sites for weeks.  WTF?

So Chris had set his data to save for 14 days, running every hour, for about 45 different replication group/folders, and since each of those has both an incoming and an outgoing connection, double that.  That’s a little over 30,000 records.  To build the Google visualization I have to query those records for every distinct hour (that’s 24 x 14) and every distinct replication group/folder (another 45), so that’s about 15,000 queries.  When I used the PowerShell Measure-Command cmdlet I found each query was taking about 1.5 seconds, which meant it was going to take about 6 hours to build the Google visualization!  For a script you want to run hourly, that’s not such a good number.

The fix here turned out to be simple and effective.  Since I was querying against both date and replication group/folder, I simply did a query on the date alone first.  Then when I was looking for the folder I only had to query against about 45 records.  Suddenly each query was down to something like .0025 seconds, and the whole build time for the Google visualization dropped to under 15 minutes.
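The two-stage filter in miniature (a fragment; the property names and the $Hours/$Groups collections are illustrative, not the script's actual ones):

```powershell
# Cut the 30,000-record set down by date first, so the inner
# per-group lookup only scans that hour's handful of records
ForEach ($Hour in $Hours)
{   $Slice = $Data | Where { $_.RunDate -eq $Hour }       # ~30,000 -> ~90 records
    ForEach ($Group in $Groups)
    {   $Rows = $Slice | Where { $_.RFName -eq $Group }   # now a cheap scan
        # ...feed $Rows into the visualization...
    }
}
```

Same number of logical queries, but the expensive one now runs against a tiny slice instead of the whole history.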

Goal #3: Reliability

One thing I kept running into, though not consistently, was reliability.  I use PING to test whether a server is even there before I start making WMI calls against it.  Why not use the Test-Connection cmdlet, you ask?  I would prefer that too, but to be honest PING gives me more information!  So I PING.  You probably should too.  But occasionally PING would fail, because networks aren’t 100% reliable–sorry if I burst a bubble there, but it’s true.  I decided not to let this bother me too much, since this script is meant to run hourly and will pick up any missed data on the next run–and honestly this data isn’t so important that a missed run is a big deal.  But I had to modify the script to deal with missed PINGs.
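Roughly how that reachability test works (the parsing pattern here is an assumption, not the script's exact code; keeping the raw PING output is the whole point, since it tells you *why* a server didn't answer):

```powershell
# Shell out to PING and keep the raw output for diagnostics
$Result = PING $Server -n 4
If ($Result -match "Reply from")
{   # At least one echo reply came back; safe to attempt WMI calls
}
Else
{   # Server down or link flaky; skip it, the next hourly run will catch up
    Write-Warning "$Server did not respond to PING: $($Result -join ' ')"
}
```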

Another fact of life is that WMI calls don’t always work, and they return a big fat nothing when that happens.  I ended up writing a custom WMI function that retries up to 3 times if a call fails, and almost always the next call (or the one after) works.  If it still isn’t working after that, report on it and move on.  I ended up re-writing the multi-threading to use this new function too, so that was a lot of re-writing and testing!
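Not the original function, just its shape: try the query up to three times, and if it never succeeds, report and carry on (the function name and parameters here are hypothetical):

```powershell
# Retry wrapper for flaky WMI: transient failures almost always
# succeed on the second or third attempt
Function Get-WmiWithRetry
{   Param ($ComputerName, $Query, $Retries = 3)
    For ($Try = 1; $Try -le $Retries; $Try++)
    {   $Result = Get-WmiObject -ComputerName $ComputerName `
            -Namespace "root\MicrosoftDFS" -Query $Query -ErrorAction SilentlyContinue
        If ($Result) { Return $Result }
    }
    # Still nothing after all retries: log it and let the caller move on
    Write-Warning "WMI query failed $Retries times against $ComputerName"
    Return $null
}
```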

Next thing that isn’t as reliable as it could be?  Multi-threading.  PowerShell uses something called jobs for multi-threading.  The idea is you use the Start-Job cmdlet to submit a block of script as a background process.  You can use Get-Job to monitor its progress and Receive-Job to retrieve any object it might be returning.  As you may know, PowerShell is all about objects, and just about every cmdlet you run returns some form of object–and if it doesn’t, it should!  In general any script you write should also return an object, even if it’s just a string saying “Completed!”  Of course, this script doesn’t, but hey, I did say “in general.”  Back on subject: you can then use Remove-Job to clean up after yourself and remove the job from existence.  Ryan, who I linked to earlier, had a great way of seeing whether all the jobs you submitted have finished using the Count property, and most of the time this works, but there are plenty of times it just doesn’t.  It’s very intermittent, and getting it to happen in test was just too hard.  What does that mean?  Basically, I wrote the code to recognize when it’s having a problem retrieving data from a background job and try again.  If it still can’t get the data after three tries, it gives up and moves on.  This goes back to the idea that since the data isn’t that important on any given run, we can live with a zero count every now and then.  Trying to solve this was more work than it was worth!
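The defensive receive, as a sketch (again the shape of the idea, not the script's exact code): if a completed job hands back nothing, re-read it a couple of times before settling for a zero count.

```powershell
# Retry the receive: -Keep leaves the job's output in place
# so a second or third read is possible
$Attempt = 0
Do
{   $JobData = Receive-Job -Job $Job -Keep
    $Attempt++
} Until ($JobData -or $Attempt -ge 3)

Remove-Job -Job $Job -Force
If (-not $JobData)
{   # Live with the zero count; the next hourly run fills the gap
    Write-Warning "Job $($Job.Name) returned no data after $Attempt tries"
}
```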

Working Script

The script has now been working for about 2 weeks at both my sites and ChrisO’s so I feel pretty safe releasing it to the wild.  I figure if it can survive Chris’ environment it can handle most anything–and now I’ve jinxed myself.

If you need a link to download DFS Monitor With History, just click here.

August 3, 2012 - Posted in PowerShell

70 Comments »

  1. […] much matter if they’re little projects–like this one–or big ones–like The DFS Monitor With History.  This is why I’m always on the Spiceworks IT Programming forums and PowerGui.org’s […]

    Pingback by Restart-Computers with Firm Confirmation « The Surly Admin | August 28, 2012 | Reply

  2. Not entirely sure as I’m not getting the error, but who knows it might crop up for me too!

    Here’s a line to try to replace line 106:
    { $Data = $Data | Where {$_.ConvertToDateTime($_.RunDate) -ge $Now}

    Let me know if it works?!

    Comment by Martin9700 | September 13, 2012 | Reply

    • Sorry mate, no good; made the log heaps bigger and now get this error:

      DEBUG: Loading data…
      Method invocation failed because [CSV:System.Management.Automation.PSCustomObje
      ct] doesn’t contain a method named ‘ConvertToDateTime’.
      At E:\DFSMonitor\DFSMonitorWithHistory.ps1:107 char:46
      + { $Data = $Data | Where {$_.ConvertToDateTime <<<< ($_.RunDate) -ge $Now}
      + CategoryInfo : InvalidOperation: (ConvertToDateTime:String) [],
      RuntimeException
      + FullyQualifiedErrorId : MethodNotFound

      Cheers

      Comment by Garth | September 13, 2012 | Reply

      • Yeah, that was a half-baked attempt at 1:20am 🙂 Let me get into work tomorrow and take a stab at it and see what I come up with. I actually ran into something very similar today working on a different script so I think it’s just a matter of formatting that method properly.

        Comment by Martin9700 | September 13, 2012

  3. Garth! Sorry man I accidentally deleted your first comment!! Who knew a pencil icon in the WordPress app was delete? Especially when there was a trashcan icon 2 over!! Yikes!

    Comment by Martin9700 | September 13, 2012 | Reply

    • Thanks for your time. It is funny: yesterday after I put in your quick fix and it didn’t work, I put back the original script and it worked fine on the next run; however, every subsequent run has failed with the below error:

      DEBUG: Loading data…
      Cannot convert value “14/09/2012 8:00:01 AM” to type “System.DateTime”. Error:
      “String was not recognized as a valid DateTime.”
      At E:\DFSMonitor\DFSMonitorWithHistory.ps1:106 char:39
      + { $Data = $Data | Where {[DateTime]$_. <<<< RunDate -ge $Now}
      + CategoryInfo : NotSpecified: (:) [], RuntimeException
      + FullyQualifiedErrorId : RuntimeException

      Thanks for your help.

      Comment by Garth | September 13, 2012 | Reply

      • I think I see the problem… there is no month 14! Are the rest of the dates on the chart pretty wonky or are they coming over OK?

        Comment by Martin9700 | September 14, 2012

    • OK, I updated the source code on Spiceworks to save the data in the right format, so when it reads it back in it won’t choke on it. Should work. Next question: can you live with simply deleting all your data and starting over, or do we need to convert it? We should be able to read it in, do some string manipulation and spit it back out, but if the data’s not THAT important then why bother? Let me know.

      Comment by Martin9700 | September 14, 2012 | Reply

      • Error again, when it runs on the schedule:

        Transcript started, output file is E:\DFSMonitor\debug201209180900.log
        DEBUG: Loading data…
        Cannot convert value “18/09/2012 8:00:03 AM” to type “System.DateTime”. Error:
        “String was not recognized as a valid DateTime.”
        At E:\DFSMonitor\DFSMonitorWithHistory.ps1:105 char:39
        + { $Data = $Data | Where {[DateTime]$_. <<<< RunDate -ge $Now}
        + CategoryInfo : NotSpecified: (:) [], RuntimeException
        + FullyQualifiedErrorId : RuntimeException

        Comment by Garth | September 17, 2012

  4. Martin,

    That fixed the error with running the script, however that broke the HTML generation code. Details have been emailed to you.

    Cheers,

    Garth

    Comment by Garth | September 17, 2012 | Reply

  5. Working with Garth offline I was able to correct the problem and the new code has been posted to Spiceworks! Read what happened tomorrow 🙂

    Comment by Martin9700 | September 27, 2012 | Reply

  6. […] for reliability, not for performance and that’s really true.  I ran into this a lot on the DFS Monitor project, where running queries against 40,000 records in memory were taking 1.2 seconds or so.  […]

    Pingback by Powershell and String Searches « The Surly Admin | October 8, 2012 | Reply

  7. […] Replication of the data is even included (could use some much better tools for monitoring that, but that’s another story).  As an administrator, this feature is fantastic because I can change servers in the background […]

    Pingback by DFS Adventures « The Surly Admin | October 21, 2012 | Reply

  8. […] updated DFS Monitor with History (source code here, blog here and here).  I’m a little bit better at Powershell now–a lot of thanks go to this blog […]

    Pingback by Random Thought Friday « The Surly Admin | October 26, 2012 | Reply

  9. […] far my most popular post is the DFS Replication Monitor With History script, so I like to revisit it every now and then and make sure I’m doing things the best […]

    Pingback by DFS Replication Monitor With History Upgrade 2.6 « The Surly Admin | December 20, 2012 | Reply

  10. […] ConvertTo-HTML doesn’t have anything like that in it.  I solved this problem on my DFS Monitor Report by simply constructing the HTML myself, but honestly that’s a pain and I’d rather avoid […]

    Pingback by How to Create HTML Reports « The Surly Admin | January 21, 2013 | Reply

  11. […] especially in the creating and retrieving of the job.  If you create a lot of jobs–like DFS Monitor with History does–you could be leaving a lot of performance on the […]

    Pingback by Multithreading Powershell Scripts « The Surly Admin | February 11, 2013 | Reply

  12. I noticed that the script would not run scheduled; I had to change the
    5b. Argument: -ExecutionPolicy Bypass -command
    to
    5b. Argument: -ExecutionPolicy Bypass -file

    Comment by Janne | March 14, 2013 | Reply

  13. Hi Martin,
    great script, but I ran into an error that I can’t fix or even name:

    Exception calling “Substring” with “2” argument(s): “Index and length must refer to a location within the string.
    Parameter name: length”
    At C:\temp\DFSRwithHistory\DFSMonitorWithHistory.ps1:530 char:36
    + { $FileName = $File.Name.Substring <<<< (14,2) + "/" + $File.Name.Substring(16,2) + "/" + $File.Name.Substring
    (10,4) + " " + $File.Name.Substring(18,2) + ":" + $File.Name.Substring(20,2)
    + CategoryInfo : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : DotNetMethodException

    Do you have a hint for me???

    Greetings,
    Michael

    Comment by Michael | March 20, 2013 | Reply

  14. hi Martin

    this is a great script, but how do I overcome the WMI error? There is definitely nothing wrong with the WMI on any of my 30 file servers. Could you maybe tell me how to fix it?

    Regards
    Hein

    Comment by Heinrich van den Heever | May 20, 2013 | Reply

    • Hi Hein, what is the error? You’ll want to look at the log for a detailed message. What version of Windows are your servers?

      Comment by Martin9700 | May 20, 2013 | Reply

      • Attached herewith the log file, and also what I am getting as a result. It does show the correct results, but still this WMI error.

        regards


        Comment by Heinrich van den Heever | May 20, 2013

  15. Hein, no attachment support in comments. You can just email it to me at martin@pughspace.com

    Comment by Martin9700 | May 20, 2013 | Reply

  16. This is exactly the tool that should be included with dfsr! I have set it on a scheduled task and sending the results to the sysadmin monitoring server. Thank you!

    Comment by sadyer | July 25, 2013 | Reply

  17. This is a great solution. However, there are a couple of things that would make it even better for my environment. I have two dfs hub servers, each of which host dfs replicas for 5 or 6 branch offices. The replicated folder for each branch office is called ‘sharedata’ and therefore the legend in the chart shows many, many multi-colored items called sharedata. Makes it hard to distinguish which replication groups or servers we’re looking at.

    The other matter is the details table, which lists the number of backlogged files for each connection (great) along with a list of the top 100 backlogged files (not so great or really that helpful, and it buggers up the table’s readability).

    To end on a good note, your instructions for getting this going on a web server were superb, the script is amazing, and I look forward to any suggestions you may have regarding the two minor issues above. Thank you so much for giving to the community.

    Comment by HeyAdmin | August 8, 2013 | Reply

  18. Hey Hey 🙂
    Just as a matter of best practice you don’t want your folder names to be the same, and this is exactly why: a lot of opportunity for confusion. That said, I think we can accommodate you, assuming the group names are different.

    On lines 595, 596 and 597 replace $_.RFName with
    ($_.RFGUID.Split(":"))[1]

    Which should give you the Group name. You can also just use $_.RFGUID which will give you the folder AND group name.

    As for the colors, we’re limited by the Google visualization (you mentioned this in your comment on Spiceworks) and there is no way to set the colors in the legend, it’s all automatic.

    Last, backlogged files we can just not put them in there. Go over to line 543 and change it to:
    html += “

    I think that will cover your needs!

    Comment by Martin9700 | August 9, 2013 | Reply

    • Alright, I modified line 543 as follows:

      { $html += “”

      However, although the “details from last run” does show the correct last run date/time, when i click on it, it takes me to the previous run’s table.

      Nothing in the debug log…

      Comment by HeyAdmin | August 9, 2013 | Reply

      • Disregard this post! It’s working perfectly. Must have been cached IE files. Thanks SO much for your help!

        Comment by HeyAdmin | August 9, 2013

  19. Thanks for the suggestions. I updated 595, 596, and 597 as recommended (updated code below [594-597). Unfortunately the chart doesn’t render…just blank space. I can send the debug log if it will help.

    { $Data | Where {$_.RGGUID -eq $Group.RGGUID} | Select RFName -First 1 | ForEach {
    $html += "data.addColumn('number', '" + ($_.RFGUID.Split(":"))[1] + "');`n"
    $html += "data.addColumn('string', '" + ($_.RFGUID.Split(":"))[1] + "Status');`n"
    $html += "data.addColumn('string', '" + ($_.RFGUID.Split(":"))[1] + "ErrorMsg');`n"

    Comment by HeyAdmin | August 9, 2013 | Reply

    • Ah, I see the problem: the Select statement only took RFName, not RFGUID. Try removing RFName from the Select statement so it looks like:

      Select -First 1

      Comment by Martin9700 | August 9, 2013 | Reply

      • Same result. Here is a pertinent snippet from the debug log, which repeats several times in the log…

        At C:\scripts\DFSMonitorWithHistory.ps1:595 char:4
        + $html += “data.addColumn(‘number’, ‘” +
        ($_.RFGUID.Split(“:”))[1] + “‘);`n”
        +
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo : InvalidOperation: (:) [], RuntimeException
        + FullyQualifiedErrorId : InvokeMethodOnNull

        You cannot call a method on a null-valued expression.
        At C:\scripts\DFSMonitorWithHistory.ps1:596 char:4
        + $html += “data.addColumn(‘string’, ‘” +
        ($_.RFGUID.Split(“:”))[1] + “Status’) …
        + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        ~~~~~~~~~~~~
        + CategoryInfo : InvalidOperation: (:) [], RuntimeException
        + FullyQualifiedErrorId : InvokeMethodOnNull

        Comment by HeyAdmin | August 9, 2013

      • AHA! Instead of using RFGUID on lines 595, 596, and 597, I used RGGUID. Now the RGNames are displayed on the legend instead. Mission accomplished! Now I’m going to move on to removing backlogged file names.

        Comment by HeyAdmin | August 9, 2013

  20. Enhancement Recommendation: Email alerts based on configurable backlog threshold. Perhaps if a particular backlog remains above a configurable threshold (eg. 500) for a configurable time (eg. 60 minutes) an email notification is sent that includes the URL of the report website.

    Great stuff, Martin. Great stuff!

    Comment by HeyAdmin | August 9, 2013 | Reply

  21. Google visualization suggestion: Everything is working splendidly and this is a great solution. I have a recommendation regarding the Google visualization. In our environment, DFSR backlogs reach into the thousands for short periods of time. So when we look at the default time slice of 5 days, the max scale of the chart is also in the thousands. However, if we want to look at a smaller slice using the chart’s zoom feature, the scale is the same even though the maximum data value for the time slice might be 50. Hence, on a chart with a scale in the thousands, lines that represent data in the 40 – 60 range look flat. I’ve added the ‘scaleType’ option to line 626 (not sure my line numbers jibe with the original but you’ll find it):

    From:

    $html += "chart.draw(data, {displayAnnotations: false, legendPosition: 'newRow'});`n"

    To:

    $html += "chart.draw(data, {displayAnnotations: false, legendPosition: 'newRow', scaleType: 'maximized'});`n"

    Comment by HeyAdmin | August 26, 2013 | Reply

  22. Love the script. (Running Server 2012 FYI) Few ideas and one error.

    1. I would love some sort of email alerts that only send based on pre-defined conditions.

    2. It would also be neat if, after a backlog greater than x is reported, the script went back and checked that member/folder combo again to see if it is decreasing, and then reported whether the value is moving or the backlog is stuck.

    3. I tried to parse through the script and find a way to change this, but it appears that my disabled folders in an RG are reported as WMI errors rather than disabled. I took a look at the code (line 773), but it appears that WMI may be returning an error, and it never gets to the else at 773. Thoughts?

    I would be happy to help troubleshoot or add some email features. Send me a message or reply.

    Thanks for the script!

    Comment by Patrick Johnson | October 9, 2013 | Reply

  23. Hi Martin,
    I have your brilliant script working with 7 servers, but one of the servers shows WMI ERROR in the “Files” section all the time. I have set WMI security rights and DCOM rights and reset the WMI repository, but nothing helps; I have even disabled the firewall. Can you point me in some direction?
    Regards Martin

    Comment by Martin Lindemann Frederiksen | October 23, 2013 | Reply

  24. Hi, I’m trying to run this tool on my Win 8.1 German-language machine.
    The servers are 2k8R2 servers.

    I get the following 2 errors:
    —————————————————– 1 —————————————-
    Cannot convert value "Montag, 2. Dezember 2013 13:21:34" to type "System.DateTime". Error: "The string was not recognized as a valid
    DateTime. There is an unknown word starting at index 0."
    At C:\dfsr\DFSMonitorWithHistory.PS1:168 char:1
    + [DateTime]$ScriptRunDate = (Get-Date).DateTime
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : MetadataError: (:) [], ArgumentTransformationMetadataException
    + FullyQualifiedErrorId : RuntimeException

    Get-Date : Cannot bind parameter "Date" to the target. Exception setting "Date": "Object reference not set to an instance of
    an object."
    At C:\dfsr\DFSMonitorWithHistory.PS1:169 char:28
    + $SaveFormatDate = Get-Date $ScriptRunDate -format yyyyMMddHHmm
    + ~~~~~~~~~~~~~~
    + CategoryInfo : WriteError: (:) [Get-Date], ParameterBindingException
    + FullyQualifiedErrorId : ParameterBindingFailed,Microsoft.PowerShell.Commands.GetDateCommand
    —————————————————– 1 —————————————-

    and

    —————————————————– 2 —————————————-
    Exception calling "Substring" with 2 argument(s): "Index and length must refer to a location within the string.
    Parameter name: length"
    At C:\dfsr\DFSMonitorWithHistory.PS1:570 char:4
    + { $FileName = $File.Name.Substring(14,2) + “/” + $File.Name.Substring(16,2) + ” …
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : ArgumentOutOfRangeException
    —————————————————– 2 —————————————-
    The debug file also shows ‘unknown error’ on each server

    Any Help/Ideas?

    kr
    Mike

    Comment by Mike | December 3, 2013 | Reply

  25. I’m getting the following error:

    You cannot call a method on a null-valued expression.
    At D:\DFSMonitorWithHistory.ps1:268 char:23
    $Line[1].ToUpper <<<< () -eq $Folder.ReplicatedFolderName.ToUpper() -and
    CategoryInfo: InvalidOperation: (ToUpper:String) [], RuntimeException
    FullyQualifiedErrorId : InvokeMethodOnNull

    Any guidance would be greatly appreciated.

    Comment by Hugh O | March 26, 2014 | Reply



  28. […] can also use this script in order to keep track of the […]

    Pingback by Keeping an eye on DFS-R | Surviving Within IT | July 23, 2014 | Reply

  29. This is a great tool, I am using it daily as we stand up our DFS environment.
    However, a few days after adding a new share to replication I am getting the following error.
    "Measure-Object : Input object "68320 68320" is not numeric." The values for the input object will change. It seems that everything is reporting correctly other than that the new replication does not show up on the graph. Thanks!

    Comment by Remington Meeks | February 17, 2015 | Reply

    • I’m also having this problem with the input object is not numeric. The details page of the graph is reporting correctly but the graph has stopped. Thanks!

      Comment by efurlong | March 31, 2015 | Reply

    • I’m having this same problem Measure-Object : Input object “182 182” is not numeric.
      At C:\PS\DfsMonHistory\dfsmonhist.ps1:476 char:103
      + … lder.Folder} | Measure-Object Backlog -sum).Sum

      Comment by MarkoL | December 10, 2015 | Reply

  30. Can’t get the history to work. No errors in powershell or the log file, but the dfsdata.xml file is always overwritten and only ever contains one entry – from the most recent run. If I understand the code correctly the xml should grow with time as it logs the history.

    I’m using PowerShell 2.0 on Server 2008 R2 with Australian date format (dd/MM/yyyy). Any ideas?

    Comment by Craig | May 15, 2015 | Reply

  31. The script seems only to run on an English system, not on a German one. On a German system the file name for DFSDetails*.html cannot be generated. See ScriptRunDate; the problem is somewhere around there.

    Comment by Wolfgang | August 13, 2015 | Reply

    • Hi,

      great script.

      To get it to work in a foreign language (here: German) I had to change three lines.
      The regex check was against English-language output and failed in German or any other language. Test-Connection with the -Quiet option returns a boolean value instead.
      Also, "(Get-Date).DateTime" didn’t work in German.

      old: $Result = PING $Server -n 4
      new: $Result = Test-Connection $Server -quiet

      old: If ($Found -eq "Yes")
      new: If ($Result)

      old: [DateTime]$ScriptRunDate = (Get-Date).DateTime
      new: [DateTime]$ScriptRunDate = (Get-Date)

      Comment by Heinrich | September 19, 2016 | Reply

      • Looks much better :)) Thanks!

        But I still get one error. Here the debug information:

        DEBUG: Receiving job number: 15
        Write-Debug : Cannot bind argument to parameter "Message" because it is null.
        At C:\Temp\DFSR\DFSR-Monitor.ps1:479 char:25
        + Write-Debug $Result.ErrorDetail
        + ~~~~~~~~~~~~~~~~~~~
        + CategoryInfo : InvalidData: (:) [Write-Debug], ParameterBindingValidationException
        + FullyQualifiedErrorId : ParameterArgumentValidationErrorNullNotAllowed,Microsoft.PowerShell.Commands.WriteDebugCommand

        On line 479 the script has as command –> Write-Debug $Result.ErrorDetail

        Comment by Wolfgang | September 20, 2016

  32. I recently figured out how to rename a replication group and replicated folder label using ADSI, which immediately reflects changes in the DFS admin console. However I also discovered (by way of Martin’s solution) that the changes aren’t reflected in the WMI repository until the DFSR service is restarted. See the forum question here:

    https://social.technet.microsoft.com/Forums/windowsserver/en-US/4e434aa6-2c81-40fa-9abe-eb39ad5d4c03/is-it-possible-to-change-the-replicated-folder-name-of-a-replication-group?forum=winserverfiles

    Comment by HeyAdmin | March 30, 2016 | Reply

  33. I am receiving this error when it is processing jobs and totaling the backlog:
    ============================================
    Measure-Object : Input object “1204 1204 1204” is not numeric.
    At D:\DFS-Log\DFSMonitorWithHostiry.PS1:476 char:103
    + … lder.Folder} | Measure-Object Backlog -sum).Sum
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : InvalidType: (1204 1204 1204:PSObject) [Measure-
    Object], PSInvalidOperationException
    + FullyQualifiedErrorId : NonNumericInputObject,Microsoft.PowerShell.Comma
    nds.MeasureObjectCommand
    =============================================

    The script used to function properly, and did for weeks on my server. I love using this and would like to see its function return.
    Thanks!

    Comment by mrburritoman | March 30, 2016 | Reply

    • Did you get this error resolved? I found this script today and I am having the same issue.

      Comment by Allen Bower | April 19, 2016 | Reply

      • Same problem; I solved it by changing line 407:

        BacklogCount = $BacklogConnCount | select-object -first 1

        Comment by R.Vijfschaft | September 27, 2016

      • I’m running a different version here at Athena and ran into that same problem. The question, which I don’t have the answer for, is which VersionVector do you use? On mine I got 3, so I did Select -Last 1, but I’m not sure if that’s right. Should it run against all of the VVs?

        Comment by Martin9700 | September 27, 2016

      • I also got 3; you’re probably right to select the last one (assuming the order in which they are listed is correct). I would not run against all the VVs.

        Comment by R.Vijfschaft | September 28, 2016
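
        The -First 1 fix above can be sketched in isolation. This is a hypothetical reproduction of the multi-version-vector result, not the script's actual data; $BacklogConnCount stands in for the per-connection counts the script collects.

```powershell
# Hedged sketch with made-up data: when WMI returns one backlog figure per
# version vector, $BacklogConnCount holds several copies of the same count,
# and Measure-Object later chokes on the combined string "1204 1204 1204".
$BacklogConnCount = 1204, 1204, 1204

# Collapse to a single numeric value before it reaches Measure-Object
$BacklogCount = $BacklogConnCount | Select-Object -First 1

# Summing now works because the input is a scalar
$Sum = ($BacklogCount | Measure-Object -Sum).Sum
```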

  34. Hello, great script, and much needed after lots of DFSR errors on prod web servers. It runs successfully except for this error:

    DEBUG: –Creating detailed monitoring page…
    Exception calling "Substring" with "2" argument(s): "Index and length must refer to a location within the string.
    Parameter name: length"
    At C:\DFSReports\DFSMonitorWithHistory.ps1:570 char:4
    + { $FileName = $File.Name.Substring(14,2) + "/" + $File.Name.Substring(16,2) + " …
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : ArgumentOutOfRangeException

    Any ideas? Any help much appreciated.

    Adam

    Comment by adam | April 18, 2016 | Reply
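
    The Substring calls quoted in that error assume a fixed file-name layout, so any file that does not match the expected pattern throws ArgumentOutOfRangeException. A hedged guard; the file name below is hypothetical, and only the 14,2 and 16,2 offsets are copied from the error text above.

```powershell
# Hedged sketch: skip files too short for the fixed-offset date parsing
# instead of letting Substring throw. The file name here is made up.
$File = [PSCustomObject]@{ Name = "DFSDetailPage20160418.html" }

if ($File.Name.Length -ge 18)
{
    # Offsets 14,2 and 16,2 copied from the Substring calls in the error
    $FileName = $File.Name.Substring(14, 2) + "/" + $File.Name.Substring(16, 2)
}
else
{
    Write-Warning "Skipping unexpected file name: $($File.Name)"
}
```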

  35. This is pretty sick. Wondering, though: is it easy enough to break the reports up into replication groups/folders?

    It’s currently just a very large list; it would be easier if it was broken up. Also, is it possible to email it as part of the scheduled task?

    Comment by Shank | October 26, 2016 | Reply

    • Not easily, as that’s not what the script is designed for.

      Comment by Martin9700 | October 26, 2016 | Reply

      • Fair enough, but nice work!

        Comment by Shank | October 26, 2016

      • Another question for you: when I browse to the webpage it only shows me the results from the last scheduled task run. How can I also get the last few runs to show up on the site?

        Is that possible, or is there another way to see the older data so I know DFS is replicating?

        Thanks

        Comment by Shank | October 26, 2016

      • Sounds like you might be using an old version, try this: https://github.com/martin9700/New-DFSMonitor

        Comment by Martin9700 | October 27, 2016

      • Thanks for that; it seems I was using the old one.

        But now, with the new one, after editing the required fields I get:
        "TerminatingError(Get-CimInstance): "The running command stopped because the preference variable "ErrorActionPreference" or common parameter is set to Stop: The specified class does not exist in the given namespace.""

        Comment by Shank | October 27, 2016
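
        That "class does not exist" error usually means the queried machine does not expose the DFSR WMI provider, for example because the DFS Replication role is not installed there, or the script is pointed at the wrong server. A hedged probe using the documented root\MicrosoftDFS namespace; the server name is a placeholder.

```powershell
# Hedged sketch: probe the DFSR WMI namespace before running the monitor.
# "DFSR-Member01" is a placeholder for one of your replication members.
try
{
    Get-CimInstance -Namespace "root\MicrosoftDFS" `
                    -ClassName DfsrReplicationGroupConfig `
                    -ComputerName "DFSR-Member01" `
                    -ErrorAction Stop |
        Select-Object ReplicationGroupName, ReplicationGroupGuid
}
catch
{
    Write-Warning "DFSR WMI provider not reachable: $($_.Exception.Message)"
}
```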

  36. Hi

    I seem to have another slight issue: on the reports that run, from one particular server to another there is apparently a backlog of 4000+ thumbs.db files.
    However, if I jump onto a server and do a dfsrdiag backlog, it succeeds with no backlog.

    Thoughts?

    I also have file screening and GPOs set up to prevent thumbs.db from being created, and on the DFS replication group I have added it to the file filter. I have a similar issue with ._DS_Store, but that is between two different servers and only 3 are apparently in backlog.

    I have also been on each server and cleared as many thumbs.db and Mac junk files as I could.

    Cheers

    Comment by Shank | October 31, 2016 | Reply
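
    One possible thing to check (a hedged sketch using the DFSR PowerShell module that ships with Windows Server 2012 R2 and later; the group and folder names are placeholders): confirm that thumbs.db really is in the replicated folder's file filter on the member being queried, since a filter change must itself replicate before backlog counts settle.

```powershell
# Hedged sketch: verify the file filter actually contains thumbs.db.
# "Data-RG" and "Data" are placeholder group/folder names.
Get-DfsReplicatedFolder -GroupName "Data-RG" -FolderName "Data" |
    Select-Object GroupName, FolderName, FileNameToExclude
```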

  37. We have numerous Replicated Folder Names that overlap. Because of that I can’t differentiate between them on the graph. Anything I can do to get RG names instead of RF names?

    Comment by Kevin | November 10, 2017 | Reply

  38. […] DFS Replication Monitoring […]

    Pingback by Adding new node to DFS/DFSR (part 2) - Coders in UA | March 19, 2018 | Reply

  39. I’m back here again, haha. Good job on this one; I was about to start something similar and thought someone must’ve done this already. DFS is too much of a hassle to deal with.

    Also, it appears that line 213 has an incorrect XML test value: it checks $Data.Count, where the XML function writes BacklogCount instead, so the check fails by not returning any values. This breaks the graph/history display functionality, because the script recreates the XML on every run rather than importing it.

    Comment by Maciej | December 11, 2018 | Reply

    • Also, the XML data is broken if you run a single folder in DFS. Maybe now it’ll work.

      I used the below to fix it, but it could also be done with a quick if-more-than-one-result check, etc.

      Line 471:

      #Now add the new data
      #ForEach ($GroupName in $AllGroupNames.Values){
      #$GroupName = ($Group.Split(":"))[1]
      $GroupName = $AllGroupNames.Values
      $UniqueReplFolders = $Output | Where {$_.GroupName -eq $GroupName} | Select Folder -Unique
      #ForEach ($Folder in $UniqueReplFolders){
      $Folder = $UniqueReplFolders.Folder
      $BacklogCount = ($Output | Where {$_.GroupName -eq $GroupName -and $_.Folder -eq $Folder.Folder} | Measure-Object Backlog -sum).Sum
      $NewRGName = $Folder.Folder + ":" + $GroupName
      $Data = New-Object PSCustomObject -Property @{
          RFName = $Folder.Folder
          RGGUID = $NewRGName
          BacklogCount = $BacklogCount
          RunDate = $ScriptRunDate
      }
      #}
      #}

      Comment by Maciej | December 11, 2018 | Reply

      • Line 478 seems to be giving out null results as well: it’s expecting $Folder.Folder to have data, where it doesn’t have that attribute.

        So I dropped it to:

        $BacklogCount = ($Output | Where {$_.GroupName -eq $GroupName -and $_.Folder -eq $Folder} | Measure-Object Backlog -sum).Sum

        Aaaaand, let’s see if it works now.

        Comment by Maciej | December 11, 2018

      • Okay, last update, hopefully.

        So I finally got the graph to work as expected on PowerShell 5.1.

        My pastebin, without the param section: pastebin /AXbeAGkp

        The crucial section change is below:

        #Now add the new data
        #ForEach ($GroupName in $AllGroupNames.Values){
        #$GroupName = ($Group.Split(":"))[1]
        $GroupName = $AllGroupNames.Values
        $UniqueReplFolders = $Output | Where {$_.GroupName -eq $GroupName} | Select Folder -Unique
        #ForEach ($Folder in $UniqueReplFolders){
        $Folder = $UniqueReplFolders.Folder
        $BacklogCount = ($Output | Where {$_.GroupName -eq $GroupName -and $_.Folder -eq $Folder} | Measure-Object Backlog -sum).Sum
        $NewRGName = $Folder + ":" + $GroupName
        $NEWXMLDATA = New-Object PSCustomObject -Property @{
            RFName = $Folder
            RGGUID = $NewRGName
            BacklogCount = $BacklogCount
            RunDate = $ScriptRunDate
        }
        $Data = $data,$NEWXMLDATA
        #}
        #}

        I’m still commenting out instead of using an if statement, so adjust to your environment if you have multiple folders: rename the new data to $NEWXMLDATA, then join it to the existing array with $Data = $Data,$NEWXMLDATA.

        Gonna let it run for a week to get some data; should be good now, though. Now I just need to integrate the health reports into the grid page and deploy to IIS.

        Comment by Maciej | December 11, 2018

      • And I’m back already. I quickly realised the CSV import drops data type information.

        I cut a few more parts out of the HTML write as well, as they aren’t needed and get in the way with a single DFS group.

        New pastebin, now fully functional and graphing data nicely.

        Nm0PaXqC: just change your email at the bottom or comment it out.

        Comment by Maciej | December 13, 2018


Leave a comment