This was a fun little script I threw together after a particular conversation came up at Spiceworks. If you’ve worked with PowerShell long, you’ve used Get-Content to read a file. 99% of the time it’s fine and you just continue on with life. This blog post is about that 1% when Get-Content is SLOW. The .NET IO.StreamReader is where people turn to speed things up, so I decided to create a function around it that works much like Get-Content does. This is its story.
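The core of the StreamReader approach looks something like the sketch below; the path and loop body are placeholders, not the actual function from the post.

```powershell
# Minimal sketch: read a file line by line with System.IO.StreamReader.
# 'C:\Temp\large.log' is a placeholder path. ::new() assumes PowerShell 5+;
# on older versions, use New-Object System.IO.StreamReader instead.
$reader = [System.IO.StreamReader]::new('C:\Temp\large.log')
try {
    while (-not $reader.EndOfStream) {
        $line = $reader.ReadLine()
        # process $line here (match a pattern, count, etc.)
    }
}
finally {
    $reader.Close()   # always release the file handle
}
```

The speed gain comes from reading the file as a raw stream instead of building a pipeline object for every line, which is what makes Get-Content feel slow on very large files.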
By now, you may have noticed I’m always on the lookout for better-performing code. This has turned out to be a good habit now that I’m working at athena health, as the sheer scale of things is so much larger than places I’ve worked in the past. One piece I’ve never been able to speed up, though, is iterating through folders and files. Nicolas1847, a PowerShell scripter on Spiceworks, has come up with an ingenious method to get simple directory information using Robocopy (of all things), and a colleague at athena health likes to shell out to CMD.exe and use the old DIR command. But are they faster? And if so, which one?
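For context, the Robocopy trick hinges on its /L switch, which lists what *would* be copied without copying anything. A rough sketch, with illustrative switches that may differ from the ones in the post:

```powershell
# Sketch of the Robocopy listing trick (flags are illustrative):
#   /L    list only, copy nothing        /S    include subdirectories
#   /NJH  no job header                  /NJS  no job summary
#   /FP   full file paths               /NDL  no directory lines
#   /NC   no file-class column          /BYTES exact sizes
# The destination ('NULL' here) is never written to because of /L.
robocopy 'C:\Temp' 'NULL' /L /S /NJH /NJS /FP /NC /NDL /BYTES |
    Where-Object { $_ -match '\S' } |     # drop blank padding lines
    ForEach-Object { $_.Trim() }          # size + full path, one per file
```

Because Robocopy does its directory walking in native code, it can enumerate huge trees far faster than Get-ChildItem, at the cost of having to parse text output afterwards.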
I expect, if you got the right group of people together, you could have a good old Mac vs. PC-style argument over the use of PowerShell objects (PSObjects) and hashtables. And I’ll be honest: while I’ve used hashtables a lot for splatting, I’ve used them very little for anything else. Time to look at the two and figure out which is better, once and for all!
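To frame the comparison, here are the two contenders side by side carrying the same data (the property names are just examples):

```powershell
# A hashtable: fast key lookups, no fixed "shape"
$hash = @{ Name = 'server01'; Status = 'Online' }
$hash['Status']              # access by key

# A custom object: real properties, plays nicely with
# Format-Table, Export-Csv, Sort-Object, and the rest of the pipeline
$obj = [PSCustomObject]@{ Name = 'server01'; Status = 'Online' }
$obj.Status                  # access by dot notation
```

The hashtable is essentially a lookup structure, while the PSObject is built for the pipeline, which is exactly why the performance question is worth testing.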
You know I love working with objects in PowerShell, but are some methods better at building them than others? I ran into an interesting technique recently and wanted to test it against my normal way of doing things. Read on to see which technique is faster.
An interesting problem came up at Spiceworks the other week, and it was all about deleting empty directories. Locating empty folders and removing them is actually pretty easy, but the complication comes when you have nested folders, all of which are empty. The most obvious scripting method doesn’t work in that case. Let’s see how I accomplished this task.
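One common way to handle the nested case, sketched below, is to process directories deepest-first, so a child is removed before its parent is checked; this may or may not match the post's exact solution. The root path is a placeholder, and -Directory assumes PowerShell 3.0 or later.

```powershell
$root = 'C:\Temp\Cleanup'   # placeholder path

Get-ChildItem -Path $root -Recurse -Directory |
    Sort-Object { $_.FullName.Length } -Descending |            # deepest paths first
    Where-Object { -not (Get-ChildItem -Path $_.FullName -Force) } |  # truly empty?
    Remove-Item
```

Sorting by path length is a cheap stand-in for depth: a folder that only contained empty subfolders becomes empty itself by the time the loop reaches it, which is exactly what the naive single-pass approach misses.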
It’s been a tough couple of weeks, let me tell you! I had a cold that eventually dropped into my lungs and became pneumonia. I don’t know about you, but I don’t have much interest in doing anything when I’m sick, not even PowerShell! Also, at work, we’re ramping up to migrate from our current ERP system to one from SAP, and I expect that will be eating up a ton of time too. Not to mention the need to finish my Exchange 2010 migration, create a SharePoint 2010 test environment from our production one, and juggle half a dozen other projects going on at the same time. I hope to keep fitting my scripts in amongst all this, as I have to admit this is where my IT passion is right now.
I was reading another blog from Tome Tanasovski, where he was doing a simple series on learning PowerShell. One of his suggestions was taking a large dictionary file, finding all of the five-letter words, and locating the palindromes. I thought this would be interesting, and another opportunity to look at performance and PowerShell.
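The exercise itself can be sketched in a short pipeline; the dictionary filename is a placeholder, and this is just one straightforward way to do it before any performance tuning:

```powershell
# Pull five-letter words from a dictionary file and keep the palindromes,
# i.e. words that read the same reversed. 'words.txt' is a placeholder path.
Get-Content 'words.txt' |
    Where-Object { $_.Length -eq 5 } |
    Where-Object {
        $chars = $_.ToCharArray()
        [array]::Reverse($chars)          # reverse in place
        $_ -eq (-join $chars)             # compare word to its reverse
    }
```

With a five-letter word you could also just compare the first character to the fifth and the second to the fourth, which avoids building the reversed string entirely; that kind of shortcut is where the performance angle comes in.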
I read somewhere that PowerShell was built for reliability, not for performance, and that’s really true. I ran into this a lot on the DFS Monitor project, where running queries against 40,000 records in memory was taking 1.2 seconds or so. But there are a few things we can do to improve performance if needed. The simple fact is most chores you’ll be doing with PowerShell will not run against these performance limitations, and you’ll be more than happy with your scripts.
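Whenever performance does matter, the first tool to reach for is Measure-Command, which times a script block so you can compare approaches instead of guessing. The two pipelines below are just illustrative stand-ins:

```powershell
# Time the pipeline version vs. a plain foreach loop over the same work.
(Measure-Command { 1..10000 | ForEach-Object { $_ * 2 } }).TotalMilliseconds
(Measure-Command { foreach ($i in 1..10000) { $i * 2 } }).TotalMilliseconds
```

Measuring first keeps you from "optimizing" the part of a script that was never the bottleneck.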
But an interesting thread on Spiceworks came up and really gave me the opportunity to test some things, and I thought I’d talk about it here.