Category Archives: Powershell

WSUS Automation

So there is a new blog that SCCM admins should take a peek at. Bryan Dam is my hero for having the time to combine most WSUS maintenance into one script. I encourage you to go take a look at his blog and the presentation he did about the script.

Seriously good info. As a thank you to Bryan for saving me time I am going to respond to a statement in the blog post on his script. “Once an update has been declined in WSUS and synced to Configuration Manager I honestly don’t know how you bring it back.  I’m … sure … there’s a way somehow.”

Well, as someone who had to make aggressive declines to keep the WSUS catalog at a reasonable size, I was forced to learn how. Once you know how to restore a declined update you can decline without fear. So how do you do it? Approve the update in WSUS and sync. But you say it is not that simple: "I tried it and it did not show up." Well, when has the WSUS and Configuration Manager interface ever been simple? The trick is that the sync must be a full sync, not a delta sync. To trigger a full sync, run this PowerShell on your primary site server:
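A minimal sketch, assuming the ConfigurationManager console (and its PowerShell module) is installed on the site server and your site code is PS1 (a placeholder):

```powershell
# Sketch: trigger a full software update sync instead of the normal delta sync.
Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location 'PS1:'                     # placeholder site code
Sync-CMSoftwareUpdate -FullSync $true
```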

One other thing to note when re-approving in WSUS: Unapproved is a perfectly acceptable status as far as SCCM is concerned. Basically, everything that is not declined will sync to SCCM. By approving the patch as Unapproved you return it to the normal state that SCCM maintains. If you have any systems that patch directly from the WSUS server backing your Software Update Point, then approve as needed; it will not impact SCCM.

Quickly remove mapped connections

Just a quick tip to remove mapped connections with PowerShell and NET USE

To see your connections
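For example (the drive letter is only an example; the same commands work from PowerShell or a command prompt):

```powershell
# List your current mapped connections
net use

# Remove a single mapped drive
net use Z: /delete

# Or remove all mapped connections without being prompted
net use * /delete /y
```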


Everybody out of the pool, the Application Pool

Hi, my name is MrBoDean and I need to confess that I am not running a supported version of SCCM. Yes, I am migrating to Current Branch, but the majority of my systems are still on SCCM 2012 R2 without SP1. The reason why is quite boring and, I am sure, repeated at many companies, but it takes a while to tell. So for the past year it feels like duct tape and baling wire are all that is keeping the 2012 environment up while we try to upgrade between a string of crises. It is a shame when the Premier Support engineers are on a first-name basis with you.

So Monday night I was called at 2 AM because OSD builds were failing. It happens, and most times a quick review of the log files points you in the right direction. Not so much this time around. The builds were failing to even start; every build was failing with the error

I have 4 management points; they cannot all be down. They are up and responding when I test them with
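One quick check is to hit the MPLIST endpoint on each one (a sketch; the server names are placeholders, and it assumes HTTP rather than HTTPS):

```powershell
# Sketch: quick health check of each management point via the MPLIST endpoint
$mps = 'MP1','MP2','MP3','MP4'   # placeholder server names
foreach ($mp in $mps) {
    $r = Invoke-WebRequest -Uri "http://$mp/sms_mp/.sms_aut?mplist" -UseBasicParsing
    '{0} : {1}' -f $mp, $r.StatusCode
}
```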

OK, back to the smsts.log for the client that is failing. It starts up fine and even does its initial communication for MPLocation and gets a response

It picks the 1st MP in the list and sends a Client Identity Request. That fails quickly with a timeout error.

While it does retry, it only submits the request to one MP. The retry fails with the same error, and the build fails before it even starts. Nothing stands out initially and, being a little groggy, I go for the old standby of turning it off and back on again. MP1 was the one getting the timeouts, so it gets the reboot. After the reboot we try again and get the same error. At this point a couple of hours have passed, so the overnight OSD builds are canceled and I grab a quick nap, planning to start again first thing in the morning. Well, that was the plan until the day crew starts trying to do OSD builds and everything everywhere is failing. So I open a critical case with Microsoft.

While waiting for the engineer to call I keep looking at logs, trying to identify what is going on. I RDP into MP1 to check the IIS configuration and notice that the system is slow to launch applications. I take a peek at Task Manager and see that RPC requests were consuming 75% of the available memory. To reset those connections and get the system responsive quickly, down it went for another reboot. Once it came back up, I took a chance and tried to start an OSD build. This time it worked. So the good news goes out to the field techs. Now I just need to figure out what happened so I can explain why; management always needs to know why and what you are doing to keep it from happening again. About this time the Microsoft engineer calls and we lower the case to a normal severity. I capture some logs for him and, to his credit, he quickly finds that MP1 was returning a 503.2 IIS status when the overnight builds were failing. To reduce the risk of this occurring again, we set the connection pool limit to 2000 for the application pool "CCM Server Framework Pool" on the management points. I get the task of monitoring to make sure the issue does not return, and we agree to touch base the next day.

Well, I am curious about what led to this and how long it has been going on. Going back over the past couple of days I see a clear spike in the 503 errors Monday evening, starting with a few thousand and ramping up to over 300,000 by Tuesday morning. While I recommend using Log Parser to analyze the IIS logs, if you are just looking for a count of a single status code you can get it with PowerShell. This will give you the count of the 503 status with a subcode of 2. (Just be sure to update the log file name to the date you are checking.)
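A rough version of that one-liner (the log folder and file name here are assumptions; point it at the W3SVC directory and date for your site):

```powershell
# Count 503 responses with sub-status 2 in one IIS log file.
# In the default W3C format the fields run ... sc-status sc-substatus sc-win32-status ...,
# so ' 503 2 ' matches status 503 with sub-status 2.
$log = 'D:\inetpub\logs\LogFiles\W3SVC1\u_ex170912.log'   # adjust site folder and date
(Select-String -Path $log -Pattern ' 503 2 ').Count
```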

While I still have not found out why, at least I know what was causing the timeout error. With that knowledge, I finally get some sleep. Surprised that no one called to wake me because the issue was occurring again, I manage to get into the office early, start looking at the logs again, and see another large spike in the 503 errors. I do a quick test to be sure OSD is working, and it is. A quick email to the Microsoft engineer and some more log captures lead to an interesting conversation.

We check to make sure that the clients are using all the management points with this sql query

And we see that the clients are using all the management points, but MP1 and MP4 have about twice as many clients as the other two management points. Next we check the number of web connections both of these servers have with netstat in a command prompt
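The netstat check looks roughly like this (counting established connections on 443; use :80 if your MPs are HTTP only):

```
netstat -an | find ":443" | find /c "ESTABLISHED"
```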

*Just in case you try to run this command in PowerShell, you will find that the PowerShell parser will evaluate the quotes and cause the find command to fail. To run the command in PowerShell, escape the quotes.
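One way that works is to wrap the search strings so the quotes survive the trip to find.exe, for example:

```powershell
# Same count from PowerShell: single-quoting the double-quoted strings (or doubling them, e.g. """:443""")
# keeps the quotes that find.exe expects.
netstat -an | find '":443"' | find /c '"ESTABLISHED"'
```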

This showed that MP1 and MP4 were maintaining around 2000 connections each. With an app pool connection limit of 2000, any delay in processing requests can quickly lead to the limit being exceeded, and lots of 503 errors will result. So this time the connection pool limit was set to 5000. But a word of caution before you do this in your environment: when a request is waiting in the queue, by default it must complete within 5 minutes or it is kicked out and the request will have to be retried. Be sure that your servers have the CPU and memory resources to handle the additional load that this may cause.
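If you would rather script the change than click through IIS Manager, something like this should work. It assumes the limit being discussed maps to the application pool's queueLength setting and that the WebAdministration module is available on the MP:

```powershell
# Sketch: raise the queue length on the CCM Server Framework Pool application pool
Import-Module WebAdministration
Set-ItemProperty -Path 'IIS:\AppPools\CCM Server Framework Pool' -Name queueLength -Value 5000
Get-ItemProperty  -Path 'IIS:\AppPools\CCM Server Framework Pool' -Name queueLength
```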

In SCCM 2012 R2 pre-SP1 there is no preferred management point. Preferred management points were added in SP1 and improved in Current Branch to be preferred by boundary groups. In 2012 your first management point is the preferred MP until the client location process rotates the MP or the client is unable to communicate with an MP for 40 minutes. In this case MP1 is the initial MP for all OSD builds because it is always first in an alphabetically sorted list. MP4 is the default MP for the script used for manual client installs. If my migration to Current Branch were done I would be able to assign management points to boundary groups and better balance out the load. But until then I am tweaking the connection limit on the application pool to keep things working. Hopefully you are not in the same boat, but if you are, maybe this can help.


Configuration Manager TP 1707 – Run Scripts

I want to talk a bit about the new Run Script feature that was added in 1706. In Technical Preview 1707 it gained the option to add parameters to a script. This has the potential to be a huge benefit to many users of Config Manager and is a great example of SaaS quickly delivering functionality.

Creating a script is very straightforward; for this example it is just a query of WMI for Win32_ComputerSystem.
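Something as simple as this will do:

```powershell
# Example script body: return the Win32_ComputerSystem instance
Get-WmiObject -Class Win32_ComputerSystem
```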

After the script is created, you must approve it. (There is a hierarchy setting that allows or prevents authors from approving their own scripts; allowing it should only be done in a test environment.) After the script has been approved it can be run. To run a script, go to a collection with the systems you would like to target. You can run the script against the collection as a whole or against individual systems in the collection. (You must show the collection membership to target individual systems; the Run Script option is not available via the default device view.)

Next select the script to run

To view the results of the script execution you will need to use Script Status in the Monitoring view.

Any output from the script is stored in Script Output. For a good peek at what is going on behind the scenes, check out this great write-up from the 1706 TP by Tom Degreef

Now for the new stuff. Parameters!! Create a new script using the same simple wmi query with a parameter.
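A parameterized version could look like this (the parameter name is just an example):

```powershell
# Same query, now taking the WMI class name as a parameter
Param(
    [string]$ClassName
)
Get-WmiObject -Class $ClassName
```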

If you click next you will be able to set the default value for the parameter.

BUG… errr feature alert… If you click next or back without editing the parameter value the edit button is no longer present.

Not to worry you will be able to edit the parameter at run time.

When you run a script with a parameter you get a new dialog that allows you to edit the parameter values.

If you were not able to, or chose not to, set the value when creating the script, click on the parameter name and click Edit. Be sure the parameter name is highlighted or the Edit button will not do anything. I spent a bit of time thinking how silly it was to not be able to edit a parameter more than once. Rechecking my steps proved that was not the case.

Set the parameter value and let the script run.

Hopefully this will get you started with running scripts with parameters

Cleaning Up WSUS based on what you are not deploying in Configuration Manager

Let me start with this statement: I wish I had something other than WSUS stuff to talk about. It has been another long week and more issues related to patching. Even with all the other tips I have shared, we experienced major issues getting patches applied. In case you are not aware, the Windows Update Agent can have a memory allocation error. The good news is that if you keep your systems patched there is a hotfix to address the issue on most systems. The bad news is that the patch for the issue was not made available for the Standard editions of Windows Server 2008 and 2012. If you have these operating systems installed as 64-bit versions with plenty of memory, you may not see the issue, or it may just be transitory and clear up on the next update scan. I am not that lucky and have lots of Windows 2012 Standard servers with 2GB of memory. The strange part is that some systems would complete a scan and report success, only to then report corruption of the Windows Update data store. This would force the next update scan to be a full scan that rebuilds the local data store, and the cycle of issues would start again. The fun part is that while this is occurring, if you deploy patches via Configuration Manager the client will fail to identify any patches to apply and will report that it is compliant for the updates in the deployment. The next successful software update scan would then find the patches missing and the system would return to a non-compliant state. (This is justification for external verification of patch installs from whatever product you use to install patches. But that is a story for another day.) So back to the post from Microsoft on the issue: basically, if you cannot apply the hotfix you have two options.

  1. Move wuauserv (Windows Update Agent) to its own process. (But on systems with less than 4GB of memory this will not gain you much; it can be counterproductive and impact applications running on the server.)
  2. Cleanup WSUS

For my issue, adding memory to the clients was recommended, with the server team to make the change. But one of the joys of working in a large enterprise is that this will take a while (not weeks... months at least). In the interim, I need to do everything possible to decline updates in WSUS to reduce the catalog size. At the start of these steps I had ~6200 un-declined updates in WSUS. The guidance I got from Microsoft was to target between 4000 and 5000 updates in the catalog, but the lower the number the better off we would be.

Step one: review the products and categories that we sync. This was easy because we already review this routinely. There was not much to change, but I did trim a few and could decline a hundred or so updates. Not much, but everything helps.

Step two: review the superseded updates. Due to earlier patching issues our patching team had requested that we keep superseded updates for 60 days. Now, this was before updates had moved to the cumulative model, and at that point ensuring the current security patches were applying was critical. (Thank you WannaCry and NotPetya.) So I checked to see which updates had been superseded for 30 days. I found ~1300; checking for less than 30 days only found one more. Big win there, so after declining those the WSUS catalog was down to ~4700 updates. That got us under the upper limit of the suggested target. After triggering scans on the systems having issues and reviewing the status, it did help, but not enough to call it a significant improvement.
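For what it is worth, that kind of decline can be scripted with the WSUS cmdlets. This is only an approximation: WSUS does not directly expose how long an update has been superseded, so the sketch below falls back to the update's creation date. Test carefully before declining anything in production.

```powershell
# Sketch: decline superseded updates that are more than 30 days old.
# Assumes the UpdateServices module (installed with the WSUS console) and a local WSUS server.
$wsus = Get-WsusServer
Get-WsusUpdate -UpdateServer $wsus -Approval AnyExceptDeclined |
    Where-Object { $_.Update.IsSuperseded -and $_.Update.CreationDate -lt (Get-Date).AddDays(-30) } |
    Deny-WsusUpdate
```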

Step three: break out the coffee and dig in. Wouldn't it be great to see which patches had not been declined and are not deployed in Configuration Manager? It is easy enough to see what is not deployed in the SCCM console, but you have to look up each update in WSUS to see if it has been declined. At this point I am on the hook to stay up, monitor the patch installs, and help the patching team; there are a couple of hours to kill between the end of the office day and when the bulk of our patch installs occur. So I started poking around to see what I could do to automate the comparison between Configuration Manager and WSUS. Our good friend PowerShell to the rescue. The first thing is to get the patches from SCCM.
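The query boils down to something like this (the site code and SMS Provider server name are placeholders):

```powershell
# Sketch: pull every software update the site knows about from the SMS Provider,
# then look at the first one to see the available properties.
$siteCode   = 'PS1'                  # placeholder site code
$siteServer = 'sccm01.contoso.com'   # placeholder SMS Provider server
$updates = Get-WmiObject -ComputerName $siteServer -Namespace "root\SMS\site_$siteCode" -Class SMS_SoftwareUpdate
$updates | Select-Object -First 1 | Format-List *
```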

This connects to your server and gets all the patches listed in the console and selects the first one so you can take a look at all the properties. I am excluding a few with identifying information but you will see something similar.

Looks great, and there are lots of things to use to select the patches to check on. However, if you use a query or filter you will find that a lot of those properties are lazy properties. If you pull all the properties for the thousands of patches, the script will run a looooong time. However, if you do a select on the object you will get the value reported from the query, and you can select what you want using Where-Object in PowerShell. I decided that the following properties would allow me to evaluate the patches: LocalizedDisplayName, CI_UniqueID, IsDeployed, NumMissing

Now to get patches that are not deployed and are not required
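Building on the $updates collection from the sketch above:

```powershell
# Not deployed and not required by any client
$updates |
    Where-Object { -not $_.IsDeployed -and $_.NumMissing -eq 0 } |
    Select-Object LocalizedDisplayName, CI_UniqueID, IsDeployed, NumMissing
```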

And patches that are not deployed and are required
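Same idea, just flipping the NumMissing check:

```powershell
# Not deployed but still required on one or more clients
$updates |
    Where-Object { -not $_.IsDeployed -and $_.NumMissing -gt 0 } |
    Select-Object LocalizedDisplayName, CI_UniqueID, IsDeployed, NumMissing
```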

Using this information you can settle on criteria for selecting the patches to decline. I settled on patches that are not required, are not deployed, and have been available for more than 30 days. You can download the script from
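The heart of that comparison, roughly sketched (this is not the downloadable script; it assumes the $updates collection from earlier, the WSUS console tools on the machine running it, and it skips the 30-day age check for brevity):

```powershell
# Sketch: decline updates in WSUS that Configuration Manager shows as not deployed and not required.
# CI_UniqueID in Configuration Manager lines up with the WSUS UpdateId GUID.
$wsus    = Get-WsusServer
$targets = $updates | Where-Object { -not $_.IsDeployed -and $_.NumMissing -eq 0 }
foreach ($t in $targets) {
    $wsusUpdate = Get-WsusUpdate -UpdateServer $wsus -UpdateId $t.CI_UniqueID -ErrorAction SilentlyContinue
    if ($wsusUpdate -and -not $wsusUpdate.Update.IsDeclined) {
        $wsusUpdate | Deny-WsusUpdate
    }
}
```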


Another ~2500 or so declined, and now the WSUS catalog is down to ~2200 patches. This did help improve the scans and patch deployments for all but the servers with 2GB of memory. But patches for those can be delivered via a software distribution package until all the memory upgrades are completed.



Config Migration Tip – Use PowerShell to export and import Security Roles

I have been doing a lot of migration prep work and wanted to share a big time saver for moving security roles. You can use PowerShell to export and import security roles. If you have lots of custom roles this is a huge time saver.

To export all of the custom roles
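The export half, sketched with the ConfigurationManager cmdlets (assumes the module is loaded, you are connected to the site drive, and the export folder already exists):

```powershell
# Sketch: export every custom (non built-in) security role to its own XML file
$exportDir = 'C:\Temp\Roles'   # placeholder export folder
Get-CMSecurityRole | Where-Object { -not $_.IsBuiltIn } | ForEach-Object {
    Export-CMSecurityRole -InputObject $_ -Path (Join-Path $exportDir "$($_.RoleName).xml")
}
```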

After you collect all the xml files for the roles and are ready to import them use this
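And the import on the destination site (same assumptions, plus the XML files copied to the folder shown):

```powershell
# Sketch: import each exported role XML on the new site
Get-ChildItem 'C:\Temp\Roles' -Filter *.xml | ForEach-Object {
    Import-CMSecurityRole -XmlFileName $_.FullName
}
```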


Prepping SCCM Boot Disks to use with WDS or 3rd Party PXE

I have been busy after taking an extended vacation and then catching up at work, but a couple of folks on Twitter have been sharing about using the SCCM boot disks with WDS. This is something that I do and learned from Johan Arwidmark. If you work with Configuration Manager you may have heard of him before. 🙂 Here is his current post on the subject, and it has a link to Zeng Yinghua (Sandy)'s post on doing this with iPXE rather than WDS.

What I have to add is a quick PowerShell script to automate prepping the boot disk for use outside of an integrated SCCM PXE DP.

Step 1 – Create the boot disk as an ISO

Step 2 – Extract the ISO contents.

Because I do this for a couple of boot disks in several environments, I name the ISO and the directory it is extracted to something that helps keep track of them easily. For example Lab_x64, Lab_x86, Prod_x64, etc.

The script below will use the folder name that you extract the files to name the wim file as well. While not a big deal if you only do one disk at a time, it helps when processing several at the same time.

Step 3 – Prep the boot.wim file for use via PXE.

This is the script that I use

This will go through each of the directories in the $sources variable and mount the boot wim with DISM. If there is not an \SMS\Data folder it will create one. Next it copies the contents of the SMS\Data folder from the directory extracted from the ISO into the mounted wim file. After that the script does an optional step: for my environment, when we use WDS there is a need to execute a pre-execution step. The files for this are staged in the Tools directory, so the script creates that directory and copies the files. Next it copies the TsConfig.ini for the pre-execution step; this is also optional. The script then unmounts the wim file and commits the changes.
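A rough equivalent of those steps looks like this. It is a sketch, not the original script; the folder layout, staging path, and pre-execution file locations are all assumptions to adjust for your environment.

```powershell
# Sketch: inject the SMS\Data content from each extracted ISO into its boot.wim,
# then stage a copy of the wim named after the source folder (e.g. Lab_x64 -> Lab_x64.wim).
$sources = Get-ChildItem 'D:\SCCM_export' -Directory   # one folder per extracted ISO
$mount   = 'C:\Mount'
$staging = 'D:\Staging'
$preExec = 'D:\PreExec'                                 # optional pre-execution files + TsConfig.ini
New-Item $mount, $staging -ItemType Directory -Force | Out-Null

foreach ($src in $sources) {
    $wim = Join-Path $src.FullName 'sources\boot.wim'
    Mount-WindowsImage -ImagePath $wim -Index 1 -Path $mount | Out-Null

    # Make sure \SMS\Data exists in the wim, then copy in the data extracted from the ISO
    New-Item (Join-Path $mount 'SMS\Data') -ItemType Directory -Force | Out-Null
    Copy-Item (Join-Path $src.FullName 'SMS\Data\*') (Join-Path $mount 'SMS\Data') -Recurse -Force

    # Optional: stage the pre-execution files in a Tools directory and drop in TsConfig.ini
    if (Test-Path $preExec) {
        New-Item (Join-Path $mount 'SMS\Tools') -ItemType Directory -Force | Out-Null
        Copy-Item (Join-Path $preExec '*') (Join-Path $mount 'SMS\Tools') -Recurse -Force
        Copy-Item (Join-Path $preExec 'TsConfig.ini') $mount -Force -ErrorAction SilentlyContinue
    }

    # Unmount and commit the changes
    Dismount-WindowsImage -Path $mount -Save | Out-Null

    # Copy the updated wim to the staging folder, named after the source directory
    Copy-Item $wim (Join-Path $staging "$($src.Name).wim") -Force
}
```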

After all of the boot.wim files have been updated the script will copy each to a staging directory and name the files based on the name of the source folder. So Lab_x64.iso was extracted to d:\SCCM_export\Lab_x64 and the boot.wim from that directory is named Lab_x64.wim.

Step 4 – Copy to your WDS server and add to the menu.

Thanks Johan and Sandy for posting and reminding me about this.


Distribution Points not reporting Usage data

I am scratching my head about what happened on this issue, but let me explain. We had an issue come up in our SCCM RAP. (If you are a Premier customer, I highly recommend the RAP as a Service. Get it, use it, and use it often.) There were about 30ish distribution points that were not reporting usage stats for our 2012 R2 Configuration Manager site. A quick check showed that the distribution points were alive and well, but that the scheduled task that reports the usage statistics was gone. If you need a good primer on how to check out a distribution point, see this post from Scott's IT Blog. To resolve this I simply exported the task from a working distribution point and imported it on the systems where it was missing. To be honest, though, it did not get fixed everywhere. A couple of months go by and we rerun the RAP and look at the results. Now there are over 300 distribution points not reporting statistics because of a missing scheduled task. This makes me beg and plead to speed up our upgrade project for Current Branch. But until then I have to keep everything going, so a little PowerShell to the rescue.

First I chose to export the existing scheduled task from a working server and save it as c:\temp\Content Usage.xml
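That export can be scripted as well (a sketch; adjust -TaskPath if the task on your distribution points lives under a subfolder rather than the root):

```powershell
# Sketch: export the working task definition to XML (task name assumed from the file name above)
Export-ScheduledTask -TaskName 'Content Usage' -TaskPath '\' | Out-File 'C:\temp\Content Usage.xml'
```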

The RAP web site is great for reporting the issue but not so much for getting the details in a way that is easily usable in a script. So here is a SQL query to identify distribution points not reporting usage data.

This will give you a list of server names that you can save in a file. Now for the PowerShell to recreate the scheduled task and run it.
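A sketch of that piece, assuming WinRM/CIM access to the distribution points, the XML exported above, and a text file holding the server names from the query:

```powershell
# Sketch: recreate the missing scheduled task on each DP from the exported XML, then kick it off
$xml     = Get-Content 'C:\temp\Content Usage.xml' -Raw
$servers = Get-Content 'C:\temp\dp_list.txt'          # placeholder file holding the server names
foreach ($server in $servers) {
    $cim = New-CimSession -ComputerName $server
    Register-ScheduledTask -CimSession $cim -TaskName 'Content Usage' -Xml $xml -Force | Out-Null
    Start-ScheduledTask    -CimSession $cim -TaskName 'Content Usage'
    Remove-CimSession $cim
}
```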

Give it a little time and rerun the SQL query to verify that the systems are reporting usage data and are being removed from the report.


Powershell Saturday – Nashville 

MVP Mick Pletcher is organizing a PowerShell Saturday event for Nashville. It is still in the planning stages but I will be there. Once the details are finalized I will share them. Hopefully if you are in the area you can join in on the fun. These events are a great way to learn and interact with MVPs and others passionate about using PowerShell. The planned date is Saturday, June 24, 2017. I will be speaking on PowerShell for SCCM.

Updating SCCM Client Logging Options

If you spend any time supporting System Center Configuration Manager, you will develop a special love for log files. Often I find that I need to change the logging options on a small group of clients to troubleshoot. It could be because I need a larger file size or need to enable debug logging. While making the changes is fairly painless on one system, doing it on several can drive me to drink several cups of coffee. (To be fair, just going to work drives me to drink... coffee.) Here is the script I use to make the changes and then put everything back to normal afterwards. The script is below, but I am including the PowerShell Gallery and GitHub links as well.
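For context, the client logging options live under the CCM Logging registry key. A minimal sketch of turning up logging on a list of machines (the computer list, sizes, and restarting the agent service are assumptions, not the published script):

```powershell
# Sketch: turn up client logging on a handful of systems.
# LogLevel 0 = verbose (1 is the default); restart the SMS Agent Host so the change takes effect.
$computers = Get-Content 'C:\temp\clients.txt'   # placeholder list of client names
Invoke-Command -ComputerName $computers -ScriptBlock {
    $key = 'HKLM:\SOFTWARE\Microsoft\CCM\Logging\@Global'
    Set-ItemProperty -Path $key -Name LogLevel      -Value 0
    Set-ItemProperty -Path $key -Name LogMaxSize    -Value 5242880   # 5 MB per log
    Set-ItemProperty -Path $key -Name LogMaxHistory -Value 2
    Restart-Service CcmExec
}
```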

Powershell Gallery