Management Point Troubleshooting

Just a quick note if you are looking into issues with a management point responding with a 500 error for policy requests.

If http://servername/SMS_MP/.SMS_AUT?MPCERT is ok

and http://servername/SMS_MP/.SMS_AUT?MPLIST gives a 500 error.

Check your database; it could be down. Not how I wanted to end a Monday, but at least in this case it was a SAN network issue impacting the database server. Once that was resolved the SCCM site came right back up.
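
If you want a quick way to run both checks, here is a small PowerShell sketch (replace servername with your management point; Invoke-WebRequest throws on a 500, which the catch block surfaces):

```powershell
# Hit both MP health URLs and report the status code or error for each.
$mp = 'servername'  # your management point
foreach ($endpoint in 'MPCERT', 'MPLIST') {
    try {
        $response = Invoke-WebRequest -Uri "http://$mp/SMS_MP/.SMS_AUT?$endpoint" -UseBasicParsing
        Write-Host "$endpoint -> $($response.StatusCode)"
    }
    catch {
        # A 500 from MPLIST lands here; time to check the site database.
        Write-Host "$endpoint -> $($_.Exception.Message)"
    }
}
```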


MVP Days – New Orleans

Come join me at MVP Days New Orleans on May 12, 2017. I have submitted to speak and will hopefully be presenting. But after attending the Orlando MVP Day in 2016, I decided to attend every one of these that I can work into my schedule. So regardless of the outcome of my session submission, I will be there. The biggest reason is the time that is set aside for interaction with all of the speakers at the end of the day. This unstructured session allows you to ask questions and network with everyone. That is hands down worth your time. So if you are able, register and take part in a great community event.

Distribution Points not reporting Usage data

I am scratching my head about what happened on this issue, but let me explain. We had an issue come up in our SCCM RAP. (If you are a Premier customer, I highly recommend the RAP as a service. Get it, use it, and use it often.) There were about 30 distribution points that were not reporting usage stats for our 2012 R2 Configuration Manager site. A quick check showed that the distribution points were alive and well, but the scheduled task that reports the usage statistics was gone. If you need a good primer on how to check out a distribution point, see this post from Scott's IT Blog. To resolve this I simply exported the task from a working distribution point and imported it on the systems where it was missing. To be honest, though, it did not get fixed everywhere. A couple of months went by, we reran the RAP, and looked at the results. Now there were over 300 distribution points not reporting statistics because of a missing scheduled task. This makes me beg and plead to speed up our upgrade project for Current Branch. But until then I have to keep everything going, so a little PowerShell to the rescue.

First I chose to export the existing scheduled task from a working server and saved it as c:\temp\Content Usage.xml.
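
A minimal sketch of the export step, assuming the task is named Content Usage and lives under the \Microsoft\Configuration Manager\ task path in your environment (verify both in Task Scheduler first):

```powershell
# Export the known-good task definition to XML.
# Task name and path are assumptions; adjust to match your environment.
$xml = Export-ScheduledTask -TaskName 'Content Usage' -TaskPath '\Microsoft\Configuration Manager\'
$xml | Out-File -FilePath 'C:\temp\Content Usage.xml' -Encoding Unicode
```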

The RAP web site is great for reporting the issue but not so much for getting the details in a way that is easily usable in a script. So here is a SQL query to identify distribution points not reporting usage data.
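
As a sketch of the shape that query took, assuming the RAP report reads a usage summary view (the usage view name below is an assumption; check the report's dataset for the real source in your site database):

```sql
-- Distribution points with no usage rows in the last 30 days.
-- v_DistributionPoints is a standard view; vSMS_DistributionPointUsageSummary
-- is an assumed name for the usage data source.
SELECT DISTINCT dp.ServerName
FROM v_DistributionPoints dp
LEFT JOIN vSMS_DistributionPointUsageSummary u
       ON u.ServerName = dp.ServerName
      AND u.SummaryDate > DATEADD(DAY, -30, GETDATE())
WHERE u.ServerName IS NULL
ORDER BY dp.ServerName;
```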

This will give you a list of server names that you can save in a file. Now for the powershell to recreate the scheduled task and run it.
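
A sketch of that step, assuming a plain-text server list at C:\temp\servers.txt and the XML exported earlier (task name and path are assumptions from my environment):

```powershell
# Recreate the missing task from the exported XML on each server, then run it.
$taskXml = Get-Content -Path 'C:\temp\Content Usage.xml' -Raw
$servers = Get-Content -Path 'C:\temp\servers.txt'

foreach ($server in $servers) {
    Invoke-Command -ComputerName $server -ScriptBlock {
        param ($xml)
        # -Force overwrites any half-broken copy of the task.
        Register-ScheduledTask -TaskName 'Content Usage' `
            -TaskPath '\Microsoft\Configuration Manager\' -Xml $xml -Force | Out-Null
        Start-ScheduledTask -TaskName 'Content Usage' -TaskPath '\Microsoft\Configuration Manager\'
    } -ArgumentList $taskXml
}
```

Invoke-Command requires PowerShell remoting to be enabled on the targets and admin rights on each server.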

Give it a little time and rerun the SQL query to verify that the systems are reporting usage data and are being removed from the report.


Powershell Saturday – Nashville 

MVP Mick Pletcher is organizing a PowerShell Saturday event for Nashville. It is still in the planning stages, but I will be there. Once the details are finalized I will share them. Hopefully if you are in the area you can join in on the fun. These events are a great way to learn and interact with MVPs and others passionate about using PowerShell. The planned date is Saturday, June 24, 2017. I will be speaking on PowerShell for SCCM.

Updating SCCM Client Logging Options

If you spend any time supporting System Center Configuration Manager, you will develop a special love for log files. Often I find that I need to change the logging options on a small group of clients to troubleshoot. It could be because I need a larger file size or need to enable debug logging. While making the changes is fairly painless on one system, doing it on several can drive me to drink several cups of coffee. (To be fair, just going to work drives me to drink…coffee.) Here is the script I use to make the changes and then put everything back to normal afterwards. The script is below, but I am including the PowerShell Gallery and GitHub links as well.

Powershell Gallery
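
The gallery and GitHub copies are the real script; as a sketch of the core idea, the client logging options live in the registry under HKLM\SOFTWARE\Microsoft\CCM\Logging, and changes take effect after restarting the SMS Agent Host (the values below are illustrative):

```powershell
# Bump the client log size/history and enable debug logging, then restart CcmExec.
$logKey = 'HKLM:\SOFTWARE\Microsoft\CCM\Logging\@GLOBAL'
Set-ItemProperty -Path $logKey -Name LogMaxSize -Value 5242880  # bytes (5 MB)
Set-ItemProperty -Path $logKey -Name LogMaxHistory -Value 3     # rotated copies to keep
Set-ItemProperty -Path $logKey -Name LogLevel -Value 0          # 0 = verbose

# Debug logging is switched on under its own subkey.
$debugKey = 'HKLM:\SOFTWARE\Microsoft\CCM\Logging\DebugLogging'
if (-not (Test-Path $debugKey)) { New-Item -Path $debugKey | Out-Null }
Set-ItemProperty -Path $debugKey -Name Enabled -Value 'True'

Restart-Service -Name CcmExec
```

Record the original values before changing anything so you can put everything back when the troubleshooting is done.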

Reporting on the Total Physical Memory installed vs the Memory visible to the OS

Recently I was asked if SCCM could report on the total physical memory installed. "No problem, there is even a built-in report for that," I replied. No, the requester explained: on systems with a 32-bit OS installed, those reports only show the maximum memory the OS can see. So we sat down, went through Resource Explorer, and found that we are collecting the Win32_PhysicalMemory class from WMI as part of our hardware inventory. A quick little query gets the info, and that can be made into a report.
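
A sketch of that query, joining the per-DIMM inventory to the OS-visible figure (v_GS_PHYSICAL_MEMORY and v_GS_X86_PC_MEMORY are the standard inventory views; double-check the units in your data before publishing — Capacity0 is typically MB and TotalPhysicalMemory0 typically KB):

```sql
-- Installed DIMM capacity per system vs. what the OS reports.
SELECT s.Name0 AS [Computer],
       SUM(pm.Capacity0) AS [Installed MB],
       MAX(mem.TotalPhysicalMemory0) / 1024 AS [OS Visible MB]
FROM v_R_System s
JOIN v_GS_PHYSICAL_MEMORY pm ON pm.ResourceID = s.ResourceID
JOIN v_GS_X86_PC_MEMORY mem ON mem.ResourceID = s.ResourceID
GROUP BY s.Name0
ORDER BY s.Name0;
```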


Why you should not like “like”

So for the past two days I have been checking and triple-checking the Configuration Manager environment I support at work. Nothing like a hurricane being labeled a "worst case" storm to make you shake the dust off the DR plans. After all the backup checks and distribution point health and content validations, I started looking at the performance of various components. Overall nothing major was found, but while checking the collection evaluations I did find a few collections that stood out for poor performance. Only one was really nasty, at over 2 minutes. The collection is not very large in terms of members, but the query populating it needed a little work. So before I dig into the details, you may like to know how to identify the issue. You can find all the info you need in the colleval.log. If you use a little googlefu there are some good tips on how to parse the log with PowerShell and identify your troublemakers. Or you can use the Collection Evaluation Viewer from the System Center 2012 R2 Toolkit. If you have never used it before, The Config Ninja has a great post walking you through it, along with some reports to display the same info.

With the Collection Evaluation Viewer you can use the run time to identify collections that need some review. When you identify a collection to review, open the properties and look at the membership rules. Here is an example of a collection query that was running longer than it should.
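
The actual query is from my environment, so here is a hypothetical stand-in with the same shape (OU and product names are made up): every system in a location with a matching entry in Add/Remove Programs, matched with wildcards on both sides.

```sql
select SMS_R_System.ResourceId, SMS_R_System.Name
from SMS_R_System
inner join SMS_G_System_ADD_REMOVE_PROGRAMS
   on SMS_G_System_ADD_REMOVE_PROGRAMS.ResourceID = SMS_R_System.ResourceId
where SMS_R_System.SystemOUName = "CONTOSO.COM/WORKSTATIONS/DALLAS"
  and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName like "%SAP%"
```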

In some environments this may complete in just a few seconds, but it was taking over 2 minutes for me. On the whole this is a fairly normal requirement for a deployment: all computers in a location with Software X installed. But the database has to get all of the system records in the location, then check every product name each system has reported to see if the name is "like" the value in the query. Now there is nothing wrong with LIKE, and there are lots of cases where you must use it. But you have to understand that queries using it are more expensive for SQL to evaluate. So to check what the possible returns were, and what the wildcards were allowing the query to collect, I queried the view in SQL.
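
The check itself is a simple SELECT against the Add/Remove Programs inventory view with the same wildcard pattern (the pattern shown is illustrative):

```sql
SELECT DISTINCT DisplayName0
FROM v_Add_Remove_Programs
WHERE DisplayName0 LIKE '%SAP%';
```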

And the query returned a single product name: SAP GUI for Windows.

So to solve this query's performance issue, I just switched to = and the collection evaluation run time went down to 3.5 seconds.
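
With made-up names, the rewritten condition is just an exact match on the product name:

```sql
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName = "SAP GUI for Windows"
```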

Now that the longest-running evaluation was resolved, there were a couple of collections that were taking 15–20 seconds to complete. Not terrible, but not good either. As I looked through them I found a couple of things to share. First up is another collection for computers with a specific type of software installed.

So I went to SQL, checked how many display names are returned by the wildcard query, and got back two. So this time changing to an IN list query reduced the collection evaluation time.
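
With placeholder titles, the change looks like this: the wildcard clause becomes an explicit list of the two display names it was matching.

```sql
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName
    in ("Product X 1.0", "Product X 2.0")
```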

One thing you need to consider is whether the values being returned are going to change often, and whether you will know about the changes. In general I would save the original wildcard query for ad-hoc queries or reports. Explicit values are appropriate for the collection membership query because of the impact on the collection evaluation process.

For the last example I am going to use a query that needs to use LIKE. This query evaluates the computer name, and the author needed to include systems with a specific character starting the computer name and a specific range of ending values, along with a few exclusions.

Right away we know the original author did not understand WQL operators.  By using the correct operators when you must use like, the query is simplified and performs much better.

Evaluating for single characters with an underscore "_" is quicker than using the percent "%" for any and all character combinations. If you need to match a specific number of arbitrary characters, use multiple underscores. Specifying the range allows the query to be much shorter and simpler.
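
A hypothetical before/after in the same spirit (the naming convention is made up): say the rule is computer names that start with W and end in a two-digit number from 00 to 39, except the 19 series. WQL LIKE supports _ for any single character and [] for a range or set.

```sql
-- Before: one OR'd wildcard clause per ending value (long and slow).
-- where Name like "W%00" or Name like "W%01" or Name like "W%02" ...

-- After: one clause using a range for the final two characters.
where SMS_R_System.Name like "W%[0-3][0-9]"
  and SMS_R_System.Name not like "W%19"
```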


Troubleshooting Config Manager Content Distribution with SQL

I have been spending a good portion of my time troubleshooting content distribution issues for my distribution points lately and wanted to share my process. The distmgr.log is the starting point for identifying issues. I deal with a large number of distribution points with very active distributions and deployments, and sometimes it is helpful to have a way to cut through the noise and focus on distributions that are having issues. I use the following SQL query to start that process.
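
The query I use is along these lines, built on the package distribution status summary view (a sketch; verify the view against your site database version):

```sql
-- Packages that are not fully distributed, by package and distribution point.
-- State = 0 means the content installed successfully on that DP.
SELECT dps.PackageID,
       p.Name AS PackageName,
       dps.ServerNALPath,
       dps.State
FROM v_PackageStatusDistPointsSumm dps
JOIN v_Package p ON p.PackageID = dps.PackageID
WHERE dps.State <> 0
ORDER BY dps.PackageID, dps.ServerNALPath;
```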

For this query a state of 0 indicates the content distribution to that distribution point was successful. But just because the state is not 0 does not mean there is an issue; this query just gives an indication of what is in progress and should be active in the distribution logs. I created a report that will give me a summary of the distributions in progress and how many distribution points are remaining.
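
The report behind that summary is roughly the same query rolled up per package (again a sketch against the assumed status summary view):

```sql
-- How many distribution points each in-flight package still has to go.
SELECT dps.PackageID,
       p.Name AS PackageName,
       COUNT(*) AS [DPs Remaining]
FROM v_PackageStatusDistPointsSumm dps
JOIN v_Package p ON p.PackageID = dps.PackageID
WHERE dps.State <> 0
GROUP BY dps.PackageID, p.Name
ORDER BY [DPs Remaining] DESC;
```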


The report shows a normal day for me. If I refresh the report after a few minutes and the overall count goes down for each package, then everything is progressing and I will have a good day. But if the counts do not go down, I have some checking to do. I can check the Content Status for packages that I need more details on, or run the SQL query to get more detailed information. From there the details drive the next steps: re-sending the content, removing and re-sending, adding disk space, resolving network issues, etc.

Happy Troubleshooting

There are no task sequences available for this computer

Occasionally I will see the error message "There are no task sequences available for this computer" in my work Config Manager environment. If, after checking the simple things like ensuring the task sequence is deployed to a collection and deployed correctly, nothing turns up, I am forced to move on to the hard stuff like reading the log files. I had a case like this that was a bit of a stumbling block for a while, but thanks to the fine folks at Microsoft Premier Support we were able to resolve everything.

Some background info: I have a large test environment that we use to test all installs and OSD builds before moving them to our production environment. Last Thursday I got a report from a team working on a new OSD task sequence that they had stopped seeing all of their task sequences in the selection menu. Then on Friday another team reported the same issue, so I had them collect smsts.log files and send me the info. Just for fun, our patching team started prepping for the release of the next round of patches and began cleanup and creation of new software deployment packages. This, along with normal deployment activity, ran our secondary site out of space twice. Fast forward to this Thursday: we have managed to get everything else working, but no OSD builds will start. They are all failing with the error "No assigned task sequence". When reviewing the smsts.log, the error occurs right after making the request for policy assignments.

I have seen similar behavior with corrupt policy and checked for that with this SQL query

If you get any rows returned you can remove the bad policy records with this SQL statement

But in this case there were no rows returned, so in lieu of pulling the remaining two strands of hair off my head, a case was opened with Premier Support. After several hours of checking and rechecking settings and trying various things we were still in the same boat. Then the support engineer said to try this SQL query.
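
From memory of the case, the query looked at the machine ID cross-reference table; something along these lines (table and column names are from my notes on this incident; replace the GUID placeholder with the SMBIOS GUID of the failing machine):

```sql
SELECT MachineID, ArchitectureKey, GroupKey, GUID
FROM MachineIdGroupXRef
WHERE GUID = '<SMBIOS GUID of the failing machine>';
```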

While checking the returned row for our machine's MachineID, we noticed that the ArchitectureKey value was not set correctly. In this case the ArchitectureKey value was the negative value of the MachineID. Because we were working with the unknown computer records, we set ArchitectureKey for those entries to 2. If you are working with an existing client record the correct value would be 5. After correcting those values the OSD build began to pull policy assignments normally.

*** 10/4/16: Updated the SQL statements to only identify unknown computer records. Deleting regular computer records will not cause the issue, and the fix would partially restore the deleted object.

If you are in the same boat use this SQL query to identify the issue

And this SQL Statement to correct the issue
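
A sketch of the identify/fix pair, scoped to unknown computer records (the join of MachineIdGroupXRef.MachineID to UnknownSystem_DISC.ItemKey is an assumption from my notes; direct edits to the site database are unsupported, so do this with support on the line and a fresh backup):

```sql
-- Identify: unknown computer records whose ArchitectureKey went negative.
SELECT x.MachineID, x.ArchitectureKey
FROM MachineIdGroupXRef x
JOIN UnknownSystem_DISC u ON u.ItemKey = x.MachineID
WHERE x.ArchitectureKey < 0;

-- Correct: unknown computer records should have ArchitectureKey = 2
-- (an existing client record would be 5).
UPDATE x
SET x.ArchitectureKey = 2
FROM MachineIdGroupXRef x
JOIN UnknownSystem_DISC u ON u.ItemKey = x.MachineID
WHERE x.ArchitectureKey < 0;
```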

We are still doing some post-issue research for root cause, but it appears that an admin user deleted all members of a collection rather than removing the membership rule on the collection. This appeared to cause the negative values and opened the door to OSD Hell week.