November 2010 - Posts

If you have a complex computer environment (typical of large companies, but true elsewhere I’m sure), then you probably have multiple Active Directory Organizational Units (OUs). If you have multiple ConfigMgr hierarchies, you may intend that certain clients go into certain OUs, each corresponding to the relevant hierarchy (and then apply GPOs to get the clients into the right hierarchies). Or maybe clients end up in the wrong OUs by accident. In such cases client health problems could boil down to confirming that computers are in the right OU. Or if they’re not in any OU of the intended domain then you have another problem, though with a similar effect. Thus checking the OU of a computer, or a bunch of computers, can often help you understand why you’re missing expected clients.

So how do you check the OU of the computer(s)? There are plenty of ways, including scripts (among my favorites), but sometimes a command-line solution is the best bet. It’s quick and easy. In that case, you might create a batch file that runs the following command, taking the computer name (or computer name pattern, as here) as a parameter.

ldifde -f computers.ldf -s <domain.company.com> -d "dc=domain,dc=company,dc=com" -r "(&(objectCategory=computer)(cn=<computer_name_pattern>*))" -l cn,ou

You won’t need the “-s” parameter if the “-d” domain is the same one that the computer you’re running the command on is joined to. The CN can be a specific computer name or a pattern (with “*” as the wildcard), though on a large domain the command is much faster if you know at least the first part of the computer name.
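If you’d rather post-process the export in a script, here’s a minimal sketch (Python, purely for illustration) that pulls each computer’s name and parent container out of the LDIF file. The file name matches the “-f” argument above, and the sketch assumes the simple one-“dn:”-line-per-object layout; very long DNs can be line-wrapped in real exports.

```python
# Parse the LDIF export produced by the ldifde command above and map each
# computer name (CN) to its parent container (the OUs and DCs in its DN).

def computer_ous(ldif_text):
    """Map computer CN -> parent container, from 'dn:' lines in LDIF text."""
    results = {}
    for line in ldif_text.splitlines():
        if line.lower().startswith("dn: cn="):
            dn = line[4:]                     # e.g. CN=PC01,OU=Sales,DC=corp,DC=com
            parts = dn.split(",")
            name = parts[0].split("=", 1)[1]  # the CN value itself
            results[name] = ",".join(parts[1:])  # everything after the CN
    return results

# Hypothetical sample matching the shape of an ldifde export:
sample = (
    "dn: CN=PC01,OU=Sales,DC=corp,DC=com\n"
    "changetype: add\n"
    "cn: PC01\n"
    "dn: CN=PC02,OU=HR,DC=corp,DC=com\n"
)
print(computer_ous(sample))
```

Run against the real computers.ldf (e.g. `computer_ous(open("computers.ldf").read())`), this quickly shows which machines landed outside the OU you expected.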

There are plenty of articles about LDIFDE on the internet, so it’s easy to find examples for similar problems, or the details to figure it out for yourself. LDIFDE is available on domain controllers, but you can also get it on any Windows Server 2008 R2 server (and possibly others) by adding the “Active Directory Lightweight Directory Services” role (which doesn’t make it a domain controller). Or you can grab the relevant files and use them on Windows 7 (I did that long ago, and have since forgotten the details).

p.s. Sorry to my Facebook friends who would rather not be spammed on such topics. I’m trying to figure out how to disconnect my blog from Facebook (it got linked long ago).

I hope that those of you evaluating/beta testing ConfigMgr 2012 (previously known as v.Next) are checking out the wonderful client health additions. I’ll get into more details on those soon but for now one issue you may encounter is that when you’re looking at the ccmeval.exe results (in the v_CH_EvalResults view) you’ll find that the “Result” column is numeric. That’s fine but what do those numbers mean? There’s no lookup table (at least not yet), so all you can do is guess.

My research suggests the following values, which you can easily add to your queries as I’ve done in a CASE clause. The final terminology will likely be different but you get the idea (I hope).

select healthCheckDescription,
    case Result
        when 1 then 'TBD'
        when 2 then 'n/a'
        when 3 then 'test failed'   -- and thus fix not tried and/or fix not available
        when 4 then 'fix failed'    -- and the test must have failed too, in order for the fix to be tried
        when 5 then 'n/a - dependent test failed'
        when 6 then 'fix worked'    -- so the test must have failed
        when 7 then 'all tests passed'
        else 'unexpected result'
    end 'result',
    count(distinct netbiosname) 'clients'
from v_CH_EvalResults
group by healthCheckDescription, Result
order by count(*) desc
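If you’re post-processing ccmeval results outside SQL, the same mapping works as a small lookup table. A sketch in Python, using the tentative descriptions from my research above (again, not official terminology):

```python
# Tentative meanings of the ccmeval Result codes, per the research above.
# These are my working labels; the final product terminology may differ.
CCMEVAL_RESULTS = {
    1: "TBD",
    2: "n/a",
    3: "test failed",                  # fix not tried and/or fix not available
    4: "fix failed",                   # the test must have failed too
    5: "n/a - dependent test failed",
    6: "fix worked",                   # so the test must have failed
    7: "all tests passed",
}

def describe_result(code):
    """Translate a numeric Result value, like the CASE clause above."""
    return CCMEVAL_RESULTS.get(code, "unexpected result")

print(describe_result(7))   # all tests passed
print(describe_result(42))  # unexpected result
```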

One of my more common needs is to analyze log files (which are really just text files) for recurring issues. If lots of clients have the issue, or some clients have the issue a lot, then it's worth pursuing (if it's rare then it's just 'one of those things'). So how do we do such analysis? We could spend a lot of time reading such files, or delegate that work to someone, but the more practical solution is to get a computer to do it - it's actually quite easy.

So how do we do that? The following code gives a starting point. Basically you open the file, split it into lines, find the lines you're interested in, and then do something with the parts that are useful. The code doesn't do all of that but it does the core bits. Finding the interesting lines and interesting parts are left to you (think "instr" and "mid" functions especially).

filename = "<your_log_file>"   ' path to the log file to analyze
set fso = CreateObject("Scripting.FileSystemObject")
set logfile = fso.opentextfile( filename )
content = logfile.readall
logfile.close

log_lines = split( content, vbCRLF )
for j = 0 to ubound(log_lines)
    values = split( log_lines(j), "," )    ' it's possible your files are not comma delimited...
    ProcessEntry values(0), values(1)      ' replace with your own sub that does something with the data
next

Posted by pthomsen | with no comments

ConfigMgr v.Next has a lot of wonderful improvements, and I look forward to talking about my favorites over time. But often the small ones are very delightful, and I’m pleased to share my thoughts on those as well. One of them is that the ConfigMgr v.Next site settings are entirely stored in the database, and thus can be queried. Historically they’ve been stored in the site control file, and thus required manual or tricky file parsing to read. In ConfigMgr 2007, if not earlier, there was a database representation of those values but that took a lot of parsing so that wasn’t easy either. In v.Next they’re only in the database and are largely already parsed for you.

The following query should make them reasonably easy to read if you’re looking for client-specific settings. There are about 174 such settings, so that’s a good start. But if you want other settings then you’ll need to do variations on this query to get them (and I hope to cover them in future blog postings).

select ClientComponentName 'Agent', Flags 'Enabled', Name 'Property',
    case Value1
        when 'REG_SZ' then Value2
        when 'REG_DWORD' then cast(Value3 as varchar(20))
        when '' then cast(Value3 as varchar(20))
        else Value1
    end 'Value'
from dbo.SC_ClientComponent agents
join SC_ClientComponent_Property props on agents.ID = props.ClientComponentID

You’ll see lots of details related to software inventory, hardware inventory, software metering, software updates, etc. Very useful stuff. For example, you can confirm all your sites are consistently configured. You can confirm your predecessor configured things reasonably. Stuff like that.

The trickiest problem you’ll soon notice is that properties like agent schedules are stored in WMI tokens, which mean a lot to WMI but not so much to you and me. I don’t know of a SQL mechanism to translate them, so that’s when I revert to vbscript. The following scriptlet gives you an idea of how to do that. Just substitute the relevant values.

server = "<server>"
sitecode = "<sitecode>"
Set loc = CreateObject("WbemScripting.SWbemLocator")
Set WbemServices = loc.ConnectServer(server, "root\sms\site_" & sitecode)

Set clsScheduleMethods = WbemServices.Get("SMS_ScheduleMethods")

Interval = "0001200000100018"  'insert your token here

clsScheduleMethods.ReadFromString Interval, avTokens
For Each vToken In avTokens
    wscript.echo vToken.GetObjectText_
Next


Some coworkers of mine (Partha Chandran, Chandra Kothandaraman, and Jitendra Kalyankar) recently released a whitepaper on power management which we hope you will find useful. In it they include the following report which I think is especially informative:

[Image: report graphing average-day power consumption, with separate lines for computer and monitor usage by hour]

A bit of trivia is that during the last couple of years one of my biggest projects was a power management solution evaluation. I didn’t contribute to the ConfigMgr R3 power management solution but I did help to look for solutions that would help us to save power dollars and CO2 like any other company. Early on I came up with the idea of graphing the power consumption data over the average day, as shown in this report. I hadn’t seen that done by anyone before, so I may well have ‘invented’ that idea. If so it is one of my favorite contributions to the computer management field.

My coworkers explain some of the benefits of this report in the whitepaper but there’s a few points I think are worth making:

  • if maximizing power savings were your only goal then the ideal scenario would be for the computer and monitor lines to be flat along the X axis – i.e. no power consumption. Of course that’s not realistic, because computers provide considerable value when used properly, so the trick is to find the right curve
  • most computers are not shared amongst shift workers so the computer power consumption should reflect people’s work patterns. If they work 8 hours a day, 5 days per week, then the curve should reflect that if you’re only looking at workdays
  • most computers are used by users, as opposed to being used by ‘service’ programs such as server applications, test software, ‘build’ software or other uses. Therefore when the user is not present the computer should be ‘off’.
  • users almost always use their computers via the monitor, so if the monitor is on then it’s reasonable for the computer to be on. If the monitor is off then there’s rarely reason for the computer to be on.
  • users generally work 9 to 5 (more or less) so late night hours (or weekends) should mean the monitor is off and thus the computer is off.

So in an ideal world:

  • both the monitor consumption and computer consumption would be almost flat along the X axis except from 9AM to 5PM (or whatever hours your workers work, and assuming you’re using local time)
  • any space between the monitor consumption and computer consumption lines is a wasted opportunity for savings (i.e. the user is not using the computer and yet it’s on)
    • exceptions can be made for a middle-of-the-night maintenance window and for an early morning get-the-computer-ready-for-the-user early power-up
    • exceptions can also be made for the fact that complex computers (as opposed to simple devices) do require some time to get everything up to speed, so it’s reasonable for the computer to stay powered up during work hours when users are away from it for short periods. The lunch hour may be the only time during the work day when power consumption could commonly go down
  • latency between the monitor line going down and the computer line going down reflects your power management policies. During the work day it’s reasonable to have a large latency because users come and go to meetings, lunch, hallway conversations, etc., but after hours that’s less likely
  • exceptions can also be made for special computers that are used for automated testing, various server functions, remote access by power users, etc., so those two lines in reality won’t quite converge and are very unlikely to ever get quite to the X axis.

From the above report we can conclude:

  • power management did save money in that both lines generally did move closer to the X axis and also moved closer to each other
  • there’s a lot of opportunity for further power savings but we have to remember that this is Microsoft which is by its nature a company of power users who often use their computers more like servers and really do access them remotely after hours. How much more savings can be made is difficult to judge in this case.

Even if you don’t impose power management on your users, I believe that doing such a report on your machines will be quite informative.

p.s. Note that this post applies to any power management solution that provides data-by-hour details, directly or otherwise.


If you’ve done ConfigMgr queries for a while you’ve probably done some reports based on operating system. That has meant using the Caption0 column from the v_GS_Operating_System view, but you probably found you had a lot of variations within the various operating system families. For example, Vista Business, Vista Enterprise, etc. But what if you would rather categorize at a higher level (by ‘family’), such as XP vs. Vista vs. Win7?

The following query shows how you can do that level of categorization. The trick is to use CASE clauses with LIKE clauses, like so:

declare @grand_total integer

select @grand_total = COUNT(distinct sys.Name0)
from v_R_System sys
join v_CH_EvalResults eval on sys.ResourceID = eval.MachineID
join v_GS_OPERATING_SYSTEM os on sys.ResourceID = os.ResourceID
where Client0 = 1 and Obsolete0 = 0

select @grand_total 'total clients with health results'

select
    case
        when Caption0 like '%Windows 7%' then 'Win7'
        when Caption0 like '%XP%' then 'XP'
        when Caption0 like '%Server 2008 R2%' then 'Server 2008 R2'
        when Caption0 like '%Server% 2008%' then 'Server 2008'
        when Caption0 like '%Server% 2003%' then 'Server 2003'
        when Caption0 like '%Vista%' then 'Vista'
        when Caption0 like '%Hyper-V%' then 'Server 2008 R2'
        else 'other'
    end 'OS'
    ,COUNT(distinct sys.Name0) 'clients with health results'
    ,COUNT(distinct sys.Name0) * 100 / @grand_total '% of all clients'
    ,SUM(Active0) 'active clients'
    ,SUM(Active0) * 100 / COUNT(distinct sys.Name0) '% active'
    ,SUM(case Result when 6 then 0 when 7 then 0 else 1 end) 'unhealthy clients'
    ,SUM(case Result when 6 then 0 when 7 then 0 else 1 end) * 100.0 / SUM(Active0) '% unhealthy/active'
from v_R_System sys
join v_CH_EvalResults eval on sys.ResourceID = eval.MachineID
join v_GS_OPERATING_SYSTEM os on sys.ResourceID = os.ResourceID
where Client0 = 1 and Obsolete0 = 0
group by
    case
        when Caption0 like '%Windows 7%' then 'Win7'
        when Caption0 like '%XP%' then 'XP'
        when Caption0 like '%Server 2008 R2%' then 'Server 2008 R2'
        when Caption0 like '%Server% 2008%' then 'Server 2008'
        when Caption0 like '%Server% 2003%' then 'Server 2003'
        when Caption0 like '%Vista%' then 'Vista'
        when Caption0 like '%Hyper-V%' then 'Server 2008 R2'
        else 'other'
    end
order by COUNT(*) desc
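The first-match-wins pattern list works outside SQL too. Here’s a sketch of the same idea in Python (with simple substring matches standing in for the LIKE wildcards, and illustrative caption strings); note that, as in the CASE clause, the more specific pattern must come before the broader one:

```python
# First-match-wins OS family categorization, mirroring the CASE clause above.
# Order matters: 'Server 2008 R2' must be tested before the broader 'Server 2008'.
OS_FAMILIES = [
    ("Windows 7", "Win7"),
    ("XP", "XP"),
    ("Server 2008 R2", "Server 2008 R2"),
    ("Server 2008", "Server 2008"),
    ("Server 2003", "Server 2003"),
    ("Vista", "Vista"),
    ("Hyper-V", "Server 2008 R2"),
]

def os_family(caption):
    """Return the OS 'family' for a Caption0-style string."""
    for pattern, family in OS_FAMILIES:
        if pattern in caption:
            return family
    return "other"

print(os_family("Microsoft Windows Server 2008 R2 Enterprise"))  # Server 2008 R2
print(os_family("Microsoft Windows 7 Enterprise"))               # Win7
```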

This is actually for ConfigMgr v.Next client health, but the concept is applicable to any such query for SMS or ConfigMgr (SCCM).

p.s. Credit goes to a co-worker, Benjamin Reynolds, for pointing out this option. I’ve done plenty of CASE statements over the years but didn’t realize that LIKE and similar clauses were an option for the sub-clauses.
