


Can SCCM Configuration Baselines Replace GPOs?

Upfront I’ll state that Active Directory (AD) Group Policy Objects (GPOs) are great for what they were originally designed to do: manage foundational settings on a large number of users and computers. Since their original release there have been some great advancements, such as Group Policy Preferences (GPP), WMI Filtering, and the ability to easily extend them with ADMX files. There are also features that were introduced, and remain today, that should have gone by the wayside a long time ago: logon/logoff scripts, MSI deployments, and automatic home drive mapping, to name a few. Where many companies run into issues is the over-use or incorrect use of GPOs. Behaviors such as creating a new GPO to manage a single application’s settings, creating Organizational Units (OUs) specifically to host the User/Computer objects that a GPO is linked to, creating AD Security Groups (ADSGs) to filter which objects a GPO applies to, and blocking GPO inheritance on some OU branches to override up-stream GPOs all combine into an environment that is difficult to manage, plagued with client-side issues that are hard to track down, and inconsistent in user experience.

For over a decade now, there has been a shift away from one-to-one associations for digital objects. For example, documents, pictures, music, video, and emails are not organized, searched, or reported on by the folder they reside within, but rather by metadata tags embedded within the file object. After all, the picture of your child surfing during your last vacation to Hawaii isn’t well categorized if you put it in a single folder called “Kids”, “Surfing”, or “Vacations”. But if tagged with like-named metadata, the single picture is subsequently associated with all three instead of just one and is thus easier to find.

A similar movement started when GPOs introduced WMI Filtering. With that, instead of linking a single GPO to multiple OUs, or worse, lumping all your User/Computer objects into an OU specifically created for your GPO, you could instead link your single GPO to a high-level up-stream location and use metadata on the local computer (or User AD object) to determine whether the GPO should apply. For instance, if you have managed workstation builds you need to target, you can use custom WMI classes on those units and target the GPO with a WMI Filter so the Computer objects can reside anywhere in your AD. WMI Filtering did suffer a hit to its reputation when administrators started using WMI queries that significantly taxed the local system (such as file system queries, Windows Installer queries, etc.), and to this day I run into administrators who are admittedly against WMI Filter use, although they make heavy use of the GPP conditions. It all boils down to using the right tool for the right job.

Assumptions

I make the assumption in this article that you are generally familiar with GPOs, GPPs, and ConfigMgr 2012 or greater Configuration Items (CI) and Configuration Baselines (CB). You don’t need to be advanced in any of these.

Scenario

Consider this scenario. You are making use of the Microsoft Office Telemetry Dashboard, which includes a client-side Agent (Agent) and a centralized Database (TDB), to manage your Microsoft Office (MSO) environment. Client-side, the Agent must be installed and configured before it can start collecting information. Your environment is mixed, in that you have MSO versions deployed that include v2007, v2010, v2013, and v2016. As part of the Agent configuration, Management wants additional custom information associated with the Agent’s report, including the Department the specific computer is associated with. You, as a Configuration Manager (ConfigMgr) administrator, want a more precise “primary key” that can link the TDB information to your ConfigMgr information (the TDB only reports back the NetBIOS name of the computer; the Serial Number was determined to be a better identifier of the computer). This leaves us with the following client-side configuration components:

Agent Deployment
•  MSO versions below v2013 only – it is installed by default on 2013 and above
•  Uses two local Scheduled Tasks to kick-off and maintain the Agent state

Static Local Machine Policy Settings
•  Microsoft provides ADMX files for the limited subset of information that can be configured per-machine
•  Information stored in the protected HKLM\SOFTWARE\Policies key

Static Current User Policy Settings
•  Microsoft provides ADMX files for the full configuration of per-user information
•  Information is stored in the protected HKCU\Software\Policies key
•  Up to 4 static-valued “Tags” can be configured via the GPO

Dynamic Current User Policy Settings
•  The specific User or Computer Department identification
•  The local computer’s Serial Number

Pros & Cons for Each Requirement

Each of these actions has a variety of potential solutions, so let’s take a first pass using what Microsoft recommends and the tools available in our environment.

Agent Deployment
•  Microsoft conveniently provides this as a single-file Windows Installer (.MSI) package, in two flavors: 32-bit and 64-bit. It therefore has a unique GUID associated with the installation. The Agent is already installed/present on MSO installations of v2013 and v2016, so you only need to deploy it to down-level versions.

GPO
•  If you already have ConfigMgr in your environment, you’ll obviously use that. But if you were to deploy via GPO, correctly targeting only down-level MSO versions and isolating between 32-bit and 64-bit would require either multiple GPO objects with a lot of isolation and filtering to target the correct clients, or a logon script that evaluates the environment and installs based on what it finds.

ConfigMgr
•  Making use of an Application object, Deployment Types, and Detection Methods to configure the installation, and Asset Inventory to create specific Device Collections, it is simple to deploy the Agent against the target devices and have ConfigMgr handle the logic: installing when needed (down-level MSO versions that do not have the Agent already installed), identifying when the Agent is already there (via the MSI GUID, or just the presence of the Agent’s EXE), and deploying the correct version (Deployment Types).
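As a hedged illustration, a script-based Detection Method for the Agent might simply test for the Agent executable. The Test-AgentPresent function name and the install path below are my own assumptions for demonstration, not documented constants:

```powershell
# Hedged sketch of a script-based Detection Method: emit output only
# when the Agent executable exists; no output means "not installed".
function Test-AgentPresent {
    Param([Parameter(Mandatory=$True)][string]$Path)
    Return (Test-Path -Path $Path -PathType Leaf)
}

# The path below is an illustrative assumption for a 32-bit Office 2010
# install location, not a documented constant.
$AgentExe = "C:\Program Files (x86)\Microsoft Office\Office14\MSOIA.EXE"
If (Test-AgentPresent -Path $AgentExe) { Write-Host "Installed" }
```

A Detection Method script signals “installed” to ConfigMgr by producing any standard output, which is why the script stays silent when the file is absent.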

Winner: ConfigMgr

Static Local Machine Policy Settings

Although Microsoft provides an ADMX file for their settings, they bizarrely document that a CB is a good choice for the per-machine settings. The settings are identical for any MSO version, thus targeting can be somewhat relaxed.

GPO
• The configuration is pure registry settings, with a target of the Local Machine hive and the protected Software\Policies key. GPOs are designed precisely to update this area of the registry, and in a non-permanent way: once a GPO is removed or set to Not Configured, entries in that part of the registry are removed. As settings in this area of the registry require local administrative rights to update, GPO is a logical and proven method for these settings.

• Like any GPO, targeting is difficult and broad, resulting in evaluation and application on a percentage of computers that don’t need it.

ConfigMgr
• Configuration Items default to registry settings, and as they execute with elevated permissions within the system context, they are well suited to this task.

• Targeting is easy, resulting in evaluation and application only on computers where you need it applied.

Winner: Tie

Static Current User Policy Settings

Winner: GPO

I’ll not do a comparison: ConfigMgr CI/CBs have the ability to write to the HKCU registry hive, but only in the current user context. Although this works for some types of registry manipulation, the Software\Policies key is a protected location and thus not accessible in the standard user context for anything but read access. GPOs, however, can execute elevated in the user context and thus are the clear choice.

Re-Evaluation

I’ve skipped the Dynamic Current User Policy Settings from the previous section, because in a nutshell, neither technology natively handles dynamic data, and thus both technically lose. Without this component, it would be easy to conclude that with the exception of application deployment, there is no clear advantage to using ConfigMgr for settings management over GPOs. GPO targeting may be more difficult and broad than desired, but as the updates are written to the Software\Policies key, there is little harm in applying them to computers without the software.

And for our given scenario, this is thus far true. But let’s return to the need for writing dynamic data, i.e. data that would be different for each computer it is written to. For our example, it is clear that some form of discovery needs to happen in order to get the Serial Number and Department Code into the registry so that the Agent can include them as the additional per-user Tag information sent up to the server. Dynamic information implies discovery, discovery implies some form of “moving part” to retrieve the information, and a “moving part” implies an engine to execute instructions. In the computer world that means a script, be it VBScript, PowerShell, or if you like to punish yourself, Microsoft’s stand-alone JScript. GPOs can’t utilize scripts to configure the environment, but CIs can.

Configuration Items and Scripting

CIs already have a tactical advantage over GPOs: although GPOs have the native ability to install software, write to HKLM and HKCU, map drives, map printers, and blindly run scripts, CIs can do all of that with active monitoring and reporting, plus query and manipulate the local file system and the IIS metabase, and make lookup calls to AD, WMI, XML files, and assemblies as well. But as we’re evaluating whether CIs can replace GPOs, and our scenario centers on dynamic data, I’ll focus on the CI’s scripting abilities.

There are three levels of scripting that a CI can execute, each one being monitored for feedback so that your script can tell the CI what is going on. Those three areas are: Detection (should this CI be applied to the user/device), Discovery (is the current environment configured as expected), and Remediation (bring the current environment into compliance). CIs use the three feedback mechanisms built into any of the supported languages to determine state: exit code, standard output, and error output. The general output matrix takes a bit to get used to, and you can find more complex tables out on the Internet, but simplified it looks like this:

Exit Code   Standard Output   Error Output   CI Understanding
Zero (0)    Null              Null           State = No
Zero (0)    Null              Not Null       State = Error
Zero (0)    Not Null          Either         State = Yes
Not Zero    Either            Either         State = Error
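Read as logic, the simplified matrix can be captured in a tiny helper. The function name and state strings below are my own shorthand for illustration, not a ConfigMgr API:

```powershell
# Pure restatement of the simplified output matrix above; "Yes"/"No"/
# "Error" are shorthand state names, not ConfigMgr-defined values.
function Get-CIState {
    Param(
        [Parameter(Mandatory=$True)][int]$ExitCode,
        [string]$StdOut,
        [string]$StdErr
    )
    If ($ExitCode -ne 0) { Return "Error" }   # a non-zero exit code always wins
    If ($StdOut)         { Return "Yes" }     # non-null stdout means state = Yes
    If ($StdErr)         { Return "Error" }   # stderr with empty stdout is an error
    Return "No"                               # all quiet, zero exit: state = No
}
```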

All that the CI script engine is doing is executing your script, capturing any output from your script, and monitoring the script engine’s exit code. As the CI doesn’t really care what the output is, here is an easy PowerShell function you can call in your script to correctly set the CI expected output in any of the three execution states:

function ExitScript {
    # Emits the output/exit-code combination the CI engine expects for
    # each of the three result states.
    Param(
        [Parameter(Mandatory=$True)]
        [ValidateSet("Present","NotPresent","Failure")]
        [string]$ResultType
    )

    [int]$ExitCode = 0
    [string]$StdOut = ""

    Switch ($ResultType) {
        "Present"    { $StdOut = "YES" }
        "NotPresent" { $StdOut = "NO" }
        "Failure"    {
            $ExitCode = 1
            $StdOut   = "ERROR"
        }
    }

    If ($StdOut.Length -gt 0) { Write-Host $StdOut }

    $Host.SetShouldExit($ExitCode)
    Exit $ExitCode
}

Let’s return to our dynamic data problem, which actually poses two problems for us. The first is that the information has to be discovered client-side; the second is that it has to be written to a protected registry location within the HKCU registry hive. We know that GPOs aren’t designed for dynamic data, and CIs aren’t designed for elevated user-context actions. Here is where you can “think outside the box” to accomplish the task. CIs run elevated within the System context when executing per-device instead of per-user. That means we can write to protected locations. Windows also exposes all user profiles on the device via the registry by way of the HKEY_USERS hive. As such, CIs can write to protected locations within the “current user” context without having to execute from within that user’s context. Let’s demonstrate:

First, you’ll need to access the system context. This is easily done by using the Windows Sysinternals tool, PSEXEC. From an elevated command prompt, simply enter:

PSEXEC.EXE -s -i CMD.EXE

From there, enter into PowerShell either via POWERSHELL.EXE or POWERSHELL_ISE.EXE.

Although you’re now in the system context, and the “current user” environment isn’t accessible, you can still determine who the current user is by running the script below. Quick note: this only works if the user is logged on locally. If the user has RDPed in, no UserName is populated within the WMI class.

[string]$UserName = (Get-WmiObject -Class Win32_ComputerSystem -Property UserName).UserName
$objUser = New-Object System.Security.Principal.NTAccount($UserName)
$CUSID = ($objUser.Translate([System.Security.Principal.SecurityIdentifier])).Value
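Given the RDP caveat above, a defensive wrapper can make the empty-UserName case explicit before attempting the SID translation. ConvertTo-UserSid is my own illustrative name, not a built-in cmdlet:

```powershell
function ConvertTo-UserSid {
    # Translates a DOMAIN\User name to its SID string, or returns $null
    # when the name is empty (e.g., the console user queried via WMI
    # while only RDP sessions are active).
    Param([string]$UserName)

    If ([string]::IsNullOrEmpty($UserName)) { Return $null }
    $objUser = New-Object System.Security.Principal.NTAccount($UserName)
    Return ($objUser.Translate([System.Security.Principal.SecurityIdentifier])).Value
}

# In the deployed CI script (commented out: requires a live device):
# $CUSID = ConvertTo-UserSid -UserName (Get-WmiObject -Class Win32_ComputerSystem -Property UserName).UserName
# If (-not $CUSID) { <# no console user; report not applicable rather than erroring #> }
```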

To then access the current user’s registry hive, you only need to access HKEY_USERS\[SID]. There is, however, a caveat if you’re using PowerShell in that although PowerShell understands HKLM and HKCU as registry hives, it doesn’t immediately understand HKU, and thus you need to define it for PowerShell first:

New-PSDrive -PSProvider "Registry" -Name "HKU" -Root "HKEY_USERS"

With the registry hive now understood by PowerShell, you can read and write to HKU like any other registry location, which gives you elevated access to the current user’s privileged registry locations from within the system context and eliminates the need to use GPOs to gain said access.
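As a concrete sketch, a small helper can build the per-user path before writing. The Contoso\TelemetryDemo sub-key and the Tag4 value name are hypothetical placeholders for demonstration, not the Agent’s documented locations:

```powershell
# Helper that builds the HKU-based path to a user's protected Policies
# key. The Contoso\TelemetryDemo sub-key is a hypothetical placeholder.
function Get-UserPolicyPath {
    Param([Parameter(Mandatory=$True)][string]$Sid)
    Return "HKU:\$Sid\Software\Policies\Contoso\TelemetryDemo"
}

# With the HKU drive mapped as shown above (commented out: requires the
# system context on a live device):
# $Path = Get-UserPolicyPath -Sid $CUSID
# If (-not (Test-Path -Path $Path)) { New-Item -Path $Path -Force | Out-Null }
# Set-ItemProperty -Path $Path -Name "Tag4" -Value $SerialNumber
```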

A second benefit to this style of access is that you can also access all user profiles on the device at once, instead of needing to wait for each user to logon first:

# Note: SID sub-authority values vary in length, so match digit runs
# generically rather than with fixed counts.
$RE = '^HKEY_USERS\\S-1-5-21-\d+-\d+-\d+-\d+$'
$Users = Get-ChildItem -Path "HKU:\" | Where-Object { $_.Name -match $RE }
ForEach ($User in $Users) { Write-Host $User.Name }

To answer our scenario question then surrounding dynamic data, our CI would be configured with the following flow:

1) A CI is created, and configured to use a Detection script to identify if the Agent is present on the local device (MSOIA.EXE)

2) A Settings object is created, using a Script as the Setting Type

2a) The Discovery Script first identifies what our dynamic data is; for this scenario it queries WMI for the Serial Number and whatever system identifies which Department the device belongs to. It next creates the HKU PS drive, identifies the current user’s SID, and then checks whether the current user’s registry contains the entries for the dynamic data, reporting the compliance state back to the CI via a Standard Output result of YES or NO to indicate whether Remediation is needed.

2b) The Remediation Script performs the same dynamic data discovery, environmental preparation, then writes the data to the Current User’s registry.

2c) Although we’re writing to the Current User, the Settings item keeps the “Run scripts by using the logged on user credentials” unchecked.

2d) The Compliance Rules are configured to look for a YES value from the Discovery Script to indicate compliance, otherwise it will trigger the Remediation Script.
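The Discovery half of the flow above might be sketched like this. The department lookup is stubbed out, and the Contoso\TelemetryDemo registry path is a hypothetical placeholder rather than the Agent’s documented location:

```powershell
# Sketch of the Discovery portion of the flow above. The department
# lookup is a stub and the registry sub-key is a placeholder.

function Get-DynamicTagData {
    # Gathers the per-device dynamic data. Replace the $Dept stub with
    # your own department source (AD attribute, custom WMI class, etc.).
    $Serial = (Get-WmiObject -Class Win32_BIOS -Property SerialNumber).SerialNumber
    $Dept   = "UNKNOWN"
    Return @{ Tag3 = $Dept; Tag4 = $Serial }
}

function Get-ComplianceResult {
    # Pure comparison: returns the string the Compliance Rule looks for
    # ("YES" when every expected value matches, otherwise "NO").
    Param(
        [Parameter(Mandatory=$True)][hashtable]$Expected,
        [hashtable]$Actual
    )
    If (-not $Actual) { Return "NO" }
    ForEach ($Key in $Expected.Keys) {
        If ($Actual[$Key] -ne $Expected[$Key]) { Return "NO" }
    }
    Return "YES"
}

# Deployed per-device in the system context (commented out: needs a
# live client with $CUSID already resolved):
# $Expected = Get-DynamicTagData
# New-PSDrive -PSProvider "Registry" -Name "HKU" -Root "HKEY_USERS" | Out-Null
# $Item = Get-ItemProperty -Path "HKU:\$CUSID\Software\Policies\Contoso\TelemetryDemo" -ErrorAction SilentlyContinue
# $Actual = If ($Item) { @{ Tag3 = $Item.Tag3; Tag4 = $Item.Tag4 } } Else { $null }
# Write-Host (Get-ComplianceResult -Expected $Expected -Actual $Actual)
```

The Remediation script would reuse Get-DynamicTagData and write the same values back with Set-ItemProperty.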

So Can You Use CIs Instead of GPOs?

Although you need to do a little additional work to accomplish the elevated privileges for the Current User context that GPOs give you natively, there isn’t anything that a GPO can do that one or more CIs contained and deployed within a CB can’t do. As we see above, there are things that CIs can do that GPOs can’t, or at least can’t do as efficiently. On the flip side, GPPs do offer easy graphical methods for accomplishing some tasks that would require you to write scripts to configure in a CI. Services are a good example of this.

Should You Use CIs Instead of GPOs?

There are scenarios where a GPO is absolutely the best option, especially if you’re using a GPO to mandate settings globally or to a large population of objects. But for targeted administration, CBs are a much better option. Consider these reasons:

Role Based Administration (RBA)
If you follow the principles of Least Privilege, then you are familiar with setting up users not as Domain Admins, but instead placing them in various security groups and granting them delegated rights to various OUs. AD doesn’t make it easy to track who has rights where, and an admin may find themselves frequently making permission changes to accommodate active GPO deployments. As RBA in ConfigMgr is straightforward to configure, and access to User and Computer objects is not confined to AD locations, CBs become easier to develop and deploy than GPOs, with significantly reduced administration and much more visibility and tracking.

Testing
If you want to prove out a GPO, most environments have GPOs created and deployed to test OUs, devices moved into those OUs, and once finalized, either the new GPO is linked to a production location or its settings are duplicated in an existing GPO. Client-side, testing can get convoluted as you need to weed through all the GPO settings applied to the device and user, and then track down errors using the minimal logs available. With CBs, you can easily deploy to a test Collection, use the Configuration Manager applet to trigger the CB individually as needed, quickly view action reports on the client, review the native logs in real time, and even have your CIs generate their own logs. When ready to go into production, just deploy to the desired collection.

Centralized Reporting
With GPOs, you simply don’t have a dashboard view showing which computers applied your GPO and which didn’t. CBs give you that immediate overview, plus granular identification down to the specific CI setting that did or didn’t apply, and why.

Micro Targeting
WMI Filters and Security Filters on GPOs can go a long way in targeting Computers or Users, but CBs can be deployed on the fly to any collection of Users or Computers that match any query-able condition, or that are specifically identified by an administrator. No changes within Active Directory needed.

Complex Settings and Remediation
GPOs are really designed for static and simple configurations. GPPs help a bit with the ability to add conditional logic to items, but don’t address dynamic data and can’t really make decisions based on the local environment. CBs can be as simple or complex as desired, by making use of both the built-in configuration options as well as the more advanced scripting environment.

Less Overhead
Computers have to slog through the evaluation and possible application of every GPO visible to them. So if you have 10 GPOs, all with WMI Filtering and/or Security Filtering, and your computer can see all of them but will only process one, it still has to evaluate each of them every time GPOs are processed. That processing tends to happen at boot and user logon. With CBs, you’re specifically targeting devices, so devices only evaluate the CBs specific to them. CBs are also processed during the normal functioning of the ConfigMgr service, so you don’t take the hit that comes with startup processing.

The last reason I’ll point out is that you have ConfigMgr in your environment, probably already controlling application delivery, software updates, scheduled routines, and asset inventory. Maybe you’ve already dived into device management, Internet-based client management, and license tracking and management. You’ve identified administrators whose job it is to manage the life-cycle of your computers. Application and environmental configuration of those computers falls squarely in that realm, and leveraging the tool built exactly for that purpose simplifies your administrative model and improves the efficiency of administrative management.


A senior architect with over 16 years of experience in desktop design, delivery and production management. 14 years of law firm-centric experience in developing, integrating, and implementing robust and full-featured desktop solutions, focused on solid Microsoft and Microsoft partner platforms crafted to deliver an optimal fit for the environment.
