As I’m escaping the windowless lab a client has set up for me and seeking refuge in a well-lit cafe for lunch, I chance upon a colleague who works for a national consulting firm. I invite him to join me, and after the initial pleasantries we start discussing our current projects. To our mutual amusement, we’re both working on Configuration Manager 2012 (ConfigMgr) implementations with desktop deployment components. It is at this point that my colleague shares that he is in a bit of a battle with his client over the installation of user-assigned applications. User Device Affinity (UDA) is implemented, ConfigMgr Application and Package objects are assigned to User collections, and users do automatically obtain those applications when they log onto a managed computer. The argument turns out to be over building new computers: even when the intended user is configured for UDA association on the computer being built, under my colleague’s OSD configuration those applications are not installed during the build task sequence, but do drop down when the user first logs on. The client would like the applications installed during the build process instead of making the user wait for the desktop to finish building, or having the builder pre-log on as the intended user after the build. When I asked my colleague what his response to the client was, he just shrugged and said "that’s just the way it is".
As I wanted to keep lunch pleasant, I neither disagreed with my colleague nor mentioned that his client was right. My opinion has always been that desktop deployment, be it Lite Touch (LTI) or Zero Touch (ZTI), is only complete if the end result of the automation is a computer you can hand to the target user and have them use it right away. No 12-page post-build to-do list for the builder, no logging on and waiting while an application eventually installs – just build and go. Unfortunately, I hear my colleague’s story more times than I care to admit, which surprises me, as I also hear his client’s request on almost every OSD project I’ve been on.
In this post, I’ll cover a quick and easy way to automate the installation of user-specific apps into your build. The nice thing about this approach is that it works with your existing ConfigMgr environment, allowing you to keep whatever deployment and/or UDA structure you already have set up intact.
This is the shopping list of items that are needed to implement this solution. Some you will already have, some you may need to implement, and with the exception of ConfigMgr, none are going to cost you.
- ConfigMgr 2012. This solution is also possible with SCCM 2007 (or even just plain MDT), but my descriptions will match any ConfigMgr 2012 version (RTM, SP1, & R2) although I am personally using R2.
- Microsoft Deployment Toolkit (MDT) 2013. If you're below ConfigMgr 2012 R2, you may still have MDT 2012 Update 1. This is completely fine. If you don't have MDT at all in your environment, now is a great time to add it, as it includes many useful features not in ConfigMgr. The entire how-to for MDT is beyond the scope of this article, but as long as you have it installed on your ConfigMgr server, you've run the "Configure ConfigMgr Integration" component of MDT, and you've taken the initial step of creating a Deployment Share, you're where you need to be for this article.
- A server running IIS that you can add a web site to, port unimportant. I don't recommend this being on your ConfigMgr server.
- A Web Services (WS) web site. If you've no background in web development, don't freak out here. In an article I posted, A QuickStart Guide to Using Web Services in MDT / SCCM, I cover obtaining a pre-configured WS site, or using Visual Studio Express (VSE) to create your own. The article shows its age a bit by referencing VSE 2012, but VSE 2013 is also available for free and you can use that. The steps to creating a basic WS page are all in that article, and example code for the functions is in this one. My suggestion here is to keep the .NET Framework version low. Nothing the WS site does for this article requires any cutting-edge features, and as you're also dealing with IIS, set the WS to use .NET Framework v2 or v4 depending on your IIS capabilities (shoot for 4). Also, be careful of your architecture. Make note of whether you're developing a 32- or 64-bit WS, as you'll need to know this for IIS configuration. This article assumes that your WS will be 64-bit. My WS in this article is also ASP.NET written in VB.NET, and has the VS-created App_Data folder included.
- A Service Account which you will use to provide the generic access for your WS site, connection back to MDT, and read access to the Active Directory. A pretty generic standard user can be used, and it doesn't need the ability to log on locally, so this can be a relatively low-privilege account.
As you’re reading this article, I’m assuming you already have ConfigMgr set up with at least one OSD task sequence. Of course, during development you won’t want to use your production task sequence.
The key component we’re using from MDT is its database. If you haven’t set one up in your current environment, you’re just an easy wizard away from having one (within the Deployment Share, under Advanced Configuration, right-click Database and select New Database). Best practice says not to install it on the same SQL instance as ConfigMgr. That said, I’ve had plenty of clients use the same SQL instance for ConfigMgr, MDT, and WSUS. Usage and capacity planning are big components in your decision of where to put the MDT database in production.
Once you have implemented your MDT database, you will need to alter its security by adding the service account mentioned earlier. If you’re only going to use the WS to read data from the MDT database, then "db_datareader" is all you’ll need. If you think you may eventually use web services to write as well, add "db_datawriter".
At this point, all you care about is a "Hello World" function, so if you’ve created just a blank WS site, we’re good for now. It’s more important to get the security set correctly. I personally like to keep my WS site as stand-alone as possible and not dual-purpose (combined with a self-help portal, KB site, etc.). As such, my configuration reflects this isolation.
- The NTFS folder that will hold my WS site is configured to allow my service account full control over all the files and folders. If you're concerned with security, you can adjust as necessary, but keep the security on App_Data to at least Modify so you can write error logs for troubleshooting (I don't cover that in this article, but it gives you a good platform for later).
- I use a dedicated Application Pool, configured with the matching .NET Framework version, and set as Integrated for the Managed pipeline mode.
- My advanced settings disable 32-bit applications, and have the "Load User Profile" set to false. No other configuration changes.
- For the WS site, you can use the default port 80, or select another port. This article uses port 80 (which means you won't see a port reference for the rest of this article).
- For the WS Site, under IIS settings, Authentication needs to be configured to allow Anonymous Authentication, and ASP.NET Impersonation.
- Use your service account as the Specific User for ASP.NET Impersonation
- You need to define the connection string between your WS site and the MDT database (this can also be configured manually within the web.config file). Under the WS Site, in the ASP.NET Connection Strings section, you can either explicitly configure the SQL connection, using the service account as the credentials, or create a custom string:
- Data Source=[SQLServer];Initial Catalog=[MDTDatabase];Trusted_Connection=Yes;MultipleActiveResultSets=True
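If you prefer editing web.config directly, that same custom string lands in the connectionStrings section; the entry below is a sketch, and the connection name "MDTDB" is just my example:

```xml
<connectionStrings>
  <!-- [SQLServer] and [MDTDatabase] are placeholders for your own server and database -->
  <add name="MDTDB"
       connectionString="Data Source=[SQLServer];Initial Catalog=[MDTDatabase];Trusted_Connection=Yes;MultipleActiveResultSets=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

Because Trusted_Connection=Yes is used, the connection runs under the impersonated service account rather than SQL credentials, which is why that account needed db_datareader on the MDT database earlier.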
Let’s spend a minute talking about the methodology, then we’ll get into the gears. First off, we need to talk about the Active Directory. Despite its name, an Organizational Unit is not really for organizing beyond the absolute physical basics, such as location or object type (user, computer, etc.). It is certainly not your foundation for associating a job title with your users. Web-based services over the last decade have proven that tags are better than folders when it comes to organizing almost everything. Don’t put that vacation picture of you and your significant other in a folder called "Cancun"; tag it instead with "vacation", "Cancun", "Hangover Pics", etc. so you can find it multiple ways. Same in AD: yes, that computer may reside in Florida, so you put it in a "FL" OU, but that user, Pat, is not only a receptionist but also your accounts receivable, accounts payable, and office manager. As you can’t have Pat’s AD User object in multiple OUs, keeping Pat in a generic "Users" OU under your "FL" OU is great – just don’t further box it in by trying to put it into a sub-OU called Receptionists or Office Managers. Obviously "folders" is the metaphor for OUs; "tags" is the metaphor for Security Groups. Put Pat into a Receptionist, Accounts Payable, Accounts Receivable, and Office Manager security group.
ConfigMgr reinforces this. Although you have the full ability to use the legacy AD properties, such as OU name, to create collections, User Groups are now the go-to thing for easy user collection definition. Creating a collection that includes all users in your Receptionist security group is as easy as finding the Receptionist security group within the ConfigMgr "All User Groups" collection, and adding it to a new collection. Done in less than 10 clicks with minimal typing.
So with the above said, in general production, let’s say that you need to deploy an application to all users in a specific department. Each of those users should have that application available everywhere they log on. Simple enough to make this happen: create a collection that targets the appropriate user group(s), create an Application object (or Package object), and deploy it. When a user within the security group/collection definition logs onto a computer, the Application (or Package) is installed.
Usually, it takes more than one app to satisfy the needs of a user’s position, so it is certainly possible that you will have multiple applications deployed to the same user group collection. Back in the good ole days when employees were all single-focused (OK, maybe only in the eyes of IT developers!), this grouping of various applications to make up a user position’s software requirements would have been called a "Role". It was not uncommon then to have "Role"-based computers, where all the applications to meet the needs of a single role would be installed at time of build. But that was computer-based and really single-user focused, both of which are concepts quickly fading away. What happens when your user is in multiple "Roles", or that computer is shared by multiple users?
So realistically, your multi-purposed employee who requires apps to satisfy many "Roles" has a deployment flowchart where many deployments converge on a single user. Although you may be lucky enough to be in an industry that requires few apps, trust me when I tell you that I can’t remember the last time I walked into a client’s office and the application list had fewer than 200 lines on the spreadsheet. The tons-to-one scenario is an absolute reality. Now go back to deploying a new computer with 20 core apps: do you really call it "done" when, the first time Pat logs on, 20 more apps are delivered?
(You’re thinking we’ve "wandered" a bit off topic – keep reading, we haven’t.)
ConfigMgr 2012 introduced User Device Affinity (UDA). New to ConfigMgr, not really new to anyone else who deploys workstations. With it, either ConfigMgr via observation of user behavior, a systems administrator via the ConfigMgr console, or the user themselves at the local computer can define a relationship between a specific computer and the user. The principle here is that you’ve taken the mass of Applications/Packages that are deployed to a user and preemptively told ConfigMgr where that user is going to log on (over-simplified – UDA has many other aspects that I am well aware of, but which are not applicable to this post). Given that, if ConfigMgr knows that Pat is assigned to Machine A, then given enough time, the apps deployed to the security-group-based collections Pat belongs to are installed onto Pat’s machines ahead of time. But not a controllable "ahead of time", which is the problem faced in deployment: "ahead of time" never happens during an OSD task sequence.
What is controllable is discovery of the user assigned to the computer being built. If UDA has already been defined beforehand, then it is a quick query of ConfigMgr. If it hasn’t then it is a quick query of the administrator building the computer. Once you know the user, you can then discover what the user needs.
Avoid the rabbit hole. I’ve re-read this a few times, and I can see that you might be thinking that the solution is to query ConfigMgr with the known user, find what collections they are in, find what Applications or Packages are assigned as required deployments, and then track back the Application or Package/Program to the required identifier needed to install it within the task sequence, and go from there. If you’re ever bored, try that. You’ll abandon it right quick. Don’t forget that you can have multiple users associated with a single device. There’s a lot of math there.
This is where we bring in the MDT database. There is an object (what is rapidly becoming vestigial) within Deployment Share\Advanced Configuration\Database called Roles. The principle is simple: define a role, define settings for that role, define Applications/Packages for that role. If you’re MDT-only, then within CustomSettings.ini, the Gather task finds the info associated with the Role, based on the role name you pass to it. Short of jumping through hoops, you’re doing a one-to-one with the native tools. But if we throw out the concept of Roles and start thinking about AD Security Groups, then a new path is clear: create a "Role" named the same as the Active Directory Security Group, and add the Application and Package objects associated with that group to the MDT Role object. We then have a list of applications/packages to install whenever we encounter a user that is in an associated AD Security Group, and the database becomes a many-to-one.
Yes, there is extra work with this approach. You must maintain a listing of Applications/Packages you want associated with an AD Security Group both in ConfigMgr as well as MDT. But there are benefits to this approach, the biggest being that within MDT, you can define the order in which applications are installed (per Role). The second is that obviously, you’re obtaining the Applications/Packages list without any user being logged in, and thus it can happen without direct user interaction.
Let’s bring it home:
- The Task Sequence discovers the intended user(s)
- For each UDA assignment, discover the AD Security Groups that user is in and combine into a general list
- Query the MDT database, and for each matching MDT Role, look for Applications and add to a List
- Query the MDT database, and for each matching MDT Role, look for Packages and add to a list
- Dynamically install found Applications
- Dynamically install found Packages
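The combine step in the second bullet is just a case-insensitive de-duplication. A PowerShell sketch (the function name is my own), assuming each UDA user's groups arrive as a string array:

```powershell
# Merge per-user group lists into one case-insensitive, de-duplicated list.
# Lower-casing up front avoids "A" vs "a" comparison problems later on.
function Merge-GroupLists {
    param([object[]]$Lists)   # an array of string arrays, one per UDA user
    $combined = New-Object 'System.Collections.Generic.List[string]'
    foreach ($list in $Lists) {
        foreach ($group in $list) {
            $lower = $group.ToLower()
            if (-not $combined.Contains($lower)) { $combined.Add($lower) }
        }
    }
    ,$combined.ToArray()
}
```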
Through the magic of ConfigMgr, when the build process is complete and the installed Application/Package objects are evaluated against the intended user object at logon, no further action is taken: the build is complete upon delivery, and the user is free to get to work without further waiting. None of this depends on the target user ever logging on during the build.
As I mentioned before, there is an administrative effort involved in keeping the MDT database "Role" up to date with the required ConfigMgr Application names and Package/Program combinations. I’ll not lie, it takes remembering to update both ConfigMgr and MDT as things are added, removed, or an Application name is changed. You also must be very aware of ConfigMgr object configuration: make sure package/program and application objects are allowed to be installed via task sequence without being deployed, or you’ll encounter failures.
Technically, you don’t need the UDA info to make this all work. You can just ask the builder at deployment time, use the MDT database to associate a user ID with the specific computer via the Computers section, or some other method. But when done right, using ConfigMgr has its advantages. Here are my thoughts:
- If you let ConfigMgr automatically assign users, you will be let down. ConfigMgr determines affinity based on usage, but that requires the device to have already been a ConfigMgr client with that user active on it prior to deployment. If you opt to not disrupt a user by building them a second computer and then swapping it for their existing one when the build is complete, there's no UDA. This obviously doesn't work for new computers out of the box either.
- You can assign UDA via the ConfigMgr console, but this is a forethought action. If you don't make the association in advance, and make a habit of continuing to keep the UDA associations up to date, you can get odd results if UDA for a device contains legacy users. There is also more work for new computers out of the box as you would need to first import the computer object and then make the UDA assignment.
- Use ConfigMgr UDA in conjunction with asking for a target user(s) at build time. This is my favorite option, although you need to make other arrangements for any ZTI build. In short, if a UDA exists, present it to the builder as the "default", but otherwise the builder specifies the user account which UDA should be assigned. The ConfigMgr task sequence will make the back-end assignment during the build for you.
Another reason I prefer asking the builder when possible is that obtaining the assigned UDA vs. the "top users" from ConfigMgr seems programmatically difficult. To demonstrate this, in an environment where UDA is administratively defined and not automatic, from the ConfigMgr console select a well-used device and "Edit Primary Users". You will see the users of the device in the last 90 days, and any assigned primary user(s). If you haven’t assigned one, do so. Then drop to PowerShell, use the Get-CMUserDeviceAffinity cmdlet, and specify the same device. In my case, I consistently get both the highest-login-count user and the assigned primary user. This is obviously incorrect, and if that information had been used during my build, I would have incorrectly installed software.
For the record, I have a WS function that makes a SQL query against ConfigMgr to determine what the correct UDA list is. My query is below and I welcome anyone with a better solution to comment it here (please) because I don’t love my solution.
WQL to find Machine ID:
SELECT ResourceID FROM SMS_G_System_PC_BIOS WHERE SerialNumber=@SerialNumber
Then SQL to find the UDA list (I return just the unique user names; the joins let you filter on relationship type if you need to):
SELECT DISTINCT UMR.UniqueUserName
FROM v_UserMachineRelationship AS UMR
INNER JOIN Users AS U ON U.FullName = UMR.UniqueUserName
INNER JOIN v_UserMachineTypeRelation AS UMTR ON UMTR.RelationshipResourceID = UMR.RelationshipResourceID
WHERE UMR.MachineResourceID = @ResourceID
Next, make sure that all the packages and applications you plan on deploying during the build have been configured to allow deployment from a task sequence:
- [Application] Properties\General Information\Allow this application to be installed from the Install Application task sequence action without being deployed
- [Package]\[Program] Properties\Advanced tab\Allow this program to be installed from the Install Package task sequence without being deployed
Prep the MDT Database
The link between MDT and the Active Directory is creating MDT Roles whose names match AD Security Group names. Specifically, cn names. I know this isn’t a very stable link, in that a single change to the AD Security Group’s cn property breaks the link to MDT (it would not be that monumental of an administrative effort to program a process by which the AD is monitored for changes and the MDT database updated accordingly, but that is beyond the scope of this article), but it is the easiest human-readable property of the group that you can use. So for each department defined by an AD Security Group, say "All Receptionists", "Office Managers", and "Accounts Payable", create a like-named Role in MDT.
Some of you may be wondering about nested groups: my user is in "Dallas Receptionists", which is a member of "All Receptionists"; if I just target "All Receptionists", will my user get it? It’s true that a direct query of the user object only brings back the immediate groups the user object is in (sample PowerShell code below):
(Get-ADUser -Identity [UserID] -Properties MemberOf).MemberOf
No worries, you don’t need to create a zillion roles to accommodate this. Just the top level group you want to target. We have easy code to resolve this issue.
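If you want to check from PowerShell which groups a nested membership really resolves to, AD’s matching-rule-in-chain filter does the walk server-side. A sketch, with a function name of my own:

```powershell
# Build an LDAP filter that matches every group a user belongs to, directly or
# through nesting. The OID 1.2.840.113556.1.4.1941 (LDAP_MATCHING_RULE_IN_CHAIN)
# tells the domain controller to walk the membership chain for you.
function New-NestedGroupFilter {
    param([string]$UserDn)
    "(member:1.2.840.113556.1.4.1941:=$UserDn)"
}

# Usage (requires the ActiveDirectory module and a domain connection):
# $dn = (Get-ADUser -Identity [UserID]).DistinguishedName
# Get-ADGroup -LDAPFilter (New-NestedGroupFilter $dn) | Select-Object -ExpandProperty Name
```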
Next, within each of the MDT Roles you’ve created, assign the ConfigMgr Packages and/or Applications you want associated and automatically installed for users within the respective groups. Don’t worry about overlap; if a target user is a member of multiple groups that contain the same Package/Application, we’ll programmatically ensure that the final lineup is unique. Do make sure that you include any dependent Packages/Applications as well as the main one in the lineup, and that the order within each group is as required by the installations. Also ensure all your packages don’t reboot after installation. If they do, we won’t be able to recover during the build.
Deploy the Web Service
Before I go into this, I need to discuss environmental setup and some gotchas associated with how I've implemented it. In all the environments where I set this up, web services play a big role. I use WS during the build task sequence to handle various data exchanges and lookups, during the installations of some software, during the device's production lifecycle to grab up-to-date data for users, as well as for other automations outside of desktop management. My described configuration in this article makes it easy to handle AD, SQL and ConfigMgr transactions by running everything under a specific user context. For almost everything I do, this works flawlessly. But it has the problem of a Kerberos double hop, which comes into play when performing some AD activities. The easiest work-around for the nested group issue I mentioned above is one of those activities. So when setting up your environment, you may find that the web service fails to complete the action below when run "normally". The solution is Impersonation. Although it is beyond the scope of this article to fully explain, the short of it is that we can use Impersonation to run part of the function under a single hop instead of a double, thus eliminating the issue. Microsoft has a nice KB article on it that goes so far as to actually provide you the code needed to create the impersonation class. It doesn't take a lot of time to implement, and saves you a lot of hair-pulling. My code shows it, but you may find that you don't need it.
The difference in pulling Package and Application data from the MDT database comes down to two little pieces of information: the SQL view name and the SQL field. If you’re pulling information on Packages, the MDT view is "RolePackages" and the field in question is "Packages". For Applications, it is "RoleApplications" and "Applications". I point this out now because the same Web Services code is used for both, differing only in the SQL call. My sample code below takes that into consideration.
Discovering the User’s Group Membership
With a user's network ID, we can use the .NET System.DirectoryServices.AccountManagement class to obtain the list:
- Lines 02 and 23 define the block in which we're going to run our Kerberos-sensitive action. It uses the same service account that the IIS Application Pool is using for impersonation. It uses a class I've added to the project called Impersonator, which is pretty much a copy & paste from the MS KB cited above. Again, if you don't need this in your environment, you can remove these two lines.
- Lines 03, 04 and 06 obtain the user via the logon ID from the Active Directory. This object type is different from a DirectoryEntry and will allow us to better obtain the groups list.
- Line 08 uses the GetAuthorizationGroups method to obtain all the Principals of the groups that the user is either a direct or indirect member of. It is this call that lets us limit our MDT Role to just "All Receptionists" even though the user is only a member of "Dallas Receptionists", which is a member of "All Receptionists".
- Lines 10 to 23 are slightly more convoluted than they should be, but they fix a known issue with the GetAuthorizationGroups return. You could just loop the group Principals and (if they have one) note their name (the 'name' property is the cn property), but there is a known bug in the class: if anywhere along the chain a group was deleted in AD while a reference to it still exists in the membership of other groups, the deleted group's SID cannot be resolved and the code crashes. So what this block does is loop the individual elements and, if an element has a cn value that we haven't already added to our list of user groups, record it. Looping the elements works where looping the Principals does not because with the elements, the SID-to-cn resolution happens at the moment you access each element, whereas with the Principals, resolution for all objects happens at the first object access. You can catch the former when it errors, but not the latter, hence the workaround. Line 16 is a bit of overkill as well, as the returned list is distinct. But one never knows, and it is always easier to catch a possible error than to troubleshoot crashing code. As you can see on line 21, if an element's SID-to-cn translation fails, we don't care.
- On line 16, you may also notice that everything goes to lower case. The default comparison is binary, not text, so this resolves "A" not being equal to "a".
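The pattern those bullets describe translates to PowerShell as well; this is a sketch of the safe element-by-element loop (the helper name is my own), with the actual directory lookup shown in comments since it needs a domain:

```powershell
# Walk an enumerable of group principals one element at a time, recording each
# resolvable Name (the cn) in lower case, once. An unresolvable entry (e.g. a
# deleted group's orphaned SID) only throws when its element is touched, so a
# try/catch around the access lets us skip it and keep going.
function Get-SafeGroupNames {
    param([System.Collections.IEnumerable]$Principals)
    $names = New-Object 'System.Collections.Generic.List[string]'
    $enum = $Principals.GetEnumerator()
    while ($enum.MoveNext()) {
        try {
            $cn = $enum.Current.Name
            if ($cn -and -not $names.Contains($cn.ToLower())) {
                $names.Add($cn.ToLower())
            }
        } catch { }   # SID-to-cn translation failed; we don't care
    }
    ,$names.ToArray()
}

# In production the enumerable comes from the directory (requires a domain):
# Add-Type -AssemblyName System.DirectoryServices.AccountManagement
# $ctx    = New-Object System.DirectoryServices.AccountManagement.PrincipalContext('Domain')
# $user   = [System.DirectoryServices.AccountManagement.UserPrincipal]::FindByIdentity($ctx, $UserId)
# $groups = Get-SafeGroupNames $user.GetAuthorizationGroups()
```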
At the end of this call, we now have a list of all the group names that user is directly or indirectly a member of. Next, we want to get a list of the defined MDT groups:
- Nothing unusual here. We connect up to the MDT database and grab a distinct list of Role names. Your SQL query (line 36) needs to pull distinct values because there is nothing in MDT that keeps you from having duplicate role names. MDT doesn't use the Role name as the identifier, so it is up to you to keep the names distinct. Also note that the SQL query needs to pull from the correct view: RoleApplications or RolePackages. In this manner, you only get the list of Role names that actually have Package or Application data associated with it, reducing your processing time.
- Line 40 just records the distinct Role names. You may also notice that everything goes to lower case. The default comparison (line 39) is binary, not text, so this resolves "A" not being equal to "a".
Now we have our list of groups the user is in, and the list of MDT roles that have Packages or Applications associated with them. Next, we need to pull the Package or Application data from MDT.
- We only care about the groups that the user is a member of. So, in line 53, we only loop those. In line 54, if the user group was discovered in the MDT Roles list, we process it, otherwise we skip.
- Lines 55 to 66 do the heavy lifting. Query the MDT database for a list of Packages or Applications associated with the Group/Role name, and if unique, add to the list of Packages/Applications.
- Note that the SQL query on line 51 sorts the returned data by Sequence. This is important: you have already put your installations into the required order via the MDT interface, and this keeps that order.
- Also note that we're going to return an ordered list of Packages/Applications. The order comes not from the group membership itself, but from the object installation sequence. Because the Package/Application list contains only unique entries, a prerequisite install listed in both Role A and Role B will be listed when the first of the two is processed, but not again when the other is. As the duplicate was already placed higher in the list, it doesn't need to be installed again. Remember that you can't control which Role is processed first, so ensure each MDT Role listing has all the required installations.
This leaves you with a list to return. Your Web Services function should be configured to return an array of string values, so your last line needs to convert the List(of String) to an array:
Obviously, there are going to be many situations which the above code sample doesn't handle: Users without any group membership, no matching AD Groups to MDT Roles, and no Packages/Applications to return. It is not hard to detect this during each of the steps, and the code only has to return Nothing in that event to work.
When you test your Web Services function, you will get something similar to this:
<ArrayOfString xmlns:xsi="[URL]" xmlns:xsd="[URL]" xmlns="[URL]">
<string>Application One</string>
<string>Application Two</string>
</ArrayOfString>
Configure Your Task Sequence
Our task sequence is going to reach out during the build to our Web Services URL with the network ID of the target user(s). How that user is identified to the build process was discussed above. Where it is stored in the task sequence, however, is important. A task sequence variable called SMSTSUdaUsers is used to store the user information that ConfigMgr will eventually convert to a UDA assignment. It is from this variable that we can extract the single user, or comma-separated list of users. For each user defined, we make the call to the web service, get the returned list, and process the results.
PowerShell makes obtaining web services information very easy, so let's look at the script logic:
- Line 01 connects to Task Sequence environment, required to get and set variable data.
- Line 02 creates the web services object that we'll need to make our web call. I've provided a dummy URL; your address and web services page name will differ.
- Line 05 will loop through our list of DOMAIN\USERID users. This structure will work whether there are no users within the variable, a single user, or multiple users.
- Lines 06 and 07 are the "heavy lifting" lines. They make the actual WS call. In my example here, my web service has two functions, one for obtaining the ConfigMgr packages and one for the ConfigMgr applications. The WS code for each function just calls another internal function with the correct SQL table and field names, leaving me less programming on the client. I also pass the name of the discovered user. My return is going to be an array of strings (or nothing if the user didn't have any objects associated with any of the groups they are in).
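The start of that logic can be sketched as follows. Splitting SMSTSUdaUsers is the testable piece; the COM object and web-service calls (commented out) show how it sits inside the task sequence, and the URL and function names there are placeholders for your own:

```powershell
# Split the SMSTSUdaUsers value into individual DOMAIN\USERID entries.
# Handles an empty value, a single user, or a comma-separated list.
function Split-UserList {
    param([string]$Value)
    @(($Value -split ',') | ForEach-Object { $_.Trim() } | Where-Object { $_ })
}

# Inside the task sequence (the URL and WS function names are placeholders):
# $tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment
# $ws    = New-WebServiceProxy -Uri 'http://server/UserApps.asmx'
# foreach ($user in (Split-UserList $tsenv.Value('SMSTSUdaUsers'))) {
#     $apps = $ws.GetUserApplications($user)   # your WS function name here
#     $pkgs = $ws.GetUserPackages($user)       # your WS function name here
# }
```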
Our task sequence is going to use two different task objects to accomplish our installation: an "Install Applications" and an "Install Packages" task. For both, instead of defining a specific application or package/program, we're going to select the other option: "Install [software packages|applications] according to dynamic variable list". If you're not familiar with how this works: instead of using the normally selected application or single package/program, the install task looks for a task sequence variable with the specified base name plus an incremental numeric suffix. For example, if your base variable is called USERAPP, the task sequence will look for a task sequence variable called USERAPP01 and, if found, will install the application defined therein. It then looks for USERAPP02, and if found, installs. It continues this cycle until a query for the next variable returns nothing. So if you had 8 applications defined in USERAPP01 through USERAPP08, when the task sequence looks for USERAPP09 and finds nothing there, it concludes the task and moves on to the next task in the task sequence.
Our script's next task is therefore to convert the returned array of applications and package/programs into the task sequence variables.
- A quick note about the function defined from line 10 to line 23. When using a dynamic variable list for Package/Programs, ConfigMgr continues the format defined in SCCM 2007 of BASEVAR###. For applications, ConfigMgr changes to BASEVAR##. A little annoying, but not a huge deal. As you have to pad the index with leading zeros anyway, the function makes adjustments depending on how many zeros are required.
- The routines defined in line blocks 25 to 30 and 31 to 36 do the same thing: add a task sequence variable with an incremental index number containing the Application name or Package/Program ID to the task sequence.
$SMS.Value("UDAAPPS02") = "Application Two"
$SMS.Value("UDAAPPS03") = "Application Three"
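For reference, the padding logic described above can be reduced to a small helper (the function name is my own); applications get a two-digit suffix, package/programs three:

```powershell
# Build a dynamic-variable name with the correct zero-padded index:
# applications use BASEVAR##, package/programs use BASEVAR###.
function Format-DynamicVarName {
    param(
        [string]$BaseName,
        [int]$Index,
        [int]$Digits    # 2 for applications, 3 for package/programs
    )
    $BaseName + $Index.ToString().PadLeft($Digits, '0')
}
```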
Not immediately obvious in the snippet is that if we did not obtain any lists for applications or packages, the task sequence variables are not created. This is important because if you use one of the install tasks with a dynamic variable defined and that variable doesn't exist, the task sequence fails on that task. You could configure the step to continue on error, but that is too general a catch-all: it would also let genuinely failed installations slip through without stopping the build. How to handle it is below.
Within your task sequence, simply configure three new steps:
- Run PowerShell Script
- Specify the name of the PS script that has our routine
- Specify the package the PS script is in
- Configure the task condition to only run if the task sequence variable SMSTSUdaUsers exists
- Install Application
- Use the dynamic variable option, specifying the base variable you created and set in the PS script (example: UDAAPPS).
- Configure the task condition to only run if the task sequence variable defined as the base variable at the first index exists (example: UDAAPPS01)
- Install Packages
- Use the dynamic variable option, specifying the base variable you created and set in the PS script (example: UDAPKGS)
- Configure the task condition to only run if the task sequence variable defined as the base variable at the first index exists (example: UDAPKGS001)
A quick note. If you're not at the R2 level of Configuration Manager, make note of KB 2913703, as it impacts installing dynamic applications. Also, the native Run PowerShell Script task only exists in R2; on earlier versions, use the Run PowerShell Script action from the MDT action list instead.
As we’ve seen, the process is relatively straightforward. Determine the user(s) for a computer, discover what AD groups they are in, discover what applications should be installed for those groups, and then install the applications via a dynamic install action. Like most automation, the bulk of the work is administrative: configuring the MDT Roles to match AD security group names, populating the Roles with the appropriate installation objects, and maintaining both the Role names and the applications references therein. The heavy processing is done server side by a web service function, removing the bulk of the work from the client and resulting in a reduced task sequence run-time. The dynamic nature of the process also greatly reduces the size and complexity of the task sequence.
The needs of the target environment ultimately shape how this implementation is done, and it is also up to the implementer to know how much additional automation and environmental monitoring is needed. If you’re already using MDT to dynamically install items such as location-specific applications or Make & Model specific utility packages, then you’ll find that this process isn’t that dramatically different.