Patching Standards or Objectives

When you deploy the monthly security updates, what success rate is your goal? Sure, we'd all like to patch all of the machines every month, but that usually isn't practical. As in many areas, the last one percent can cost as much money and resources as the first 99 percent. The last one tenth of one percent might cost as much as the first 99.9 percent. What standard should be "good enough" for your organization? You should try to make an objective analysis and decision about this. That will help you decide how to handle the real-world variations we face each month.

Risk Levels
The first consideration must be the risk levels associated with different updates and categories of devices. If an update addresses a vulnerability you can't protect against through other means, such as a firewall, and it could do serious damage, your standard for that particular update might well be 99% or higher. That's especially true if exploits already exist. You want to get that update behind you, and add it to the build process so future machines are protected.
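Some teams make this standard explicit rather than ad hoc by encoding it as a simple decision rule. A minimal sketch in Python, assuming severity labels like those in Microsoft's bulletins; the labels and threshold numbers here are illustrative assumptions, not figures from any real organization:

```python
def target_patch_rate(severity, exploit_exists, mitigated_otherwise):
    """Return a minimum acceptable patch success rate (percent) for an update.

    severity: "critical", "important", or "moderate" (illustrative labels)
    exploit_exists: True if working exploits are known to be circulating
    mitigated_otherwise: True if a firewall or other control blocks the attack
    """
    if severity == "critical" and exploit_exists and not mitigated_otherwise:
        return 99.0  # serious damage is possible and likely: near-total coverage
    if severity == "critical":
        return 97.0
    if severity == "important" and not mitigated_otherwise:
        return 95.0
    return 90.0      # lower-risk updates can tolerate a few stragglers
```

A DMZ host or wire-transfer workstation would simply override whatever this returns with 100%.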

Systems in your DMZ have far greater exposure to risk than other devices, so the standard for those probably should be 100%. They should be patched at the first opportunity, after testing the patches on similar machines in the lab to ensure they won't break normal operations.

Your organization may have other machines that are so critical that nothing less than 100% is acceptable. In some organizations that might include all servers. In others it might be the DCs, Exchange, and database servers. You might also have workstations that are so critical you don't want to take any risk with them. The wire transfer department in a bank comes to mind, along with workstations that perform high-value financial operations in any organization. What could happen if a hacker got control of a workstation in your Call Center? What confidential information could they get at? What about the computers belonging to senior executives?

Laptops are a special category unto themselves. They are commonly exposed far more than desktop machines inside your firewall, and patching them can be a major challenge. Yet, in many companies, they are the primary source of infections getting into the corporate network.

What resources do you have available for patching? What else do you need them for? Normally, only the largest companies will have staff dedicated to patching; more often, the same people also perform application deployments and other activities. What resources can you pull in, at what cost or impact, in case of an emergency? This must be thought through in great detail so managers can make informed decisions on the trade-offs.

The trade-offs are likely to vary in different months. One month you may only be 80% patched because of an issue with an Office patch, so there's a high value to added time spent on patching. Another month may have a co-worker out sick, so the value of time helping with deployments is greater. In that case you might return to patching after the co-worker returns.

There might be staff in other teams or departments who could be borrowed to help with an emergency that required 99% patching in a short time. They must be people who already have the necessary security permissions and familiarity with the tools they would use, such as psexec.

You should have a feel for how many of the normal application deployment staff are needed for installations that can't be delayed, even for emergency patching. If you take some of the others, what will be needed to get caught up with their normal duties? If they'll need to work overtime, will you compensate them in some way? If you don't, they will resent patching if that occurs more than extremely rarely.

What are the opportunities for using automation, new products, and so on to improve how many machines can be patched without manual effort?

If you use SMS, the biggest part of this is probably client health. Anything you can do to minimize client issues will help patching success rates. Just detecting and handling issues throughout the month, instead of during patch deployment, can make a huge difference in what can be accomplished with the available resources. Scripts available from myITforum and Dudeworks can help with this.
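One way to catch client problems during the month, rather than on patch night, is to flag machines that have stopped reporting. A hedged sketch, assuming you can export each client's last successful report date from your inventory tool; the function and field names are hypothetical:

```python
from datetime import date, timedelta

def stale_clients(last_report, today, max_age_days=7):
    """Return machines whose last inventory report is older than max_age_days.

    last_report: dict mapping machine name -> date of last successful report.
    Machines flagged here can be repaired before deployment, not during it.
    """
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, seen in last_report.items() if seen < cutoff)
```

Running this weekly and chasing the short list it produces spreads the client-health work evenly across the month.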

If you have a significant number of machines that can't be patched with your automated tools, you should have an active project to resolve whatever issues exist. If these are machines where the users just say no, see my blog Patching "exception" workstations for more detail on this subject. If the problem is the OS or the applications running on the machine, someone should be responsible for rebuilding the machine, replacing the applications, or otherwise resolving the issue. Senior IT management should be made aware of such machines, the plans for eliminating the issues, and the risks associated with any that won't be resolved within the near future.

Remote machines, including traveling laptops, home offices, and small offices, may present other issues. There are many possible solutions, such as logon scripts or products like 1e Nomad. Sometimes the solution might be simply developing processes that minimize the manual effort required. As an example, consider a company with a small remote office served by a WAN link that can't handle patch downloads during working hours. You could create a scheduled task that copies the patch files to one machine after hours, then, once the file copies are verified, a task on each PC there that runs the updates the following night, followed by a reboot. That would be far more efficient than patching each machine individually after hours using psexec or some other method.
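The staging approach above can be sketched as a simple plan generator; the payoff is that the WAN link carries each patch file once instead of once per PC. The names and data shapes here are hypothetical, a sketch of the process rather than a deployment tool:

```python
def staged_plan(staging_pc, office_pcs, patch_files):
    """Two-night patch plan for a bandwidth-constrained remote office.

    Night 1: copy each patch file over the WAN to one staging PC.
    Night 2: every PC installs from the staging PC's local share, then reboots.
    Returns both task lists plus the number of WAN transfers avoided.
    """
    night1 = [("wan-copy", staging_pc, f) for f in patch_files]
    night2 = [("lan-install-and-reboot", pc, staging_pc) for pc in office_pcs]
    wan_transfers_saved = len(patch_files) * (len(office_pcs) - 1)
    return night1, night2, wan_transfers_saved
```

For a ten-PC office and five patches, the WAN carries five files instead of fifty; the verification step between the two nights keeps a failed copy from triggering ten failed installs.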

The best way to attack this is to start by analyzing the machines that aren't successfully patched through your automated tools, and developing categories. Then identify the categories that can be solved most easily and work on them. At the same time, get appropriate people working, or at least thinking, about solutions to the other categories.
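That analysis can start as nothing more than a tally. A minimal sketch, assuming you can list each unpatched machine with a failure category (the category labels are examples); ranking by count is only a proxy for where to start, since "most easily solved" still takes judgment:

```python
from collections import Counter

def failure_categories(machines):
    """Rank failure categories by how many unpatched machines fall into each.

    machines: iterable of (machine_name, category) pairs, where category
    might be "agent broken", "offline laptop", or "user refuses".
    The biggest buckets are often where a single fix pays off most.
    """
    return Counter(category for _, category in machines).most_common()
```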

It's vital to make sure that the appropriate IT and business unit management, and the security team, understand the standards. You never want to face machines being compromised and someone saying, legitimately, "but I thought you were patching all of the computers." This needs to be a deliberate decision, made in consultation with the other people affected.

Overall, this is a cost-benefit and risk management analysis much like managers do all the time. The real dangers come from having to make rush decisions because things weren't considered in advance, or from having to explain why 2 percent of the workstations were attacked successfully because they weren't patched. Proper planning and communication can minimize those risks.


Published Thursday, July 19, 2007 8:00 PM by spruitt

