May 2007 - Posts
Summary: computer management is a multi-disciplined art form that is very computer intensive. You need a lot of applications open at any time to do the job right. How do you keep an eye on all of them? With the ultimate Command Central, of course! Your own up close and personal bridge of the Enterprise.
I did a fair bit of web searching to find some kind of monitor arm that would support 3 LCD displays of a good size. Everything I found was fairly expensive ($500 to $1000) and they were very ambiguous as to how they would get the monitors to exactly the position I wanted, without being attached to a wall, and in a solid manner that would survive a Seattle earthquake.
So I figured it was time for a do-it-myself project. For about $250 it's done. And dozens of my fellow Microsoftie computer geeks have paid it great compliments - everyone loves it (most of all me, since I get to benefit from it). Ergonomically it's sweet. Full computer management sense-surround precisely where I want it. The desk is freed up for tons of papers (which I keep a lot of in my office, as you can see). No ugly piles of phone books or sheaves of paper to try to get the monitors to about the right height (and from which monitors will topple with the mildest earthquake or similar emergency).
About 5 people have called it "scary", as in something from a science fiction movie ("Brazil" was mentioned a couple of times). The picture below may explain that. Another bunch of people have suggested I should start a business selling these things. I can dig being entrepreneurial, but I don't want to get too far away from my first love - computer management. So I've decided to 'open source' the details - anyone that wants one can build it themselves.
BONUS DISCLAIMERS: I'm not a mechanical engineer so I can't guarantee that it will survive an earthquake. Or that it won't fall on your head someday. Or fall over in the middle of the night and break your beautiful monitors into a million pieces. It's also labor intensive to build it just right (say 2 man-days, and some brute force) and it takes some mechanical improvisation (so don't be surprised if you get it half built and then find you can't figure out how to complete it).
So here's the basic approach to building your own:
measure out the basic dimensions of where you want the monitors - height, depth, and spread. You won't get it exactly right (reality differs from theory), but this will serve as a starting point for your shopping list
go to the hardware store's plumbing section. Find the galvanized steel piping. I recommend 1/2 inch dimension, but that's based on gut feel. Smaller might work and bigger might look more robust. Get:
1 long section (about 4 feet) to be the main piece from the ground, past the desk surface, to the high point
a 4 way joint
a 3 way joint
4 long pieces suitable to the dimensions you measured. A shortish (8"?) one to the central monitor, two long ones (18"?) to the sides, and a medium one (12"?) to be the center support. Chances are you'll be surprised that you need to make some dramatic adjustments from your original plan, so buy a few extra of various lengths.
two 45-degree angle joints to connect from the center bar to the long sections going to the side monitors
a bunch of short sections of various lengths (shortest to about 3 or even 5 inches). You use these to fine tune the actual lengths to what feels comfortable. They're cheap and you'll need more than you expect, so get a bunch (10?). Otherwise you'll be making a bunch of trips back to the hardware store
about six 90-degree angle joints. These make the right angle connections from the long pieces to your monitors. You can also lower or increase the height of any arm by throwing two of these in the middle (with a suitable length from item 6)
about 5 connectors. These are short pieces that are 'female' on both ends. They connect straight pieces to other straight pieces and allow you to fine tune lengths using combinations of the above pieces.
5 floor mount pieces (those are the ones that attach to the monitors, but you'll also need one for the center support at the point it contacts the desk, and a similar one at the floor)
2 wood press clamps. I'm not sure that's the proper name, but they're usually used to clamp wood pieces while gluing them together. One such clamp is the orange pieces in the picture of sample pieces. One half of each 'binds' to the pipe and the other threads onto the end of a piece like your 4 foot pipe. The problem here is that you'll have to use it in the middle of the pipe (to clamp the monitor stand to the desk). So the binding part works well but the other part has to slide over the pipe. So you'll need a half inch clamp set and a 3/4 inch clamp set. More about this later. (But if you can just drill a hole in your desk and screw the base to the floor, then you won't need the clamps and you'll have an even more solid stand.) And the clamps are about $30 each, so this is your second biggest expense
a package of 1/4" by 2" bolts. You'll need two bolts, and they probably get the most stress of all the items in the stand, so the stronger the better (automotive quality would be ideal, but I couldn't be bothered to go hunting for such bolts).
a package of at least nine 1/2" long bolts (with nuts), of any small size. They don't get much stress so any size will work.
some very solid bungee cords, preferably black so they're not obvious
for the finishing touch, you'll want cable wraps to go over the pipes and to hide the monitor and power cables. I suppose you could paint or tape instead, but the cable wraps seem to give the right "scary" effect.
now go to a TV store and get 3 wall mounts for flat panel displays. You just want the basic ones that would allow you to mount the display directly to a wall - no pivots or anything fancy. I got mine at Sears but have seen them elsewhere. They're something like $35 each, so they're easily the biggest expense.
Assembly time! Screw all the pipe pieces together using the right dimensions for what is comfortable for you. Do a bit of drilling and bolting. Clamp it to the desk. Attach the monitors. Cover with cable covers and you're done!
ok, it's not quite that easy, but you're going to have to experiment to get the placements just right, so there's not much point in me being very prescriptive
when attaching the floor stand pieces to the flat panel wall mounts (see the rear-end picture above), you'll find that only one hole matches. So you'll have to drill two holes. The metal is soft, so that's easy
attaching the clamp to the longest pipe at the desk level is one of the trickiest steps. You'll have to drill the 3/4 inch half of the clamp, so that you can put the 1/4 inch bolts through the clamp and the pipe. The clamp is rounded and fairly solid metal so that will take a few minutes but any ordinary drill and drill bit will do the job. When you clamp that pipe to the desk you'll find that you can apply a lot of pressure (and thus get a great hold on the desk) but that's putting a lot of pressure on the bolts. So make sure they're well placed.
the clamp handle is too wide for tightening in the middle of the pipe so you'll have to use a hacksaw to shorten it up. Then use an adjustable wrench to tighten the clamp
for the right and left monitors the vertical force is partially against the threads of the pipes. That's probably a confusing statement, but it means that when first mounting the left and right monitors you may find they sag as one of the pipe joints pivots on its axis. Basically the threads aren't 'biting' enough. So you'll have to turn the pipe around that axis one full rotation. You've probably already tightened them pretty well, so that last rotation is going to be hard, especially since the fully assembled stand gets quite awkward
attach the bungee cords at the lowest level of the main vertical pipe (the one behind the desk) to the desk legs. Make them as tight as possible. I think of them as a backup solution in case the clamp ever gives. The monitors would cause the stand to fall forward and possibly to either side. The bungee cords would be a counter force to hopefully keep the stand from crashing to the floor (but I don't guarantee that)
Hmmm. That actually sounds hard and complicated. But you're a techie - you do 5 hard and complicated things every day before breakfast. If you really want to build your own monitor stand but need more details, let me know and I'll add some more.
p.s. Given the reaction so far, this project qualifies as TechSexy, but I'm not going to label it as such because I'm using that word in the narrower sense of meaning exciting technical SCCM improvements.
Summary: my biggest surprise and challenge in my years in Microsoft IT is how often I explain and debate the meaning of "client health". What's worse, people rarely agree with me. But I'm persistent, and seeing the data and the challenges up close gives me confidence. So let's start with those key definitions.
My key point is that "client health" means different things to different people. Or different things to the same people in different situations. As with reporting, you first have to understand what question is being asked.
Examples? Well, if I'm doing a software distribution, I want to know how many clients should run my 'all systems' advertisements by next Thursday. For software distributions I rarely need complete success immediately - a week or two is quite reasonable. But in a large production environment am I going to hit 100% in that time? No. I'm going to hit all the online machines, less the broken clients. But how many computers is that in that timeframe? It's going to vary from organization to organization. For example, at Microsoft we have lots of test machines and mobile users. So historically I know that 75% of my clients are available in one week. 85% in two weeks. 100% in 30 days (at that point we purge them from the SMS database). So if I target 100,000 clients today, I should expect 85,000 clients to run the ad by two weeks from today. For software distribution, 85% of my clients are "healthy".
If I'm doing patch management and I have an emergency patch deployment to do, then 2 weeks is too long. I want to patch my whole environment by tomorrow. Again I won't get 100% of known clients, mostly because a lot of them will be offline for various reasons. And that's ok, because an offline machine can't be abused and it won't abuse other computers. But I still need to know how many machines are "healthy" in the next day to be able to judge the success of my patch management efforts. At Microsoft IT, that measure of client health is 55% of all known clients. If we patch 55% of known clients in 24 hours then we can be quite confident we've patched 100% of currently vulnerable clients, and that's all anyone can hope for (see my previous blog posting for the full story of patch mgmt at MSIT).
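Those availability rates turn into concrete targets with trivial arithmetic. Here's a sketch in Python using the MSIT numbers above - substitute the rates you actually observe in your own environment:

```python
# Historical client-availability rates (the MSIT numbers from above);
# your organization's rates will differ.
AVAILABILITY = {
    "24 hours": 0.55,   # emergency patching window
    "1 week":   0.75,
    "2 weeks":  0.85,
    "30 days":  1.00,   # clients absent 30 days are purged from the database
}

def expected_successes(targeted_clients, window):
    """How many clients should have run the advertisement by the given window."""
    return int(round(targeted_clients * AVAILABILITY[window]))

print(expected_successes(100000, "2 weeks"))   # -> 85000
print(expected_successes(100000, "24 hours"))  # -> 55000
```

Anything materially below those expected numbers suggests a client health problem rather than normal offline machines.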
Patch management and software distribution are key computer management services, so they're an obvious place to start for "client health". But another obvious "client health" definition is "broken clients". Computer management software breaks like any other software (or anything else) and we should fix it. In fact, broken computer management software is trickier because it's less likely that the users will report the issue. If Outlook breaks, users will either fix it (if they know how) or call the helpdesk - they won't tolerate not having e-mail for long. But how long will it take them to notice their computer management software is broken? (this depends on how many options they have to get software, and how often they need to change it). Maybe the users would rather stay with 'old faithful' and thus will be glad it's broken. So we techies have to go looking for broken clients - we can't wait for escalations.
How do we find broken clients? Well, inspecting each one is not likely to be feasible. Instead we'll look at the server for clients that haven't reported activity lately. But 'clients that haven't reported activity lately' ("unhealthy clients" in the broad sense) will include machines that are offline and machines that are reporting to broken servers. So to find broken clients we're going to have to eliminate server-side issues and off-line clients. Those are non-trivial challenges, but well worth solving when we want to fix broken clients (as opposed to maximizing the success of patch management, software distribution, or asset management).
A further wrinkle is that computer management clients don't always break completely. They might report inventory and status messages fine, but not run software distributions or patch management. Are those broken clients or not? If you're an asset management admin, they're not broken, but they are broken if you're a software distribution or patch mgmt admin.
That should be enough variations on "client health" but there's another key story: "client coverage" or "client reach". If you don't have the computer management client on every computer in your organization, are you managing everything as well as you should? Certainly not. But getting the client on every computer is a very different problem from keeping the client active. The effect is the same - machines are not patched when they should be, for example. So it's not an unreasonable part of the "client health" story, but from your technical point of view it is very different. So is it in or out of the client health story?
Someday someone will develop a nomenclature to bring clarity to this complex "client health" story. In the meantime, when someone asks about client health, be sure to ask them what problem they're trying to solve. Or if you're presenting a client health story or report, specify which problem you're addressing.
p.s. Next time we'll look at the client health strategies we use at Microsoft IT, before we get into the queries and tools that are the meat of client health management. We've talked about client coverage a couple of times, but mostly in the sense of counting. We'll have to get into the 'completeness' story someday as well.
Summary: if you have standalone SMS management points, you may be using SQL Server replication to feed them. How do you monitor that replication? I've scripted my most useful checks, allowing quick checks on a regular basis. Typical output (for all relevant sites) is:
Admittedly this scenario won't apply to a lot of you. With modern hardware and typical SMS configurations, you probably wouldn't benefit much from standalone MPs until you get to about 5,000 clients in a site. Even then you might have the MPs get their data directly from the site server's database. But around 10,000 clients for a site you would want to use such standalone MPs with their own SQL Server database. And above that you pretty much have to use them. (Those observations are my personal opinions, not official Microsoft scalability or consulting recommendations).
You might also use standalone MPs with their own database if you want to maximize your SMS uptime. In that case, your site server can crash but your clients can continue to get policy and similar data (such as location requests) from the MPs.
What does SQL Server replication do for your standalone MPs? Well, it automatically copies all new data from relevant tables in your site server database to a separate SQL Server database on the MPs themselves. That can happen as often as you like, but at Microsoft IT we do it every 5 minutes. Then when your MPs need any data to satisfy client requests they look at the local replicated database, rather than the site server database. That offloads a lot of work from the site server (since the requests are more frequent than changes), and gives the redundancy that is important to disaster planning / failure mitigation.
Setting up SQL Server replication to the standalone MPs is a minor art in itself, and I refer you to the product documentation for those details. In a perfect world the replication would work perfectly forever once you set it up, but my experience is that isn't always the case. So monitoring replication is important, and that's where I hope I can help today.
You can monitor replication fairly simply by using SQL Server Management Studio. Its Object Explorer has a Replication node from which you can launch the Replication Monitor which will give you key details. That's fine if you're reacting to a particular rare issue, but if you want to proactively monitor all your big sites several times per day that's going to get very tedious and time consuming. So a little vbscript will help a lot:
Set oServer = CreateObject("SQLDMO.SQLServer")
oServer.LoginSecure = True 'Windows authentication
'for each of your replicated MPs:
'if that connection worked:
for each publisher in dist.DistributionPublishers
for each publication in publications
for i=1 to results.rows
server = results.GetColumnString(i, 1)
status = CInt(results.GetColumnString(i, 2)) 'convert to a number so the status comparisons below work
if status=0 then statusstr="pending"
if status=1 then statusstr="starting"
if status=2 then statusstr="succeeded"
if status=3 then statusstr="running"
if status=4 then statusstr="idle"
if status=5 then statusstr="retrying"
if status=6 then statusstr="failed"
database = results.GetColumnString(i, 3)
lastaction = results.GetColumnString(i, 6)
lastactiontime = results.GetColumnString(i, 7)
'8th column is start time
'9th column is duration in seconds
delivery_rate = results.GetColumnString(i, 10)
latency = results.GetColumnString(i, 11)
transactions = results.GetColumnString(i, 12)
command_count = results.GetColumnString(i, 13)
I'm not giving you the full code there, for the usual reasons, but the parts I'm not giving are the easy bits that any SMS scripter does all the time (looping, formatting, error handling, and finding MPs from the SMS database).
IMHO that's a pretty good start toward easily monitoring replication, but I'm not satisfied. In particular, I'm all about the results - I don't care who tells me things are working well: if it's really working, show me the successfully replicated data. Which replicated data changes the most frequently? My guess is that the data in the ResPolicyMap changes most often. That table tells us which policies are applicable to each client. So if any collection changes, or a new advertisement comes out, or any similar change happens, then the ResPolicyMap will be updated. And that stuff happens often enough that it will serve this purpose. So I get my script to run the following query against the site server and its MPs. If the times are within the replication frequency (5 minutes here at MSIT), then replication must be working. (BTW, we have been burned where the above code told me that replication was fine but this query told me the data was as much as 2 weeks old, so it really is worth doing both checks).
select max(LastUpdateTime) from ResPolicyMap
I won't give sample code for how to run SQL queries from vbscript - that's a common subject covered in lots of vbscripting sources. If you don't agree, let me know (firstname.lastname@example.org) and I'll do a blog posting on that topic.
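However you fetch the two timestamps, the freshness comparison itself is just a threshold check. Here's the idea sketched in Python (the function name is mine, and I allow a little slack for a replication run in progress):

```python
from datetime import datetime, timedelta

def replication_is_fresh(site_server_max, mp_max,
                         frequency_minutes=5, slack_minutes=5):
    """True if the MP's newest ResPolicyMap row is within the replication window.

    site_server_max / mp_max are the max(LastUpdateTime) values from the
    site server database and the MP's replicated database respectively.
    """
    lag = site_server_max - mp_max
    return lag <= timedelta(minutes=frequency_minutes + slack_minutes)

now = datetime(2007, 5, 20, 12, 0, 0)
print(replication_is_fresh(now, now - timedelta(minutes=3)))   # -> True
print(replication_is_fresh(now, now - timedelta(days=14)))     # -> False (the case that burned us)
```

The slack matters: a check that fires exactly at the replication frequency will occasionally flag a healthy MP whose run is mid-flight.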
The only other thing left to check is whether replication thinks you have as many MPs at the site as SMS thinks you have. From the above code you know how many MPs replication knows about. For the number of MPs with replicated databases you can use the following code:
'get the site control file, as well described in the SMS SDK. Then:
set resources = wbemServices.ExecQuery( "select * from sms_sci_sysresuse where sitecode = '" & mpsite & "' and RoleName = 'SMS Management Point'" )
for each resource in resources
MPname = resource.itemname 'and parse out the excess bits
PropArray = resource.props
for i=0 to ubound( PropArray )-1
If PropArray(i).PropertyName="SQLServerName" and MPname<>siteserver Then
expected_MPs( numMPs,0 ) = mpName
numMPs = numMPs + 1
p.s. To use the SQLDMO object you'll need to have the SQL Server Enterprise Manager console installed on your workstation. And if your workstation is a 64-bit machine, you'll need SQL Server 2005's SP2 as well (there was a bug in the RTM version of SQL Server 2005 for SQLDMO on 64-bit machines).
It's hard to do computer management without WMI. For a lot of us that means we use CIM Studio on a regular basis. There are alternatives, such as wbemtest, but my experience is that for ongoing WMI work CIM Studio is your best bet (a blog topic in its own right, I suppose). You can get CIM Studio by going to http://www.microsoft.com/downloads and getting "WMI Tools"
But what if you have a 64-bit machine? CIM Studio will complain that it doesn't recognize which WMI build you're using. It's really saying that it doesn't know what to do with CIM Studio (developed in 2002) on a 64-bit machine. There's an easy workaround - create a new shortcut:
"C:\Program Files (x86)\Internet Explorer\iexplore.exe" "C:\Program Files (x86)\WMI Tools\studio.htm"
To complete the experience, you can change the icon to:
%ProgramFiles(x86)%\WMI Tools\Studio.ico
Everyone's favorite webcaster, Wally Mead, is going to do a webcast presentation for SP3 on Tuesday, June 19th, 1:00pm PDT (Pacific Daylight Time / North America west coast time). He would have liked to have done it sooner but the webcast facilities are fully booked until then.
SP3 especially adds Vista support and Asset Intelligence data collection and reporting. Wally's webcast will likely cover upgrade stuff (not really much new at all), what’s new, and really concentrate more on the AI data collection, new inventory providers, and reporting. But he's writing it as we speak, so the actual content could vary a bit.
We don't have the URL or any similar details for you just yet. Please check back at this blog posting for any updates (and I'll likely post a refresh posting close to the date as a reminder). Or at the usual webcast announcement locations (I'm afraid I don't know what those are, but they must be out there).
UPDATE 5/22, thanks to Chris Stauffer: http://myitforum.com/cs2/blogs/cstauffer/archive/2007/05/22/sms-2003-sp3-webcast.aspx
BTW, there's more than you might think that goes into these webcasts, so my hat goes off to Wally for pulling these together for everyone's benefit. For details, see http://www.webcastmagic.com/ - especially "the pitch", "casting session", and "magic time", as seen on the first page. But Wally is also on the "meet the stars" page.
In case you hadn't noticed, the SMS 2003 management pack was released today in SCOM 2007 format:
p.s. Why does an MP get to the web within hours of officially RTMing (the popup turkey thermometer popping, as I say), while SP3 took days?! Could it be because a weekend came between SP3 RTM and its web arrival? Do the web gods really value their weekends that much? Slackers!
Summary: why do we enjoy computer management? One of my biggest answers is the joy of data analysis. Computer management involves a ton of diverse data, so the opportunities for data analysis are practically unlimited. That's part of why my joy continues unabated after 12 years of SMS work, and nearly 20 years of computer management.
I talk to a lot of computer techies, and occasionally the topic of retirement comes up. For a surprising fraction of techies (surprising to me), they can hardly wait for retirement, even if they're young. I have trouble understanding that. Why are they in a field they don't enjoy? Why don't they enjoy IT? How can they look forward at most of their life and not be enthused? Obviously a big part of the problem is that they're missing out on the joy of data analysis!
My earliest SMS memory is poking around the SMS database and finding the central table. It linked to all the other tables, directly or indirectly, making SMS 1.x possible. That was my 'eureka moment' for SMS. I'm not sure I can entirely explain it, but at that moment I knew SMS was the right product for this stage of my career.
In many ways data analysis is like exploring the world. Columbus, Cook, Kirk, and a large number of other explorers could relate, at least to the general principle. You have a general idea of what the world looks like, but until you poke around enough you don't know what you're going to find. Sometimes you find obscure inconsequential details, and other times you find whole new opportunities that will rock your world. When you build up a good map of your world you can accomplish great things.
In the case of SMS, exploring the data can tell you a lot about what your organization is all about. Where are the computers? What do the users use? What kind of data do they create? You can see a lot in what is actually out there. Theory is one thing, your data is the truth (if properly collected and analyzed).
But SMS data is also often about SMS itself. How fast did the patches get out? How successful are the software distributions? What client versions do you have? When is your inventory data being loaded? On and on it goes.
Then there's the joy of reconciling data. If one report says one thing, and a similar report says something different, why is that? That will inspire you to better understand your data (and your reporting queries). If you can't explain the discrepancies, then you're missing part of the story, and quite probably important parts of the story.
If you look at my blog, you'll see evidence throughout of my enthusiasm for data analysis - so many queries, observations about data, insights about views and tables, etc. Hopefully I'm not overdoing that side of my take on computer management. But at least now you'll understand why I hit that side of things so often.
p.s. In the spirit of full disclosure, the truth is that I enjoy analysis generally. But analysis beyond data analysis is problem analysis, and that means you have to start with data collection (for good problem analysis, IMHO). I do plenty of that but it's more work so it's not as much fun as data analysis. And on rare occasions you can properly analyze problems without data, but only in cases such as philosophical problems, and unfortunately that work doesn't pay well. Of course you could improperly analyze real problems without good data, but who knows what kind of results you'll end up with? So data analysis is the best form of analysis. Bring on the data!
Summary: one of the joys of working at Microsoft IT is that we get to talk to customers frequently about what we do (and yes, I am serious when I say it's a joy). That happens in various forums, on various topics, on almost a weekly basis (on average). The most popular topic over the last couple of years has been patch management.
The following links are to our latest patch management presentation. It's primarily about SMS 2003 patching, but includes the overall picture of patch deployment within Microsoft (excepting MSN, off-the-radar labs, and similar corner cases). It's from the fall of 2006, soon after we implemented ITMU2, so some things have changed but it's still fairly accurate.
- Main presentation – you have to sign in but that then leads to the live presentation, or the PowerPoint slides, or the offline viewing download
- WMA – voice-over only (podcast, I suppose you could say) (sign-in not required)
- MP3 – same thing but for devices that don’t support WMA (sign-in not required)
- PowerPoint slidedeck direct download (sign-in not required)
Of course we're dogfooding (beta testing) System Center Configuration Manager 2007 and thus learning plenty about SCCM's Software Update Management feature. We look forward to sharing those observations. We haven't set the date for when that will happen, but I imagine through the summer and the fall we'll do it in various ways.
Our presentations are not so much guidance or best practices as observations based on our experiences. People often find them useful to guide their thinking or watch for issues that might not have occurred to them previously. We like to think we've thought through the issues thoroughly (and tested our theories on our 250,000 challenging clients), so hopefully they have a lot of validity. And we work with the SMS product team very closely to investigate issues deeply. But every customer is different, so what works for us won't always work for everyone else. So I encourage you to use these materials (much like this blog) as food for thought, and then apply the ideas as appropriate to your business needs and environment.
Update: Apparently even this old one is still useful to people.
Summary: computer management is bound to be complex, as I often say, and so SMS is a very busy system. Many subsystems, at multiple levels, process a lot of data to make it all happen. Most of that processing is easy to monitor, but historically that has not been true for one of the most central processes: collection evaluation. Fortunately, SCCM 2007 fixes that problem.
Most of the SMS subsystems can be monitored very effectively with performance counters: for example the data loader, discovery data manager, senders, etc. Or supporting subsystems such as SQL Server processing, IIS activity, CPU consumption, disk queues, etc. When all else fails you can watch the SMS inboxes - if performance is not adequate the backlogs will become obvious, sooner or later.
The exception to these rules is collection evaluation. Collections are central to software distribution, patch management, and operating system deployment. You may even use them when reporting inventory data or doing other SMS operations. Yet how do you know whether the collections are being evaluated efficiently? Prior to ConfigMgr, you had to watch the colleval.log, which is obviously not an effective ongoing strategy.
In particular, you'll find that v_Collection has a new column in SCCM: EvaluationStartTime. You already knew when the last refresh finished (even if no changes were made): LastRefreshTime. A little SQL calculation gives you the evaluation time for the collections.
select Name, datediff(ss, EvaluationStartTime, LastRefreshTime) from v_Collection order by 2 desc
It's as simple as that. But from there you can analyze the results to better manage SMS. In particular, what are your longest running collections? Today I had one that took 3337 seconds - practically an hour! That one needs some work.
But you'll also find that the same query takes different amounts of time on different days. Why is that? Because collection evaluation is very dependent on other factors. What else is locking the data the collection needs? How busy is the server? So improving collection evaluation time is not trivial. Be sure you watch your collection evaluation over multiple days to see if the same collections always have the problem (in which case the collections are probably the problem, otherwise it's probably server performance issues).
BTW, when I add up the collection evaluation time for all the collections on a mid-sized primary site (23,000 clients), the total adds up to about 3% of the day. So on that scale collection evaluation is not a big issue. But on a bigger site (such as the one that had the one hour evaluation), or a less efficient site, collection evaluation management can be worthwhile.
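That percent-of-day figure is simple arithmetic on the datediff results. A quick sketch in Python - the durations here are invented sample values, not my site's numbers:

```python
# Per-collection evaluation durations in seconds, i.e. the datediff values
# from the query above (sample numbers invented for illustration).
eval_seconds = [3337, 1200, 800, 500, 120, 60]

SECONDS_PER_DAY = 24 * 60 * 60   # 86400

fraction = sum(eval_seconds) / float(SECONDS_PER_DAY)
print("%.1f%% of the day spent evaluating collections" % (100 * fraction))
# -> 7.0% of the day spent evaluating collections
```

If that fraction stays in the low single digits, collection evaluation probably isn't your bottleneck; if it climbs much higher, the long-running collections are worth tuning.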
If you want to know the average collection evaluation time on your site, you can use this query. The big difference is that it filters out the collections that aren't set to re-evaluate automatically (they probably didn't evaluate today, for example, so they aren't examples of today's evaluations).
select avg( datediff(ss, EvaluationStartTime, LastRefreshTime) ) from Collection_EvaluationAndCRCData colleval join collections coll on colleval.collectionID=coll.collectionID
On a related topic, you can look at when your collection evaluations occur throughout the day. If you find that you likely have collection evaluation problems due to excess server activity, then maybe you should move the collection evaluation activity to a different time of day (by changing their collection evaluation schedules). This uses the same technique as my previous blog posting about analyzing activity over time.
select datepart(hour,EvaluationStartTime) 'hour', round(datepart(minute,EvaluationStartTime)/15.0,0) '1/4 hour' into #temp
from Collections coll join Collection_EvaluationAndCRCData colleval on coll.collectionID=colleval.collectionID
where datediff(ss, EvaluationStartTime, LastRefreshTime)>=0 and NextRefreshTime>getdate()
select hour+[1/4 hour]*.25 'hour of the day', count(*) 'evaluations' from #temp group by hour, [1/4 hour] order by 1, 2
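The quarter-hour bucketing that the SQL does can be sketched outside the database too. Here's the same logic in Python (the sample times are invented):

```python
from collections import Counter
from datetime import datetime

def quarter_hour_bucket(t):
    """Mimic the SQL above: the hour plus the nearest quarter-hour as a fraction."""
    return t.hour + round(t.minute / 15.0) * 0.25

# Count evaluations per quarter-hour from a list of EvaluationStartTime values
starts = [datetime(2007, 5, 1, 2, 7), datetime(2007, 5, 1, 2, 12),
          datetime(2007, 5, 1, 14, 50)]
histogram = Counter(quarter_hour_bucket(t) for t in starts)
print(sorted(histogram.items()))   # -> [(2.0, 1), (2.25, 1), (14.75, 1)]
```

Note the rounding quirk (shared with the SQL's round(...,0)): minutes 53 and later round up to the next whole hour's bucket, which is harmless for spotting busy periods.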
Note: in some cases this blog post contradicts the usual cardinal rule of not querying against database tables (you should use the views instead).
Summary: SMS is highly dependent on SQL Server, so you probably watch your databases to ensure they're not getting too big for your disks. But do you know what's taking up all the space in those databases? You might be surprised.
In my biggest site (well, the biggest site for the Microsoft IT SMS team, but I'll be possessive for the moment), the biggest category of SMS data is status messages, at 54%! That's followed at quite a distance by the hardware inventory data with 13% (and 4% for hwinv history). 6% for discovery data, and 4% for collection memberships and software metering (each). The rest are pretty small, but we do have some big scratch tables that I'll have to look at cleaning up.
How do I know all that (and more) about my databases? Using this query:
SELECT so.name,
8 * Sum(CASE WHEN si.indid IN (0, 1) THEN si.reserved END) AS data_kb,
Coalesce(8 * Sum(CASE WHEN si.indid NOT IN (0, 1, 255) THEN si.reserved END), 0) AS index_kb,
Coalesce(8 * Sum(CASE WHEN si.indid IN (255) THEN si.reserved END), 0) AS blob_kb
FROM dbo.sysobjects AS so JOIN dbo.sysindexes AS si ON (si.id = so.id)
WHERE 'U' = so.type GROUP BY so.name ORDER BY so.name
Credit for that query goes to: http://www.dbforums.com/t996740.html. I found it long ago by doing an Internet search. In fairness I barely understand how it works, but it gives me good results so who am I to complain?
SMS has a lot of tables (1,842 at my biggest site), so that query returns a lot of cryptic results. How do I use the results? By massaging them with vbscript, of course. I'm checking on my options for sharing scripts, but this is not a difficult script for you to write (or your favorite vbscript expert). The trick is to total the sizes for similar tables. For example, the *_DATA tables are the hardware inventory tables. _CPY_OfferManager* tables are the collection membership tables. You'll see the patterns, and vbscript can easily add up the totals. Then you output the results in a friendly format, with percentages, and you will have a great report of what is filling up your databases.
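The massaging script is just pattern matching and summing. Here's the idea sketched in Python - the sample sizes are made up, and only the *_DATA and _CPY_OfferManager* patterns come from above (the StatusMessages pattern is illustrative, not exhaustive):

```python
import fnmatch

# (category, table-name pattern); extend as you spot patterns in your own site
CATEGORIES = [
    ("hardware inventory", "*_DATA"),
    ("collection membership", "_CPY_OfferManager*"),
    ("status messages", "StatusMessages*"),   # illustrative pattern only
]

def categorize(table_name):
    for category, pattern in CATEGORIES:
        if fnmatch.fnmatch(table_name, pattern):
            return category
    return "other"

def totals_by_category(table_sizes_kb):
    """Sum per-table KB figures (from the query above) into friendly categories."""
    totals = {}
    for name, kb in table_sizes_kb.items():
        cat = categorize(name)
        totals[cat] = totals.get(cat, 0) + kb
    grand = sum(totals.values())
    return {cat: (kb, 100.0 * kb / grand) for cat, kb in totals.items()}

# Made-up sample data standing in for the query's data_kb results
sample = {"Computer_System_DATA": 500, "Disk_DATA": 300,
          "_CPY_OfferManager1": 150, "StatusMessages": 1050}
for cat, (kb, pct) in sorted(totals_by_category(sample).items()):
    print("%s: %d KB (%.1f%%)" % (cat, kb, pct))
```

From there it's a short step to the percentage report that shows what is really filling up your databases.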
Analyzing your database usage is most useful when doing SMS upgrades. On your small or test sites you can see where the growth is and then calculate the growth on your large production databases.
Oh, and here's the output for my oldest and largest site (client count is down from its max due to moving clients over to a SCCM hierarchy). It can give you ideas for the kind of script/report you might like to create, and I suppose the numbers themselves could be interesting for comparison purposes.
p.s. How time flies - I've been meaning to blog this one since the question came up at MMS. Can you believe that's over a month ago already?