
myITforum.com

BDNA

Enterprise Mashups with SCCM - Continued

We'll continue this blog series on Enterprise Mashups with ConfigMgr Inventory. 

http://myitforum.com/cs2/blogs/bdna/archive/2011/02/15/enterprise-data-mashups-with-sccm.aspx 

In the first post, linked above, we explored some basic challenges in using ConfigMgr data. In this segment, we'll look at an automated way to overcome them, and to prepare Enterprise Mashups for data-driven decision making.

We all understand there is value to be mined from the treasure trove within our Microsoft SCCM repository. However, the amount of data is massive and can thwart many initiatives. When we do sift through our inventory, the process is both manual and time consuming, and the data is stale by the time it is put to use. Every line of business, from IT and Business Analysts to Operations, Procurement, Finance, and Risk and Compliance, would benefit from having accurate, near-real-time information with which to prioritize and make decisions.

Imagine the productivity gains, measured in time alone, from automating the normalization of your entire SCCM repository. Freed from manual queries, scrubbing, and data manipulation, Business Analysts and IT Project Managers can spend their time productively analyzing, reporting, and acting on the data. Removing the significant time investment needed to get SCCM data into a clean, usable format also lets Project Managers take more frequent snapshots of the data, and having accurate, current views empowers them to make better-informed decisions.

 

BDNA Normalize does just this. It provides an automated mechanism to mine the millions of records of SCCM data generated by your firm’s assets. The cleansed, normalized results are then published and actionable. As part of the process, BDNA Normalize also enriches the SCCM data with non-discoverable information, such as software end-of-life and asset obsolescence dates, hardware energy ratings, and physical dimensions. During normalization, BDNA Normalize also updates manufacturer names to the current market name following acquisitions or mergers (e.g., Oracle’s acquisition of Sun is reflected in the normalized data as Oracle Java, not Sun Java). For a large company, an SCCM normalization task with BDNA Normalize takes minutes to run versus the weeks or months of the manual approach.
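To make the idea of name normalization concrete, here is a toy sketch in Python. This is purely illustrative: the mapping table and function are invented for this post, and BDNA Normalize's actual catalog and matching logic are far more sophisticated than a lookup dictionary.

```python
# Illustrative sketch only: a toy canonicalization pass, NOT BDNA Normalize's
# actual catalog or algorithm. All mapping entries here are hypothetical.

# Raw publisher strings as they might appear in SCCM inventory, mapped to
# one canonical market name (including post-acquisition renames).
RAW_TO_CANONICAL = {
    "Adobe Inc.": "Adobe",
    "Adobe Incorporated": "Adobe",
    "Adobe Systems": "Adobe",
    "Sun Microsystems": "Oracle",   # acquisition: Sun -> Oracle
    "Oracle Corporation": "Oracle",
}

def normalize_publisher(raw_name: str) -> str:
    """Map a raw discovered publisher string to a single canonical name."""
    cleaned = raw_name.strip()
    return RAW_TO_CANONICAL.get(cleaned, cleaned)

raw_rows = ["Adobe Inc.", "Adobe Incorporated", "Sun Microsystems"]
canonical = sorted({normalize_publisher(r) for r in raw_rows})
# Three raw variants collapse to two canonical publishers: Adobe and Oracle.
```

The point of the sketch is the collapse: many raw spellings become one trustworthy entry, which is what makes accurate counts and clean joins possible later.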

 

Once the SCCM data has been cleansed and normalized, the fun of mashups can begin. The cleansed SCCM data enables report writers or mashup developers to define an “entity”. Entities can be defined at a broad level, such as a software publisher (e.g., SAP or Microsoft) or hardware manufacturer (e.g., Dell), or at a more granular level, such as a specific hardware make and model (Dell Latitude D620) or a specific software product (SAP Crystal Reports v9.0). The granularity of the entity is determined by the business goal of the mashup. Having normalized data lets you trust that the counts for an entity are accurate (e.g., the total number of Adobe Acrobat 7 Professional installations in our environment = 2,500). These software or hardware entities (aka objects) can have fields associated with them; for those with a programming background, think object-oriented programming. You can have a Software Publisher object with fields sourced from the normalized data, such as an array of products, total number of installations, software categorization, etc. These objects are available for reporting via your reporting tool of choice (e.g., BIRT, JasperSoft, Crystal) or via your web mashup (e.g., as a JSON object).
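To show what such an entity might look like in code, here is a minimal Python sketch of a hypothetical Software Publisher object. The field names and values are assumptions made for illustration; the real normalized schema will differ.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical entity shape; field names are invented for this example.
@dataclass
class SoftwarePublisherEntity:
    name: str
    products: list = field(default_factory=list)
    total_installations: int = 0
    category: str = ""

    def to_json(self) -> str:
        # The same shape a web mashup would consume as a JSON object.
        return json.dumps(asdict(self))

# Example instance built from normalized counts (values are made up).
adobe = SoftwarePublisherEntity(
    name="Adobe",
    products=["Acrobat 7 Professional", "Photoshop CS2"],
    total_installations=2500,
    category="Productivity",
)
```

The same object can feed a report (via its fields) or a browser mashup (via `to_json()`), which is exactly the dual use described above.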

 

Now, in terms of a mashup, it is important to delineate where the “mashing” can occur. Typically the term mashup refers to joining data in the web browser via client-side logic, because the data from the two disparate systems (one likely an external system) is locked away in separate databases and surfaced via some type of web service API. For example, you might develop a web mashup that looks up competing merchant prices from external sites (e.g., via CNET’s API) for your current inventory of software and hardware, presenting the price comparisons in an easy-to-view grid.
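Stripped of the browser plumbing, the "mash" step itself is just a join on a shared key. Here is a minimal sketch in which a hard-coded dictionary stands in for an external pricing API response; the model names, counts, and prices are all invented for illustration.

```python
# Sketch of the client-side "mash" step only. The price feed below is a
# hard-coded stub standing in for a real external merchant-pricing API
# response; all model names and prices are made up.

# Your inventory, as it might arrive from the normalized SCCM data.
inventory = [
    {"model": "Dell Latitude D620", "count": 120},
    {"model": "HP LaserJet 4250", "count": 15},
]

# External merchant prices, keyed by the same normalized model name.
external_prices = {
    "Dell Latitude D620": 899.00,
    "HP LaserJet 4250": 1249.00,
}

# Join on the normalized model name to build the comparison-grid rows.
grid = [
    {**item, "merchant_price": external_prices.get(item["model"])}
    for item in inventory
]
```

Because both sides use the same normalized model string as the key, the lookup is a simple dictionary get rather than fuzzy matching.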

 

However, working within your corporate environment (i.e., an enterprise mashup), you are able to take a data feed from another internal system and load it into the same database that houses the normalized and cleansed SCCM data. The data can then be joined at the database level via SQL joins, materialized views, new tables, etc. Since the SCCM data has been cleansed and normalized, it is in a state where it can be easily joined. A key rule of normalization is that there is only one true entry (i.e., no duplicates), so within the table of software publishers in your environment there is only one entry for Adobe. You don’t have to worry about variations such as Adobe Inc., Adobe Incorporated, etc.; the normalization process has taken care of this, which makes joining much simpler because you are only joining one-to-one. As a developer or report writer, you are not dealing with messy programming logic to handle the various deviations. For example, you could develop a quick mashup that scans the current list of normalized software products for anything flagged as prohibited software (e.g., video games, TV/media programs, peer-to-peer clients, etc.).
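As a rough sketch of that database-level join, the following uses Python's built-in SQLite. The table names, columns, and data are invented for illustration; in your environment the normalized software table would come from the BDNA Normalize output and the prohibited list from whichever internal system maintains it.

```python
import sqlite3

# Minimal sketch of a database-level "mash" against normalized data.
# Table, column, and row values are all invented for this example.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE normalized_software (product TEXT PRIMARY KEY, publisher TEXT);
    CREATE TABLE prohibited_software (product TEXT PRIMARY KEY, category TEXT);
""")
con.executemany(
    "INSERT INTO normalized_software VALUES (?, ?)",
    [("Acrobat 7 Professional", "Adobe"),
     ("uTorrent", "BitTorrent"),
     ("Crystal Reports 9.0", "SAP")],
)
con.executemany(
    "INSERT INTO prohibited_software VALUES (?, ?)",
    [("uTorrent", "peer-to-peer"), ("Solitaire", "video game")],
)

# Because each product appears exactly once per table, this is a clean
# one-to-one join: no fuzzy matching or deviation-handling logic needed.
flagged = con.execute("""
    SELECT n.product, n.publisher, p.category
    FROM normalized_software n
    JOIN prohibited_software p ON p.product = n.product
""").fetchall()
# flagged -> [("uTorrent", "BitTorrent", "peer-to-peer")]
```

The entire "prohibited software" mashup reduces to one inner join precisely because normalization guarantees a single canonical row per product on each side.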

 

 

...In the next segment, we'll look at a customer putting Enterprise SCCM Mashups to use to solve problems...

 

by John Pusey, BDNA T.A.M.

 

Copyright - www.myITforum.com, Inc. - 2010 All Rights reserved.