Matthew Long

{An unsorted collection of thoughts}

Posts Tagged ‘SCOM 2007’

SCOM – Updated Visual Studio Authoring Extensions released

Posted by Matthew on October 21, 2013

Hi Guys,

A new version of the System Center Visual Studio Authoring Extensions has now been released, featuring many bugfixes, improvements and enhancements.  One of these that I’ve long been awaiting is..

Visual Studio 2012 and 2013 are now supported!

This means all of the new VS 2012/2013 features that you've been waiting to use are now available.  I know items such as file previewing and the source control enhancements have been high on many wish lists.

Download link : http://www.microsoft.com/en-us/download/details.aspx?id=30169

You’ll need to uninstall the previous version if you’ve already got it installed, as the installer won’t perform an upgrade (at this time, I also uninstalled Visual Studio 2010, as I only had that version installed for MP Authoring, everything else I’m doing has since moved on to more recent versions of VS).

Some of the notable changes are (in no particular order):

  • Support for Visual Studio 2012 and 2013
  • Language packs other than ENU are now built correctly.
  • Picker dialog boxes now traverse Project references properly, as does the Find All references tool.
  • No more access denied error messages when trying to build a 2012 MP that has been auto deployed to a management group.
  • Import wizard can now resolve MP references inside MP bundles
  • Snippet Editor no longer displays duplicate columns when an ID has been used multiple times in the snippet template.
  • Monitor templates now set Auto resolve to “True” by default and Severity to “MatchMonitorHealth”
  • MP Context Parameter replacements can now be used in Module parameters that have non-string data types.
  • MP Simulator can now simulate Agent Task workflows
  • MPBA “Resolve” option now works correctly with generated fragment files

They’ve also changed the project templates to make it more explicit which versions of Operations Manager each project type supports, and added 2012 SP1 and 2012 R2 explicitly as new project types.  Sadly, there are still no XML templates provided for data source, monitor type, probe or condition detection modules.

VSAE Project Types

Full change list: http://social.technet.microsoft.com/wiki/contents/articles/20357.whats-new-in-visual-studio-authoring-extensions-2013.aspx

Remaining Issues:

  • MPBA Tool still doesn’t resolve Alert Strings where the default language is not the workstation language (switch your machine’s language and all the missing alert string messages will disappear)
  • No intellisense support when configuring module parameters outside of Template groups (painful when creating custom modules).
  • Go To Definition doesn’t function if you are already previewing an existing item (essentially only works when called from fragments in your projects).
  • Informational schema errors are still logged when you configure module parameters outside of a template group.
  • When creating template snippets, intellisense still only provides snippet specific options, which can make authoring snippets challenging.

Happy Authoring!


Switch a Visual Studio Authoring Extension MP project between SCOM 2007 and SCOM 2012

Posted by Matthew on February 8, 2013

A couple of times now I’ve started a new MP project in Visual Studio using the Operations Manager Visual Studio Authoring Extensions, and about 45 minutes into my development realized I’ve created the wrong project type.  There have also been a few occasions where requirements change and it makes more sense to change the project from a Core Monitoring (2007) management pack into a 2012 “add-on” pack.

It’s actually quite a simple change to make, providing you know which files to edit.

Project File

  1. Open the .mpproj file that represents your management pack project (NOT the solution file).
  2. Locate the MpFrameworkVersion element in the first PropertyGroup section, and modify it according to the version you want the project to build:
    1. For 2012, use <MpFrameworkVersion>v7.0</MpFrameworkVersion>
    2. For 2007, use <MpFrameworkVersion>v6.1</MpFrameworkVersion>
  3. If downgrading from 2012 to 2007, under the ItemGroup element containing the MP Reference elements, remove any references to MPs that don’t exist in 2007.
  4. Save the file.
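For reference, here's a minimal sketch of roughly what the relevant part of the .mpproj file looks like (the attribute values and surrounding elements here are illustrative and will vary between projects):

<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- other project properties omitted -->
    <MpFrameworkVersion>v7.0</MpFrameworkVersion>  <!-- use v6.1 for a SCOM 2007 project -->
  </PropertyGroup>
  <!-- ItemGroups containing MP references, fragment files etc. follow -->
</Project>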

Management Pack Fragments

Next, you’ll need to update the SchemaVersion attribute of the ManagementPackFragment element at the start of every .mpx file in your project.  The good news is that you can just use Visual Studio’s find and replace option to replace this in all files in the solution simultaneously!

  1. Open the project/solution in Visual Studio.
  2. Open one of the .mpx files, and hit Ctrl+H to open the “Find and Replace” window.
  3. To go from 2007 to 2012, in the “Find what” box enter <ManagementPackFragment SchemaVersion="1.0" and in “Replace with” enter <ManagementPackFragment SchemaVersion="2.0"
  4. Otherwise, do the reverse: in the “Find what” box enter <ManagementPackFragment SchemaVersion="2.0" and in “Replace with” enter <ManagementPackFragment SchemaVersion="1.0"
  5. Make sure “Look in” is set to Current Project.
  6. You can now either click “Find Next” and “Replace” repeatedly if you aren’t comfortable letting VS just replace everything, or hit “Replace All”.
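The attribute you're changing sits on the opening element of every fragment file; before and after, it looks like this (the xmlns declaration may differ in your files):

<!-- SCOM 2007 fragment -->
<ManagementPackFragment SchemaVersion="1.0" xmlns:xsd="http://www.w3.org/2001/XMLSchema">

<!-- SCOM 2012 fragment -->
<ManagementPackFragment SchemaVersion="2.0" xmlns:xsd="http://www.w3.org/2001/XMLSchema">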

Save all files in your project, and attempt a build.  If you get any build errors relating to Management pack references or Schema Versions, just look for the reference/file that you missed and update the values.

Needless to say, you should always think carefully before you do this, and thoroughly test the MP to ensure that there aren’t any side effects from now using earlier/later versions of the default SCOM management packs.  Going from 2007 -> 2012 is safer as 2012 is backwards compatible and will support your previous module configurations.

Hope that helps!


Deleting a SCOM MP which the Microsoft.SystemCenter.SecureReferenceOverride MP depends upon

Posted by Matthew on December 14, 2012

If you’ve ever imported a management pack which contained a Run As profile into SCOM, you will know the pain that awaits you if you ever need to delete it (most commonly when the latest version of the MP doesn’t support an upgrade via import).

The most discussed option I’ve seen for dealing with this is to:

  1. Delete the offending Run As Account(s) from the Run as Profile.
  2. Export the Microsoft.SystemCenter.SecureReferenceOverride MP
  3. Remove the reference that gets left behind when you delete a Run As Profile configuration from the raw xml.
  4. Increment the version number (again in the xml)
  5. Reimport it.

However, there is another way that doesn’t rely on you having to export/import anything or touch any XML: just a bit of PowerShell!  The below is for 2012, but the same principle applies to 2007; just use the appropriate cmdlets/methods.

  1. Open a PowerShell session with the Operations Manager module/snap-in loaded.
  2. Type: $MP = Get-SCOMManagementPack -Name Microsoft.SystemCenter.SecureReferenceOverride
  3. Now we can view the referenced management packs by typing $MP.References
  4. From the list of items in the Key column, note down the alias of your MP you wish to delete.  If you are having trouble finding it, the Value column will list the full ID of the MP.
  5. Now that we know the MP alias, we can remove it from the Secure Reference MP by typing $MP.References.Remove("yourMPAliasGoesHere")
  6. Now we can verify the MP is valid by entering $MP.Verify()  to ensure there are no orphaned overrides, etc.
  7. Finally, we can save our changes by typing: $MP.AcceptChanges()
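Put together, the whole sequence from the steps above looks like this (with a hypothetical alias of "MyApp" standing in for the alias you noted down in step 4):

# Load the MP that holds the Run As profile references
$MP = Get-SCOMManagementPack -Name Microsoft.SystemCenter.SecureReferenceOverride

# List the referenced MPs; note the alias (Key) of the MP you want to delete
$MP.References

# Remove the reference, verify the MP is still valid, then commit the change
$MP.References.Remove("MyApp")
$MP.Verify()
$MP.AcceptChanges()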

powershell_references

If you don’t follow step 7, you won’t actually commit your changes to the MP.  Once this is done, give the SCOM management group a chance to catch up (you might have to wait a minute and then refresh your console).  When you now check the Dependencies tab of the MP you want to delete, you’ll see that the SecureReferenceOverride MP is no longer listed and you’ll be able to remove your MP.

Dependencies

 

Hope that helps!


SCOM : Revisiting the DayTimeExpression option of System.ExpressionFilter

Posted by Matthew on August 17, 2012

You may remember a few weeks ago I did an article on the flexibility and power of the System.ExpressionFilter. In that post I commented that I wouldn’t go into any depth on the DayTimeExpression expression type, as you can accomplish most day/time comparison tasks with a System.ScheduleFilter instead.  Well, no sooner had I written that than I found myself authoring a workflow where that was not the case.

The customer was looking for a three-state unit monitor with some complex health behavior:

  1. If the result of a scheduled job is Success, switch to a Healthy state.
  2. If the result of the job is Failure, switch to a Critical state.
  3. If, 15 minutes after the job has started, it has neither succeeded nor failed, switch to a Warning state.
    1. After 30 minutes, if it has still not succeeded, go into a Critical state.

Normally this kind of complex logic would be implemented as part of a custom script; however, in this case we were using a built-in module to perform the query of our job status, so it seemed very inefficient to then have to go into a script just to calculate the time differential (especially when, due to the return code, the time calculation may be irrelevant).

Why couldn’t we use a System.ScheduleFilter?

The normal approach when doing time comparisons is to use a System.ScheduleFilter to block the workflow from posting items to the appropriate health states in a MonitorType.  However, here we had a combination approach where we needed to use boolean AND and OR operators, which is something that the ScheduleFilter cannot provide, and the SCOM workflow engine only allows a single workflow branch to lead to a given Health State.

Our module didn’t return the current time as an additional property, so in order to get the current date I decided to just use the timestamp found on every Dataitem in a SCOM workflow.  This required a little bit of XPath knowledge however, as nearly all the XPath Query examples I’ve seen documented with SCOM are for accessing the child properties of a DataItem, not its own attributes.

So without further ado here are the Expressions I eventually came up with..

Healthy Expression

    <RegExExpression>
      <ValueExpression>
        <XPathQuery Type="String">//*[local-name()="StdOut"]</XPathQuery>
      </ValueExpression>
      <Operator>MatchesRegularExpression</Operator>
      <Pattern>^(.*SU)$</Pattern>
    </RegExExpression>

Nothing groundbreaking here: if at any time the result code matches the regular expression pattern, then we know our job succeeded (those of you who have done cross-platform monitoring with SCOM may recognize the XPath query I’m using, but that’s not really the focus of this article).

Warning Filter

    <And>
      <Expression>
        <RegExExpression>
          <ValueExpression>
            <XPathQuery Type="String">//*[local-name()="StdOut"]</XPathQuery>
          </ValueExpression>
          <Operator>DoesNotMatchRegularExpression</Operator>
          <Pattern>^(.*SU|.*FA|[\s]*[\s]*)$</Pattern>
        </RegExExpression>
      </Expression>
      <Expression>
        <DayTimeExpression>
          <ValueExpression>
            <XPathQuery Type="DateTime">./@time</XPathQuery>
          </ValueExpression>
          <StartTime>79200</StartTime>
          <EndTime>80100</EndTime>
          <Days>62</Days>
          <InRange>true</InRange>
        </DayTimeExpression>
      </Expression>
    </And>

The first part of our Warning expression (the RegExExpression) is just checking that the job status code is not one which instantly pushes us into another health state (remember, a return code of Failure is an automatic Critical health state, regardless of time).  The DayTimeExpression is the part we are interested in here.  We’ve specified that we want to do a DayTime comparison, and we are going to compare the “./@time” XPath query against a time range.  I’ll come back to the XPath in a second; first, let me explain the rest of the parameters.

  • StartTime is the start of the time range, specified in Seconds from Midnight.
  • EndTime is the end of the time range, specified in Seconds from Midnight.
  • Days is the usual Days of the week mask used by SCOM.  To specify which days are in the range, simply add up the corresponding values from the table below (so Mon through Friday is 62).
    • Sunday = 1
    • Monday = 2
    • Tuesday = 4
    • Wednesday = 8
    • Thursday = 16
    • Friday = 32
    • Saturday = 64
  • InRange is a boolean that specifies whether the expression evaluates to TRUE when the input time is inside (true) or outside (false) the specified time range.
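If you'd rather not work out the seconds-from-midnight values by hand, PowerShell will do the conversion for you.  The values used in this post correspond to a warning window of 22:00-22:15, which suggests a job that kicks off at 21:45:

(New-TimeSpan -Hours 22).TotalSeconds              # 79200 = 22:00 (StartTime above)
(New-TimeSpan -Hours 22 -Minutes 15).TotalSeconds  # 80100 = 22:15 (EndTime)
(New-TimeSpan -Hours 21 -Minutes 45).TotalSeconds  # 78300 = 21:45 (StartTime of the Critical filter below)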

So what I’ve done here is specify that the warning filter will match only for the 15-30 minute period after the job has started (remember, this is a scheduled job, so it starts at the same time each day), and only if the status code is not SU (Healthy) or FA (Critical).

The XPath query of “./@time” is a little weird, and is down to the implementation of XPath in the ExpressionFilter module.  Essentially, the query is already scoped to look at the elements within a workflow data item, so we have to explicitly go back up the tree and say we want the time attribute of the DataItem itself.  The value you use to perform your date/time calculation must be in the DateTime format (e.g. 2012-08-17T14:23:16.291Z) for this to succeed.  Thankfully, the time value on DataItems always is.
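For reference, every data item in a workflow carries this timestamp as an attribute on its root element.  A trimmed DataItem header looks something like this (the type value and GUID are illustrative):

<DataItem type="System.PropertyBagData" time="2012-08-17T14:23:16.291Z" sourceHealthServiceId="00000000-0000-0000-0000-000000000000">
  <!-- the child elements your other XPath queries read live here -->
</DataItem>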

Critical Filter

    <Or>
      <Expression>
        <RegExExpression>
          <ValueExpression>
            <XPathQuery Type="String">//*[local-name()="StdOut"]</XPathQuery>
          </ValueExpression>
          <Operator>MatchesRegularExpression</Operator>
          <Pattern>^(.*FA|[\s]*[\s]*)$</Pattern>
        </RegExExpression>
      </Expression>
      <Expression>
        <And>
          <Expression>
            <RegExExpression>
              <ValueExpression>
                <XPathQuery Type="String">//*[local-name()="StdOut"]</XPathQuery>
              </ValueExpression>
              <Operator>DoesNotMatchRegularExpression</Operator>
              <Pattern>^(.*SU|[\s]*[\s]*)$</Pattern>
            </RegExExpression>
          </Expression>
          <Expression>
            <DayTimeExpression>
              <ValueExpression>
                <XPathQuery Type="DateTime">./@time</XPathQuery>
              </ValueExpression>
              <StartTime>78300</StartTime>
              <EndTime>80100</EndTime>
              <Days>62</Days>
              <InRange>false</InRange>
            </DayTimeExpression>
          </Expression>
        </And>
      </Expression>
    </Or>

And finally, we have the Critical health state expression filter.  This uses an OR condition to short-circuit and allow us to instantly go to a Critical health state if the job result is a failure, without having to check the time.  Otherwise, we carry on and evaluate the AND expression to make sure that the job result isn’t a success code and that we are now outside our 30 minute window since the job started.  Note that the InRange parameter of the DayTimeExpression is now false, so the expression will only evaluate to true if the current time is outside our time window of 0-30 minutes since the job started.

And there we have it!  Unfortunately the SCOM 2007 Authoring Console doesn’t let you use the wizard when working with DayTime/Exists expressions, so you’ll need to hit the “Edit” button and get your hands dirty with the XML yourself.  My suggestion would be to create your logic using the wizard, with a placeholder line in place where you want your DayTimeExpression, and then to edit it by hand to include that line afterwards.  Just don’t try to open your ExpressionFilter in the wizard using the “Configure” button afterwards, as it will break your expression.  You’ll have to keep using the Edit button and editing it by hand.

As always, I hope that helped someone out, and feel free to post a comment if I haven’t made anything clear or you have a practical example you’d like to work through.


The SCOM Unsung Hero – Using the System.ExpressionFilter Module

Posted by Matthew on July 3, 2012

I’ve decided to write a blog post as a tribute to the unsung hero of Operations Manager: the one module that gets used in virtually every workflow but is rarely the focus of attention.  Without this module, cookdown would be mostly impossible, and whether you are creating your own modules or using the wizards in the SCOM console / Authoring Console / Visio extension, it’s always there to assist you.  I’m talking, of course, about the System.ExpressionFilter.

What is it?

The System.ExpressionFilter is a condition detection module and sibling to the System.LogicalSet.ExpressionFilter.  Its function is to examine items in the Operations Manager workflow and either pass them on or remove (drop) them from the workflow.  If no items match the filter at all, the workflow terminates.

It only has a single configuration parameter, but it’s a very powerful one, as it accepts the ExpressionType configuration.  In reality, most of this article will be about the syntax of ExpressionType.

It’s also a very lightweight module, and should be used whenever you need to do any kind of determination or filtering.  Whenever you are using a service monitor, event log reader or SNMP probe, the parameters you are filling in are nearly all being sent to this module, not the data source!

When should you use it?

  • 90% of the time, if you want to implement cookdown for your workflow, you’ll be using this module.
  • You want to add further filtering onto an existing rule in an unsealed management pack.
  • You want to perform filtering of any kind of data.
  • You are implementing a MonitorType (not the same thing as a monitor)

Configuration

The System.ExpressionFilter only takes a single parameter, of type ExpressionType.  This is an inbuilt data type in SCOM that allows you to specify some kind of evaluation criteria that operations manager will run on every item sent to the module.  It should be noted that each item will be evaluated individually (if you need to do them as a linked set, see the System.LogicalSet.ExpressionFilter).

Expression filters are very complex types.  They support nested expressions using the And and Or group constructs, and you also have access to Not.  Below I’ll give you a sample of the type you are going to use 75% of the time…

 SimpleExpression – Compare output of PropertyBagScript to value

<Expression>
       <SimpleExpression>
              <ValueExpression>
                     <XPathQuery Type="String">Property[@Name='Status']</XPathQuery>
              </ValueExpression>
              <Operator>Equal</Operator>
              <ValueExpression>
                     <Value Type="String">Healthy</Value>
              </ValueExpression>
       </SimpleExpression>
</Expression>

This is the most common filter you’ll use in SCOM. Its purpose is to compare the output of a module (in this case, a property bag value called “Status”) with a value. This can either be a static value, or one passed into the workflow as part of a $Config/$ parameter.

We start off with an <Expression> tag, which is always the opening and closing tag for each evaluation.  Inside it, we’ve stated that we want to use a SimpleExpression, which just does a straight comparison between two items (the first ValueExpression and the second ValueExpression).  The valid operators for use with a SimpleExpression are:

  • Equal
  • NotEqual
  • Greater
  • Less
  • GreaterEqual
  • LessEqual

Note that when specifying the operators they are case-sensitive, so they need to be entered exactly as above.  Finally our ValueExpressions (the left and right side of the comparison) are either of type Value or XPathQuery.  You use Value when using either static values or $Config/$ or $Target/$ parameters, and XPathQuery when you want to examine the output of the previous module.
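For example, if your module type accepted a hypothetical $Config/ExpectedStatus$ parameter, the right-hand side of the comparison above would simply become:

<ValueExpression>
       <Value Type="String">$Config/ExpectedStatus$</Value>
</ValueExpression>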

Finally, you’ll note that both Value and XPathQuery have a Type attribute; SCOM will attempt to cast the data into that type before performing the comparison.  So if you are comparing two numbers, make sure you have the type set to Integer, otherwise SCOM will compare them as text, character by character, which probably isn’t your intent.  The available types are:

  • Boolean
  • Integer
  • UnsignedInteger
  • Double
  • Duration
  • DateTime
  • String

The SCOM 2007 Authoring Console will by default always set the type to “String”, so keep an eye on that.  Also, if the type conversion fails, SCOM will throw an error into the event log and the item will not be processed.

 Logical Operators – And, Or and Not

You can group expressions, and reverse their result, using the <And>, <Or> and <Not> expression elements.  These are implemented as wrappers around your <Expression></Expression> tags that are themselves expressions!  Sounds complicated, but with an example it becomes much clearer:

<Expression>
     <And>
          <Expression>
               <!-- First expression here -->
          </Expression>
          <Expression>
               <!-- Second expression here -->
          </Expression>
     </And>
</Expression>

So above we have two expressions that must both evaluate to true in order for the whole outer expression to be true.  The construct is the same for <Or> and <Not>, and you can even nest groups within groups for truly powerful expressions!  <Not> may only contain a single <Expression> (which, of course, could be a group!), but <And> and <Or> can contain two or more expressions if you need to group on multiple items.

One important thing to note is that groups support short-circuiting.  What this means is that if SCOM examines one expression in an And/Or group and can deduce from it that the whole group will be true or false (perhaps we are using And and the first item is false), then it won’t bother to evaluate the remaining expressions, saving time and improving performance.  So nest away!
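As an illustration of nesting, here’s the skeleton of “A and (B or C)”:

<Expression>
     <And>
          <Expression>
               <!-- Expression A here -->
          </Expression>
          <Expression>
               <Or>
                    <Expression>
                         <!-- Expression B here -->
                    </Expression>
                    <Expression>
                         <!-- Expression C here -->
                    </Expression>
               </Or>
          </Expression>
     </And>
</Expression>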

Exists – Does my data item contain a property?

Much like a type conversion failure, if an XPathQuery value (as part of a SimpleExpression) doesn’t resolve to anything, say because that data item doesn’t contain an expected property, then the expression will fail and that item will be dropped.  So if you are dealing with a property that doesn’t always show up (regardless of whether it has a value; SCOM can deal with empty/null properties), you’d be wise to use the <Exists> expression.  It’s also useful if you don’t care about the value of a property, merely whether it exists or not.

<Expression>
       <Exists>
              <ValueExpression>
                     <XPathQuery Type="Integer">Params/Param[1]</XPathQuery>
              </ValueExpression>
       </Exists>
</Expression>

Here we are checking to see if an event log entry has at least one parameter.  You can also use <Value> instead of XPathQuery if you want to check whether a $Config/$ parameter exists, so you know if an optional parameter on your module has been specified or not.
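A sketch of that variant, using a hypothetical optional $Config/Threshold$ parameter:

<Expression>
       <Exists>
              <ValueExpression>
                     <Value Type="Integer">$Config/Threshold$</Value>
              </ValueExpression>
       </Exists>
</Expression>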

If you need to check the value of a property that may or may not exist, you’ll want to take advantage of the short-circuiting of the <And> group by combining an Exists check with your value check.  Make sure the Exists expression comes first in the group; that way, if the property doesn’t exist, SCOM won’t bother trying to read it (which, as stated above, would cause the expression to fail).  I’ve included an example of this below!

<Expression>
     <And>
        <Expression>
               <Exists>
                      <ValueExpression>
                             <XPathQuery Type="Integer">Params/Param[1]</XPathQuery>
                      </ValueExpression>
               </Exists>
        </Expression>
        <Expression>
               <SimpleExpression>
                      <ValueExpression>
                             <XPathQuery Type="Integer">Params/Param[1]</XPathQuery>
                      </ValueExpression>
                      <Operator>Less</Operator>
                      <ValueExpression>
                              <Value Type="Integer">5</Value> <!-- literal threshold value; illustrative -->
                      </ValueExpression>
               </SimpleExpression>
        </Expression>
     </And>
</Expression>

Regular Expressions

If you want to do powerful (or simple!) regular expression comparisons, then the ExpressionFilter has got you covered.  I’m not going to go into a huge amount of depth on this one, because by now you should be getting an idea of how this works.  I’ll just show you the syntax and then list the regex pattern styles you can use.

<Expression>
       <RegExExpression>
              <ValueExpression>
                     <XPathQuery Type="String">EventPublisher</XPathQuery>
              </ValueExpression>
              <Operator>ContainsSubstring</Operator>
              <Pattern>Microsoft</Pattern>
       </RegExExpression>
</Expression>

ValueExpression is the same as with a SimpleExpression, so you can compare against incoming data items on the workflow or input parameters.  Operator allows you to specify what type of matching you’d like to perform:

  • MatchesWildcard – Simple wildcard matching using the below wildcards
    • # – Matches 0-9
    • ?  – Any single character
    • * – any sequence of characters
    • \ – escapes any of the above
  • ContainsSubstring – Standard wildcard containment, if this exists anywhere in the string (implemented as ‘^.*pattern.*$’)
  • MatchesRegularExpression – Full regular expression via .Net (Note this is not the same as Group Calculation modules, which use Perl).
  • DoesNotMatchWildcard – Inverse of MatchesWildcard.
  • DoesNotContainSubstring – Inverse of ContainsSubstring
  • DoesNotMatchRegularExpression – Inverse of MatchesRegularExpression

Finally, Pattern allows you to specify your regular expression.  Note that you don’t need to wrap it in quotes.  Obviously, you can nest these in groups if you need to perform multiple regular expression checks.

Oh, and if you have any questions on Regular expressions, ask Pete Zerger.  He loves regular expressions (you can tell him I sent you ;))!

DayTimeExpression

Finally we have the DayTimeExpression, which is used to determine if a DateTime value is inside or outside a given range.  This one is less used, as we have another built-in module (the System.ScheduleFilter) which we can use for this kind of comparison; it is a bit more powerful and can use the current time of the workflow, rather than having to get that value from your data item.  The DayTimeExpression only allows for day (Sunday-Saturday) and time comparisons, and there’s no ability to specify exceptions or different windows for each day either, something the ScheduleFilter does implement.

I’m not going to go into detail on it here, but you can find documentation for it on MSDN via the link in the Links section below.

Example Scenarios

Essentially, any time you want to filter or compare a value you can use this module!  Normally you’ll be using it to either manage output from a datasource or further scope a rule so that it only alerts when it meets your extra criteria.

The other time you’ll commonly use it is when implementing your own MonitorType.  You’ll add one System.ExpressionFilter for each health state the monitortype provides, and then set the filters up so that they use mutually exclusive values to determine what health state your system is in.  I won’t drag this post out any further with examples however, as there are plenty on the web of this already and they are always quite specific to the scenario.

Links

MSDN documentation – http://msdn.microsoft.com/en-us/library/ee692979.aspx

Hope this proved helpful, and as always, if you have any specific questions, feel free to post a comment with what you need and I’ll see what I can do!

(Sorry Pete!)


Query a database without scripting as part of SCOM monitoring – The System.OLEDBProbe module

Posted by Matthew on June 23, 2012

A fairly common monitoring scenario is the need to query a database somewhere (normally SQL, but as long as you have a relevant OLEDB driver on your agents, whatever you need!) and, based on the results of the query, trigger some kind of workflow. I’ve seen it used with monitors, alert and collection rules, and even discoveries!

Obviously you can do this via script, but perhaps you have a simple query and no need to do any post-query processing (often this can be done as part of your query anyway). In these cases, you can use a built-in module called the System.OleDbProbe to query the DB and do the lifting for you!

What is it?

The System.OleDbProbe module is a built-in probe module that will use an OLEDB provider/driver on the system to make a database query from the hosting agent. The database, query and other settings are defined via probe configuration and do not need to be hard-coded into the MP (though obviously the query usually is). The query can be modified using context parameter replacement prior to execution, so you can dynamically insert information into it if need be. It supports integrated as well as manually specified credentials, usually via Run As profiles.

It also has the nifty ability to retrieve the database settings from specified registry keys, which can avoid the need to go out and discover those pieces of information. This makes it quite suitable for attaching onto existing classes from other management packs.

When should you use it?

  • You know in advance which columns you need to access.
  • You know how to implement your own module.
  • You have a suitable OLEDB provider on your agent (common Windows ones are included by default).
  • You don’t need to perform complex processing on each returned row.

Configuration

Required

  • ConnectionString – The connection string you wish to use to connect to the database.  On Windows Server 2003 or later, this is encrypted by the module.  If you are using integrated security, you do not need to specify credentials as long as you are using a Run As profile with this module (but make sure you flag the connection as using integrated security!).
  • Query – The query you wish to run against the database. Supports context parameter replacement, so you can use $Config/$ variables etc in your query.

Optional

  • GetValue – (true/false) Whether the results of the query should be returned or not (set to false if you just want to connect to the DB, and you don’t care about the results of the query).
  • IncludeOriginalItem – (true/false) Determines if the resulting data item(s) will contain the item that originally triggered this module.  Note that the data is returned as CData, so you won’t be able to perform XPath queries directly against it.
  • OneRowPerItem – (true/false) Should all resulting data be returned in a single data item, or 1 data item returned for each row in the query results?  Normally setting this to true is more useful, as you’ll often want a condition detection to process each row individually, and you won’t know the order (or number) of resulting rows.
  • DatabaseNameRegLocation – Registry key where we can find the database name.  Must be under the HKLM hive.
  • DatabaseServerNameRegLocation – Registry key where we can find the database server name (and instance, if required).  Must also be under the HKLM hive.

SCOM 2007 R2 and above only

  • QueryTimeout – (Integer) Optional parameter that allows you to set a query timeout.
  • GetFetchTime – (true/false) Optional parameter that allows you to specify that the resulting data item(s) should contain the fetch time for the query.

Personally, I tend to omit the R2-only parameters, as they usually do not add much to the workflow and will restrict which environments can use your MP.  Obviously, if you are making this MP for in-house use, you are free to implement against whatever version of SCOM you have!

An important parameter is OneRowPerItem.  If this is set to false, the data item you get back will look like the below snippet (I’ve omitted the other elements to save space):


<RowLength>2</RowLength> <!-- number of rows returned -->
    <Columns>
    <!-- Data for first row returned -->
       <Column>Data in first column</Column>
       <Column>Data in Second column.</Column>
    </Columns>
    <Columns>
    <!-- Data for Second row returned -->
       <Column>Data in first column</Column>
       <Column>Data in Second column.</Column>
    </Columns>

This can make processing the results in further modules a pain, since your XPath query is going to have to specify exactly which row and column you want to access. If you instead set OneRowPerItem to true, then you’ll get multiple returned items and can filter them using an expression filter with a simple syntax such as $Data/Columns/Column[1]$. You may also wish to filter on the RowLength property to establish whether any rows were returned. Remember that the module will return a data item if it succeeded in connecting but doesn’t have rights to query, so check that data was returned before you try to do something with it!
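As a sketch, with OneRowPerItem set to true, each row’s data item could then be checked with an expression filter along these lines (the column index and the "Failed" value are illustrative):

<Expression>
       <SimpleExpression>
              <ValueExpression>
                     <XPathQuery Type="String">Columns/Column[1]</XPathQuery>
              </ValueExpression>
              <Operator>Equal</Operator>
              <ValueExpression>
                     <Value Type="String">Failed</Value>
              </ValueExpression>
       </SimpleExpression>
</Expression>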

Example scenarios

Normally, if I’m going to use an OleDbProbe to access a database repeatedly, I’ll create my own probe module that sets up the settings I’m going to need and is already set to use my MP’s Run As profile for DB access.  That way I don’t have to keep specifying them over and over again.  Below is a sample where I’ve done this, with everything except the query (and OneRowPerItem) pre-configured for a SQL database probe.  Now all my monitors and rules that make use of this automatically know where to locate the DB and what other query options (and credentials) to use.

<ProbeActionModuleType ID="DBProbe.Library.Probe.DatabaseOledbQuery" Accessibility="Public" RunAs="DbProbe.Library.SecureReference.Database" Batching="false" PassThrough="false">
    <Configuration>
        <xsd:element minOccurs="1" name="Query" type="xsd:string" />
        <xsd:element minOccurs="1" name="OneRowPerItem" type="xsd:boolean" />
    </Configuration>
    <ModuleImplementation Isolation="Any">
        <Composite>
            <MemberModules>
                <ProbeAction ID="PassThru" TypeID="System!System.PassThroughProbe" />
                <ProbeAction ID="OledbProbe" TypeID="System!System.OleDbProbe">
                    <ConnectionString>Provider=SQLOLEDB;Integrated Security=SSPI</ConnectionString>
                    <Query>$Config/Query$</Query>
                    <GetValue>true</GetValue>
                    <IncludeOriginalItem>false</IncludeOriginalItem>
                    <OneRowPerItem>$Config/OneRowPerItem$</OneRowPerItem>
                    <DatabaseNameRegLocation>SOFTWARE\MyRegKey\Database\DatabaseName</DatabaseNameRegLocation>
                    <DatabaseServerNameRegLocation>SOFTWARE\MyRegKey\Database\DatabaseServerName</DatabaseServerNameRegLocation>
                </ProbeAction>
            </MemberModules>
            <Composition>
                <Node ID="OledbProbe">
                    <Node ID="PassThru" />
                </Node>
            </Composition>
        </Composite>
    </ModuleImplementation>
    <OutputType>System!System.OleDbData</OutputType>
    <TriggerOnly>true</TriggerOnly>
</ProbeActionModuleType>

Here I’ve done the same thing, only without using registry keys to specify the location of my DB.  Normally I’d pass the DB details from my targeted class as I’ll have some property that has been discovered defining where the database is.

<ProbeActionModuleType ID="DBProbe.Library.Probe.DatabaseOledbQuery" Accessibility="Public"  RunAs="DbProbe.Library.SecureReference.Database" Batching="false" PassThrough="false">
    <Configuration>
        <xsd:element minOccurs="1" name="DatabaseServer" type="xsd:string" />
        <xsd:element minOccurs="1" name="DatabaseName" type="xsd:string" />
        <xsd:element minOccurs="1" name="Query" type="xsd:string" />
        <xsd:element minOccurs="1" name="OneRowPerItem" type="xsd:boolean" />
    </Configuration>
    <ModuleImplementation Isolation="Any">
        <Composite>
            <MemberModules>
            <ProbeAction ID="PassThru" TypeID="System!System.PassThroughProbe" />
            <ProbeAction ID="OledbProbe" TypeID="System!System.OleDbProbe">
                <ConnectionString>Provider=SQLOLEDB;Server=$Config/DatabaseServer$;Database=$Config/DatabaseName$;Integrated Security=SSPI</ConnectionString>
                <Query>$Config/Query$</Query>
                <GetValue>true</GetValue>
                <IncludeOriginalItem>false</IncludeOriginalItem>
                <OneRowPerItem>$Config/OneRowPerItem$</OneRowPerItem>
            </ProbeAction>
            </MemberModules>
            <Composition>
                <Node ID="OledbProbe">
                    <Node ID="PassThru" />
                </Node>
            </Composition>
        </Composite>
    </ModuleImplementation>
    <OutputType>System!System.OleDbData</OutputType>
    <TriggerOnly>true</TriggerOnly>
</ProbeActionModuleType>

Simple/Specified Authentication

If you don’t want (or aren’t able) to use integrated security, you can pass credentials using simple authentication and a Run As profile. DO NOT hard-code the credentials: they would be stored in plain text and readable by anyone who can view the MP. The Run As profile credentials are encrypted, and the connection string is encrypted across the wire, but the MP isn’t!

The syntax for this (depending on your OLEDB provider; here it’s SQL) is shown below.  Obviously, replace the placeholder values with your own.

Provider=SQLOLEDB;Server=ServerName;Database=DatabaseName;User Id=$RunAs[Name="RunAsIdentifierGoesHere"]/UserName$;Password=$RunAs[Name="RunAsIdentifierGoesHere"]/Password$

Scenario 1 – Monitoring

A fairly simple one, this: you want to monitor a database for a certain condition.  Perhaps you are getting the result of a stored procedure, checking the number of rows in a table (using the database’s query language) or checking rows for a certain value (error logs, perhaps?).  Once queried, you pass the data items on to a System.ExpressionFilter module to filter for your desired criteria and alert as appropriate.

Scenario 2 – Collection

Another fairly common one: do the same thing as above as part of an event collection or performance collection rule.  This could even ignore the data entirely and just check how long the query took to run, via the InitializationTime, OpenTime, ExecutionTime and FetchTime (if you’re on R2 or 2012) properties of the output data.  Following your System.OleDbProbe module, you’ll usually use one of the mapper condition detection modules to generate event or performance data (these are quite nicely documented around the web and on MSDN, normally with property bags, but the principle is the same).
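As a sketch of that mapping step, a System.Performance.DataGenericMapper following the probe might look something like the below; the Perf! alias, the object/counter names and the choice of Column[1] as the value are all assumptions to adjust to your own MP and data:

<ConditionDetection ID="PerfMapper" TypeID="Perf!System.Performance.DataGenericMapper">
       <ObjectName>MyApplication</ObjectName>       <!-- illustrative object name -->
       <CounterName>Query Result</CounterName>      <!-- illustrative counter name -->
       <InstanceName />
       <Value>$Data/Columns/Column[1]$</Value>      <!-- first column of each returned row -->
</ConditionDetection>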

Scenario 3 – Discovery

Yep, you can even do discovery with this.  Your table might contain pointers to apps in a grid or distributed system, groups you want to discover and monitor, or subprocesses you want to do further monitoring on.  This is the most complex scenario, and as a tip, only attempt it if you are looking to discover a single object per discovery.  Otherwise, use a script to process each result item in turn using ADO or some other API.

Links

MSDN Documentation – http://msdn.microsoft.com/en-us/library/ff472339.aspx

Sample of the output this module returns – http://msdn.microsoft.com/en-us/library/ee533760.aspx

Hopefully that’s given you some food for thought, and as always, if you have a specific example you’d like me to walk you through, just post a comment and I’ll see what I can do!


How I added hundreds of Service Discoveries and Monitors into a SCOM MP in 20 minutes

Posted by Matthew on June 23, 2012

Recently I was presented by a customer with a huge list of Windows services that needed to be discovered and monitored in Operations Manager as part of an engagement. Many of these services were in-house/custom services, or ones for which no management pack currently exists.

The normal approach would of course be to put together grouped classes and discoveries that make sense for each application; however, in this case time and project budget were against us, and moreover the customer simply didn’t have the information (or need) to do anything other than simple up/down monitoring on each service.

So armed with a CSV file, the Visual Studio MP authoring extensions and a short amount of time, I set out to complete in a day what would normally be a huge amount of work.

The Solution – Snippets and Template groups

The Visual Studio MP authoring extensions have two features that, used in combination, allow you to take a template MP entity that you define (called a snippet) and then, by replacing placeholders with values from a table, automatically generate concrete versions of your template when the MP is built (template groups). The key here is that you can import the values into your template group from a CSV if you so wish!

This technique works for both 2007 and 2012 MPs, so you can use it for building any kind of management pack.

Before we get started however, here are two disclaimers:

This post was written using a pre-release copy of the Visual Studio MP authoring extensions, and the features shown below are currently pre-release software.  Everything shown below could be subject to change at release.

This is not necessarily the best way to discover and monitor services. A more ideal approach would be to evaluate the services and cluster discoveries based on more than a service installation status. Consolidated discoveries would most likely be more efficient and services should only be monitored if that monitoring is useful. Having said that, anything can be created using the techniques shown here and even using this method to implement 10 items will be much faster than doing it by hand.

Steps After the jump…



Using the System.LogicalSet.ExpressionFilter in SCOM Management Packs

Posted by Matthew on June 14, 2012

What is it?

The System.LogicalSet.ExpressionFilter is a condition detection module that functions in a similar fashion to the System.ExpressionFilter module, except it allows you to evaluate multiple data items (usually property bags) as a group and then act based on whether any (or all) of the objects in the group match your criteria.

What is it used for?

You can use this module wherever you have a group of data items that you need to act upon only if all (or any) of them meet certain criteria.  Note that you can’t specify how many items must match, only that either all of them should match or at least one.  If you need that kind of finer-grained filtering, you need to use a Consolidator.

Some common examples include processing health state information for multiple performance counters, viewing the execution history of jobs or scheduled tasks, or checking the health of multiple components at once.  You can also use this to replace scripts that often check lots of criteria and produce a single healthy/unhealthy status code based on the evaluation of all of the criteria, which may open further opportunities to make use of cookdown between your workflows.  I’ll give an example of this below in the example Scenarios.

When using this in monitoring workflows, you will most likely be using the System.LogicalSet.ExpressionFilter for the Healthy health state (all items don’t match your unhealthy criteria) and then regular ExpressionFilters for your unhealthy state(s).

Configuration

Essentially, the configuration for this module is exactly the same as an ExpressionFilter, except you have two new Attributes.

EmptySet allows you to control what happens if no data items are provided to the module in the workflow:

  • PassThrough – carries the workflow on to the next module.
  • Block – terminates the workflow at this module.

SetEvaluation allows you to specify when the group of data items should be passed on to the next module:

  • Any – if at least one item matches the expression filter, pass on all items.
  • All – only pass on items if ALL items match the expression filter.

It’s also worth noting that, since all data items passed along the workflow to output will be returned as alert context, this module can be very helpful when correlating items (often it’s not the matching log line you need, but the ones before and after it that help you troubleshoot the problem!).  Just set the SetEvaluation attribute to Any.

Example Scenarios

Job Execution History / Previous Events Monitor

In this scenario, we are looking at multiple property bags that each describe a job instance, or perhaps an event log entry, every 10 minutes.  We only want to return a Healthy state if none of the instances/events match the unhealthy criteria.  If one or more do, then we want to flag an unhealthy state.  I’ve configured the below example as if I were receiving a property bag with job data, but you could substitute that for event codes, or really anything else that may appear in your data items.

Healthy State Module : System.LogicalSet.ExpressionFilter

  • Expression:
<Expression>
   <SimpleExpression>
             <ValueExpression>
               <XPathQuery Type="String">Property[@Name='JobStatus']</XPathQuery>
             </ValueExpression>
             <Operator>Equal</Operator>
             <ValueExpression>
               <Value Type="String">Success</Value>
             </ValueExpression>
   </SimpleExpression>
</Expression>
  • EmptySet: PassThrough
  • SetEvaluation: All

Unhealthy State Module: System.ExpressionFilter

  • Expression:
<Expression>
    <SimpleExpression>
              <ValueExpression>
                <XPathQuery Type="String">Property[@Name='JobStatus']</XPathQuery>
              </ValueExpression>
              <Operator>NotEqual</Operator>
              <ValueExpression>
                <Value Type="String">Success</Value>
              </ValueExpression>
    </SimpleExpression>
 </Expression>
 

Replacing multiple Check logic in Scripts

The idea here is that we may have a script-based datasource that checks multiple aspects of an object to determine if it is healthy.  The script then has some internal logic (usually If checks and And/Or expressions) to sum up all of the checks and provide a health status based on their combined result.  This is all very well and good, but what if we need to create a second monitor/rule/diagnostic that only depends on one (or a selection) of these checks?  We would have to implement another script (most likely reusing 90% of the same code) to provide that data, and then manage edits between the two scripts to keep them in sync.

Instead, what we could do is provide multiple property bags from our script, each one providing the pass/fail status of each check individually.  Then using System.LogicalSet.ExpressionFilters for our Healthy Condition and regular System.ExpressionFilters for our unhealthy condition(s) we can detect if any check failed.  If we want to raise unhealthy conditions only if ALL checks failed, then you just swap the two around, using System.LogicalSet.ExpressionFilters for your unhealthy states.

Multi-Instance Perfmon Counters

Essentially, the idea here is that you want to monitor a performance counter with multiple instances, and you want to be alerted if any one of the instances (or perhaps all of them) crosses a certain threshold.  There is a great example of this already written up on the Operations Manager team blog.

Checking Returned Rows from a Database

This one is pretty simple: using a System.OleDbProbe you can retrieve rows from a database based on a query, and then check the row set to see if any/all rows match your criteria.  Just make sure you configure the OleDbProbe to return a data item for each row, rather than all rows in one item!


Hope that helps someone out.  If you have any specific examples you’d like me to walk through, just post a comment and I’ll see what I can do!


Preview – New Official System Center Operations Manager MP authoring tools

Posted by Matthew on April 30, 2012

Disclaimer: This article is based on a preview of pre-release software.  Features and information may change between the time this article was written and time of release.

At MMS this year, Microsoft revealed their two new management pack authoring tools, which will, upon release, replace the venerable SCOM Authoring Console as Microsoft's official management pack authoring tool. It’s immediately worth noting that the Authoring Console will still be available and supported, but it will not be receiving any updates and therefore will not be able to understand the new SCOM MP schema.

Previously, the Authoring Console served a middle ground in terms of user MP authoring skill. Those who were brand new to MP development often found the tool confusing and the required knowledge level too steep. This was particularly common with IT pros who were looking to create a simple MP to monitor a “standard” Windows application.

However, the Authoring Console was also missing several capabilities that are required for complex management pack authoring scenarios. In his MMS session, Baelson Duque talked about how SCOM’s own management pack for monitoring itself is around 37,000 lines long and authored by multiple people. As a management pack is a single file, this made development of the MP very difficult, and he admitted that many of the bugs in the management pack were introduced due to merge issues and copy-paste errors when duplicating module composition.

So, to quote Brian Wren, “rather than having one tool to rule them all”, Microsoft have decided to develop two different tools to address both ends of the MP authoring skills spectrum. For IT pros who are looking to create simple/common management packs, we now have the Visio Management Pack Extension. For developers who need the power of a full development environment, we have the Visual Studio 2010 Management Pack Extension.

I’m going to put up full writeups on both tools, and documentation is already available on the Technet Wikis, but for now I’m going to briefly discuss both tools and their capabilities. It’s worth noting that both of these tools are V1 and as such there are a couple of limitations with both products that Microsoft are looking to address in future updates.

Visio Management Pack Authoring Extension

  • Requirements: Visio 2010 Premium
  • Intended audience: ITPros with basic to no knowledge of management pack XML
  • Expected Release: CTP to be released within the next few weeks, with RTM to follow within a couple of months.
  • Generated Schema: SCOM 2007 schema version

Features

  • Drag and drop interface
    • All classes, monitors, rules, and the relationships between objects are created by dragging stencils onto the Visio drawing and connecting them together. Templates are included for quickly standing up common app scenarios (such as a service with a registry discovery, event collection etc.)
  • Smart configuration of shape data only asks for relevant params
    • Shapes have intelligent configuration fields that the author fills out using simple terms, with many more complicated settings inferred from those simple choices. Fields are hidden until their value is relevant.
  • No knowledge of discoveries required, other than the discovery condition
    • The inclusion of a class shape automatically sets up a discovery under the hood, with common non-script-based discoveries used. The author is simply asked to provide the discovery condition (such as a registry key path).
  • Automatically creates views
    • When classes, monitors and rules are included on the diagram, a view is automatically created for each object. By specifying the same view path as a piece of configuration data, objects will be included in the same view automatically (for example, perf counters visible in the same view).
  • Creates monitors and rules with cookdown automatically
    • All monitoring objects created are forced to use Microsoft best practices, including full use of cookdown.
  • XML element IDs are generated automatically in a consistent, human-readable notation
    • All automatically generated object IDs are set to a sensible human-readable value, rather than the GUID that the SCOM console uses when creating content.

Visual Studio 2010 MP Extension

  • Requirements: Visual Studio 2010 professional (higher editions and versions supported)
  • Intended Audience: Developers/ ITPros with strong MP authoring skills.
  • Expected Release: RTM within the next few weeks.
  • Generated Schema: SCOM 2007 schema version by default, with SCOM 2012 and Service Manager projects also available.

Features

  • Management Pack Browser
    • The extension includes a graphical way of representing and browsing the contents of your MP in the management pack browser. This view is similar to the object lists you’d see in the Authoring console, and allows you to jump straight to the element definition and perform further operations.
  • Snippets and template groups
    • Template groups allow the creation of Discoveries, Rules, Monitors and many other elements using property windows, object pickers, and modal dialogs.  No more XML knowledge is required than with the 2007 Authoring Console.
    • In order to assist in the creation and completion of repetitive objects, we can now use code snippets to effectively single instance an object definition. Fields are inserted into the XML definition which are then filled out in a tabular format by the author using the snippet, and at build time Visual Studio will create all the listed elements using that snippet, inserting the field values from each item row into the MP XML.  You can even import from a CSV File.
  • Fragments
    • A series of new file types have been included in the VS extension, including MP fragment files. These essentially allow for partial definitions of Management pack XML with out-of-order elements. This allows for multiple authors to easily work on the same MP, and means that elements such as display names and knowledge can be included next to their object, rather than somewhere else in the file!
  • Intellisense
    • Visual Studio continues to provide autocompletion as you type, by reading the MP schema and resolving references within your management pack.
  • Skeleton samples
    • The extension includes skeletons for common MP elements, to save you having to type (and remember!) the same static code over and over.
  • Scripts as resource files
    • Rather than placing scripts directly into XML, you can now attach PS1 files and VBS scripts to a project and have them injected into script data sources! This makes testing and script update/modification much, much easier.
  • Solution and Build Options
    • Include multiple MPs in a single solution
    • You can now provide your solution with a key file, in which case all MPs in the solution will build as signed MPs.
    • If your end solution is multiple management packs (typically a library, discovery and monitoring MPs), you can include these all within a single solution and set MP dependencies. The solution will then be built in the correct order so that you don’t need to keep manually resealing your library MPs.
    • During project or solution builds, not only is the XML verified to ensure it is syntactically correct, but (some) MP best practice rules are also applied to the project, with the results surfaced alongside the XML verification.
    • At build you can import them into a management group and even launch the SCOM console/Web Console!

Ok, that should be enough to whet your appetite. Look out for a write-up of each tool coming shortly!


Designing Operations Manager management pack Discovery models to best support derived classes.

Posted by Matthew on March 6, 2012

Inspired by a previous post and some of the recent activities of friends and colleagues, I thought I’d share some recommendations on an important consideration you should keep in mind when designing the discovery model of your System Center Operations Manager 2007/2012 management pack.  This blog post should help you author your MPs in such a way that they are much friendlier to future expansion/customisation, whether by you or your customers.

Class Inheritance

As you may or may not be aware, in Operations Manager objects (known as classes) can be “extended” in the GUI to support additional custom attributes that may be useful to your organisation.  You may also have noticed that many management packs implement a common object for a technology (such as an IIS website or a SQL database) that is then further specialised into every version you might encounter (IIS 6, IIS 7 etc.).  You can construct a view or group that displays a version-specific object (IIS2008Website), or any object that is based on the common ancestor (IISWebsite), which will then be version agnostic and display all IIS websites regardless of version.

This all happens because of class inheritance.  Every class in Operations Manager has a single base class from which it stems, going all the way up to the common ancestor of all classes, System.Entity.  Every class inherits all the class properties of its parent class, and of its parent's own parents (known as ancestor classes), all the way up the chain to System.Entity.  This is why every object in SCOM has a Display Name property: System.Entity defines it, and every class ultimately derives from it.

Operations Manager automatically considers any object to be an instance of itself, but also an instance of any of its base classes.  So a SQL 2008 instance is itself a SQL instance, which is also a Windows server role.  When you extend a class in the GUI (say, Windows Computer), what you are actually doing is deriving a new custom class, based upon that parent class, with your custom properties added.
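
In MP XML terms, deriving looks something like this minimal sketch (the IDs and the SystemOwner property are hypothetical; Windows! is assumed to be the alias for the Microsoft.Windows.Library reference):

    <!-- A custom class derived from Microsoft.Windows.Computer. It inherits
         every property definition of its parent and ancestors up to
         System.Entity, and adds one custom property of its own. -->
    <ClassType ID="MyMP.OwnedWindowsComputer" Accessibility="Public" Abstract="false"
               Base="Windows!Microsoft.Windows.Computer" Hosted="false" Singleton="false">
      <Property ID="SystemOwner" Type="string" Key="false" />
    </ClassType>

Any instance of MyMP.OwnedWindowsComputer is automatically also an instance of Microsoft.Windows.Computer (and, ultimately, System.Entity), which is exactly what makes the targeting behaviour discussed below possible.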

Discoveries and Targeting

Ok, so with that out of the way, why would you want to extend a class (or, if you are authoring an MP, derive from an existing class)? Reasons may include:

  1. Adding an attribute (Class property) that would benefit your organisation, such as Owner Contact details for a Server.
  2. Providing support for a version of an object that wasn’t previously included by the MP vendor/Author
  3. Speeding up creation time of your own MP by removing the necessity to define common properties over and over again.
  4. Allowing targeting of monitoring workflows, relationships and views at a group of objects, regardless of their specific version or implementation.

That last one is a critical point for MP authors: if I use the Server Role class as the base class when creating my server object, it automatically inherits all the relationships that insert it into the health rollup model for a Computer object.

Right, enough rambling; now to the crux of the matter.  When you derive or extend an existing class, SCOM automatically gives it all the property definitions of its parent classes, but it doesn't get the values of those properties automatically.  If you decide to go and make a SQL 2012 MP, then unless the discoveries for the existing objects have been set up in a certain way, all the inherited properties such as Database Name will be blank, and it will be up to you to implement a discovery for them.

This is because discoveries are usually targeted at the component that hosts the objects they discover (server role discoveries are usually targeted at the computer that runs them, database discoveries at the DB engine that hosts them), and they create the object with all of its properties discovered in one go.  When you extend a class or derive a new one, those discoveries have no idea your new class exists, so they just leave it be.

The better option here (see the caveats and considerations below) is to target one discovery at the component that hosts your class, to establish that the object exists, and a second discovery at the class itself, to discover its properties.  That way, when you or someone else derives your class into a new version, your expertise at finding and populating the original properties is put to work, because the discovery targeting sees the new class definition as an instance of the base class it was designed to populate.
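
In skeleton form, the pattern is two discoveries with different targets. This is only a sketch: the IDs are hypothetical, the data source configurations are elided, and I'm using the stock registry discovery data source from the Windows library as the example module:

    <Monitoring>
      <Discoveries>
        <!-- Discovery 1: targeted at the hosting computer. Decides whether
             the object exists at all and submits it with its key properties. -->
        <Discovery ID="MyMP.MyRole.ClassDiscovery" Enabled="true"
                   Target="Windows!Microsoft.Windows.Computer"
                   ConfirmDelivery="false" Remotable="true" Priority="Normal">
          <Category>Discovery</Category>
          <DiscoveryTypes>
            <DiscoveryClass TypeID="MyMP.MyRole" />
          </DiscoveryTypes>
          <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.FilteredRegistryDiscoveryProvider">
            <!-- existence test configuration elided -->
          </DataSource>
        </Discovery>
        <!-- Discovery 2: targeted at the class itself. Because anything
             derived from MyMP.MyRole is also an instance of MyMP.MyRole,
             this discovery populates the inherited properties of future
             derived or extended classes for free. It can also run on its
             own (usually slower) schedule. -->
        <Discovery ID="MyMP.MyRole.PropertyDiscovery" Enabled="true"
                   Target="MyMP.MyRole"
                   ConfirmDelivery="false" Remotable="true" Priority="Normal">
          <Category>Discovery</Category>
          <DiscoveryTypes>
            <DiscoveryClass TypeID="MyMP.MyRole" />
          </DiscoveryTypes>
          <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.FilteredRegistryDiscoveryProvider">
            <!-- property discovery configuration elided -->
          </DataSource>
        </Discovery>
      </Discoveries>
    </Monitoring>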

Sample discovery model

No doubt some of you have taken issue with my claim, because in your experience deriving or extending a class does automatically populate all the existing properties with values.  More than likely, this is because you've worked with a discovery configuration like the one I've described above without knowing it, such as with the Windows Computer objects.  There is a series of discoveries targeted at Microsoft.Windows.Computer in the core Operations Manager MPs that are responsible for discovering properties such as logical CPU count and whether or not the computer is actually a virtual (cluster) instance.  Since pictures are better than words (and I'm not far off one thousand already), here is a diagram that explains what I'm talking about.

Default Implementation of Microsoft.Windows.Computer discovery model

The diagram above actually rolls several different property discoveries into one object, but hopefully you get the idea.  Now, if I were to extend (via the SCOM GUI) or derive (MP authoring) the Computer class with my own custom version containing a new property, I would only be responsible for discovering the existence of my class and any new custom properties I'd added.  Indeed, if I were following this approach, I'd implement my property discovery as a second discovery, so that any class that extends or derives from MY class in the future also benefits.

The diagram below shows this: I have added a discovery (which perhaps targets the computer and looks for the presence of a "System Owner" registry key) that is responsible for creating my custom class, and another which discovers the custom attributes I've added (it reads the above registry key and populates that value onto the object).  It might look like a lot of work in the diagram, but honestly this is very simple to do in the authoring console.

Custom Attribute extension and discovery

For those of you wondering how to make a discovery submit property information for an existing object, it's extremely simple.  Just discover your class with its key properties again (you don't need to re-test for your object; you know it exists already), along with all your newly found properties, and reference counting (something I talked about in my previous blog post) will take care of the rest.  Don't worry about blanking out any existing properties that you don't include; SCOM will leave those intact.  During un-discovery, SCOM is also smart enough to handle your self-referential discovery and make sure the object isn't perpetually discovered once your component ceases to exist.
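
As a sketch, the discovery data that the property discovery submits might look like the following. This uses the usual class/instance-settings mapping shape from the built-in discovery data sources; the IDs and the SystemOwner property are hypothetical, and I'm assuming the class is hosted on a Windows Computer:

    <ClassId>$MPElement[Name="MyMP.MyRole"]$</ClassId>
    <InstanceSettings>
      <Settings>
        <!-- The key property again, so SCOM knows which existing instance
             this is. Since the discovery targets the class itself, the value
             can be read straight off the target object's host. -->
        <Setting>
          <Name>$MPElement[Name="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Name>
          <Value>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Value>
        </Setting>
        <!-- The newly discovered non-key property. Properties left out of
             this list are left intact on the existing instance. -->
        <Setting>
          <Name>$MPElement[Name="MyMP.MyRole"]/SystemOwner$</Name>
          <Value>$Data/Values/SystemOwner$</Value>
        </Setting>
      </Settings>
    </InstanceSettings>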

As an added bonus, implementing the discovery model in this fashion also allows you to separate out the discovery interval of your class from the discovery interval of your properties.  This can help reduce/prevent discovery churn and will allow your MP users to further customize their monitoring experience.

Caveats and Considerations

As always, it seems there are some considerations you should take into account before doing this.  The first, and most important, is performance.  Whilst generally speaking performing two sets of queries (one to discover the object, one to populate its properties) isn't that taxing on most data sources, you might want to think twice about this if you are using a remote data source that isn't very well optimised.  Most of the time, if you are doing WMI or SQL queries remotely, remember that your second query will usually be much cheaper, since rather than searching for a set of matching criteria you are only looking for records that match your object's ID.  Likewise, your first query, which establishes the existence of the object, can be optimised not to request columns/properties that only the second discovery needs.
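
For example, with a WMI data source the two queries might look something like this sketch (the service name, class ID and ServiceName property are invented for illustration):

    <!-- Discovery 1 (existence): requests only the columns needed to identify instances -->
    <Query>SELECT Name FROM Win32_Service WHERE Name LIKE 'MyApp%'</Query>

    <!-- Discovery 2 (properties): a keyed lookup against one known instance,
         pulling back the extra columns that only this discovery needs -->
    <Query>SELECT Name, StartMode, PathName FROM Win32_Service
           WHERE Name = '$Target/Property[Type="MyMP.MyRole"]/ServiceName$'</Query>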

As I mentioned above, if performance is a concern you can control the intervals of your two discoveries and set them to something suitable.  Remember, this is discovery, not monitoring; you don't need to update properties every 10 minutes.

The second consideration you may want to take into account is complexity.  If you are implementing a management pack with dozens of objects using custom data sources, you may not want to implement an extra set of discoveries, especially if your objects only have a handful of properties.  That's fine; you just have to make sure you balance the demands against the rewards of taking the above model on board.  If you don't see yourself deriving lots of classes, or your customers wanting to extend your classes with your support, then skipping the extra discoveries simply saves you unnecessary effort.

In my opinion, though, it's nearly always worth it.  Feel free to leave a comment if you'd like to see a specific example of this kind of thing implemented (either using one of the SCOM built-in modules, or a custom script).

Posted in Computing | Tagged: , , , | Leave a Comment »