.\Matthew Long

{An unsorted collection of thoughts}


SCOM – Updated Visual Studio Authoring Extensions released

Posted by Matthew on October 21, 2013

Hi Guys,

A new version of the System Center Visual Studio Authoring Extensions has now been released, featuring many bug fixes, improvements and enhancements.  One of these that I’ve long been awaiting is…

Visual Studio 2012 and 2013 are now supported!

This means all of the new VS 2012/13 features that you’ve been waiting to use are now available.  I know items such as file previewing and the source control enhancements have been high on many authors’ wish lists.

Download link : http://www.microsoft.com/en-us/download/details.aspx?id=30169

You’ll need to uninstall the previous version if you’ve already got it installed, as the installer won’t perform an upgrade.  (At the same time, I also uninstalled Visual Studio 2010, as I only had that version installed for MP authoring; everything else I’m doing has since moved on to more recent versions of VS.)

Some of the notable changes are (in no particular order):

  • Support for Visual Studio 2012 and 2013.
  • Language packs other than ENU are now built correctly.
  • Picker dialog boxes now traverse project references properly, as does the Find All References tool.
  • No more access denied error messages when trying to build a 2012 MP that has been auto-deployed to a management group.
  • Import wizard can now resolve MP references inside MP bundles.
  • Snippet Editor no longer displays duplicate columns when an ID has been used multiple times in the snippet template.
  • Monitor templates now set Auto resolve to “True” by default and Severity to “MatchMonitorHealth”.
  • MP Context Parameter replacements can now be used in module parameters that have non-string data types.
  • MP Simulator can now simulate Agent Task workflows.
  • MPBA “Resolve” option now works correctly with generated fragment files.

They’ve also changed the project templates to make it a bit more explicit which versions of Operations Manager each project type supports, and added 2012 SP1 and 2012 R2 explicitly as new project types.  Sadly there are still no XML templates provided for data source, monitor type, probe or condition detection modules.

VSAE Project Types

Full change list: http://social.technet.microsoft.com/wiki/contents/articles/20357.whats-new-in-visual-studio-authoring-extensions-2013.aspx

Remaining Issues:

  • MPBA Tool still doesn’t resolve Alert Strings where the default language is not the workstation language (switch your machine’s language and all the missing alert string messages will disappear).
  • No intellisense support when configuring module parameters outside of Template groups (painful when creating custom modules).
  • Go To Definition doesn’t function if you are already previewing an existing item (essentially only works when called from fragments in your projects).
  • Informational schema errors are still logged when you configure module parameters outside of a template group.
  • When creating template snippets, intellisense still only provides snippet specific options, which can make authoring snippets challenging.

Happy Authoring!

Posted in Computing | 5 Comments »

Switch a Visual Studio Authoring Extension MP project between SCOM 2007 and SCOM 2012

Posted by Matthew on February 8, 2013

A couple of times now I’ve started a new MP project in Visual Studio using the Operations Manager Visual Studio Authoring Extensions, and about 45 minutes into my development realized I’ve created the wrong project type.  There have also been a few occasions where requirements change and it makes more sense to change the project from a Core Monitoring (2007) management pack into a 2012 “add-on” pack.

It’s actually quite a simple change to make, provided you know which files to edit.

Project File

  1. Open the .mpproj file that represents your management pack project (NOT the solution file).
  2. Locate the MpFrameworkVersion element in the first PropertyGroup section, and modify it according to the version you want the project to build:
    1. For 2012, use <MpFrameworkVersion>v7.0</MpFrameworkVersion>
    2. For 2007, use <MpFrameworkVersion>v6.1</MpFrameworkVersion>
  3. If downgrading from 2012 to 2007, under the ItemGroup element containing the MP Reference elements, remove any references to MPs that don’t exist in 2007.
  4. Save the file.
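For reference, here’s a minimal sketch of what that part of an .mpproj file looks like (the surrounding MSBuild elements are standard; the other properties in the group are omitted here):

```xml
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Name, Version and other properties omitted -->
    <!-- v7.0 builds a SCOM 2012 MP; v6.1 builds a SCOM 2007 MP -->
    <MpFrameworkVersion>v7.0</MpFrameworkVersion>
  </PropertyGroup>
  <!-- ItemGroups containing MP references and fragment files follow -->
</Project>
```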

Management Pack Fragments

Next, you’ll need to update the SchemaVersion attribute of the ManagementPackFragment element at the start of every .mpx file in your project.  The good news is that you can just use Visual Studio’s find and replace option to replace this in all files in the solution simultaneously!

  1. Open the Project/solution in visual studio.
  2. Open one of the .mpx files, and hit Ctrl+H to open the “Find and Replace” window.
  3. To go from 2007 to 2012, in the “Find what” box enter <ManagementPackFragment SchemaVersion="1.0" and in “Replace with” enter <ManagementPackFragment SchemaVersion="2.0"
  4. Otherwise, do the reverse: in the “Find what” box enter <ManagementPackFragment SchemaVersion="2.0" and in “Replace with” enter <ManagementPackFragment SchemaVersion="1.0"
  5. Make sure “Look in” is set to Current Project.
  6. You can now either click “Find Next” and “Replace” repeatedly if you aren’t comfortable letting VS just replace everything, or hit “Replace All”.
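Once the replacement is done, the opening element of each fragment in a 2012 project should look something like this (the namespace declaration shown is the one the VSAE templates typically generate; yours may differ):

```xml
<ManagementPackFragment SchemaVersion="2.0" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <!-- TypeDefinitions, Monitoring, Presentation, etc. -->
</ManagementPackFragment>
```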

Save all files in your project, and attempt a build.  If you get any build errors relating to Management pack references or Schema Versions, just look for the reference/file that you missed and update the values.

Needless to say, you should always think carefully before you do this, and thoroughly test the MP to ensure that there aren’t any side effects from now using earlier/later versions of the default SCOM management packs.  Going from 2007 -> 2012 is safer as 2012 is backwards compatible and will support your previous module configurations.

Hope that helps!

Posted in Computing | 1 Comment »

Referencing MPBs and other SCOM 2012 Management Packs using Visual Studio Authoring Extensions

Posted by Matthew on January 29, 2013

One of the great features of the SCOM Visual Studio Authoring Extensions is the ability to easily view the definitions of content in sealed management packs you are referencing.  Sometimes (especially if you are exporting an MP from the SCOM management group for further customization) you’ll need to reference an MP that is installed in your SCOM management group but that isn’t included in the VSAE package, or isn’t available online.  Most likely this is a system MP or an MP that’s only available as part of a bundle (if your MP contains 2012 dashboards, it’s probably got a reference to Microsoft.SystemCenter.Visualization.Library or Microsoft.SystemCenter.Visualization.Internal).  How can you reference these?

Well if you’ve still got your SCOM installation media to hand you can find all of the MPs imported at installation in the ManagementPacks folder.  This can be great if you’ve got a required reference, or even just want to have a look at how the product group implemented some piece of functionality!

Note however that some of the MPs are wrapped up in MPBs (Management pack bundles).  It’s still perfectly possible to reference these in your project:

  1. Make sure you’re editing a 2012 MP project.  If you’ve chosen an Operations Manager Core MP, you’ll be unable to add a reference to MPBs as they didn’t exist at that time.
  2. In Solution Explorer (by default on the right panel) right click on “References”  and choose “Add References from Management Pack bundle…”
  3. Browse to and select your file (let’s say Microsoft.SystemCenter.Visualization.Internal.mpb from the SCOM 2012 install media)
  4. You’ll then be given a dialog detailing the MP(s) that were added from the bundle into the project.
  5. Now you can view the definition of any module you need from that MP via the Management Pack browser (View -> Management Pack Browser) or by selecting the module in-code and hitting F12 (go to definition hotkey).
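For context, once the bundle has been added, the reference ends up in your MP manifest just like any other sealed MP reference.  The alias and version below are illustrative (the dialog in step 4 shows the actual values); the PublicKeyToken shown is the standard Microsoft key:

```xml
<References>
  <Reference Alias="Visualization">
    <ID>Microsoft.SystemCenter.Visualization.Internal</ID>
    <Version>7.0.8560.0</Version>
    <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
  </Reference>
</References>
```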

Pretty simple procedure!  If you have accidentally created your MP project as an Operations Manager Core (2007) MP, you can modify your solution files to upgrade it.  Think carefully about doing this however – would you be better off including the extra functionality you need in an additional “feature” pack?  If you decide to go ahead and upgrade/downgrade, you can follow the steps in this blogpost.

Oh, and one final tip: if you don’t want to find yourself constantly keeping hold of the 2012 install media, you can just copy the MPs into the VSAE installation folder (they will then automatically be resolved when referenced).  You’ll find them in Program Files (x86)\System Center 2012 Visual Studio Authoring Extensions\References\OM2012.  To be safe, I’d advise against overwriting any of the existing MPs (you’ll get some conflicts).  Also, be careful about copying updated versions of MPs that come from CUs/SPs into this location, unless you don’t mind all the MPs you create requiring that updated version in future.  Not all customers have, or can install, the latest patches.

Hope that helps!

Posted in Computing | Leave a Comment »

SCOM : Revisiting the DayTimeExpression option of System.ExpressionFilter

Posted by Matthew on August 17, 2012

You may remember a few weeks ago I did an article on the flexibility and power of the System.ExpressionFilter.  In that post I commented that I wouldn’t go into any depth on the DayTimeExpression expression type, as you could accomplish most of the day/time comparison tasks with a System.ScheduleFilter instead.  Well, no sooner had I written that than I found myself authoring a workflow where that was not the case.

The customer was looking for a three state unit monitor, that had some complex health behavior:

  1. If the result of a scheduled job was successful, switch to a Healthy state.
  2. If the result of the job is a failure, switch to a Critical state.
  3. If, 15 minutes after the job has started, it is neither successful nor failed, switch to a Warning state.
    1. After 30 minutes, if it has still not succeeded, go into a Critical state.

Normally this kind of complex logic would be included as part of a custom script; however, in this case we were using a built-in module to perform the query of our job status, so it seemed very inefficient to then have to go into a script just to calculate the time differential (especially when, depending on the return code, the time calculation is irrelevant).

Why couldn’t we use a System.ScheduleFilter?

The normal approach when doing time comparisons is to use a System.ScheduleFilter to block the workflow from posting items to the appropriate health states in a MonitorType.  However, here we had a combination approach that needed boolean AND and OR operators, which the ScheduleFilter cannot provide, and the SCOM workflow engine only allows a single workflow branch to lead to a given health state.

Our module didn’t return the current time as an additional property, so in order to get the current date I decided to just use the timestamp found on every DataItem in a SCOM workflow.  This required a little XPath knowledge, however, as nearly all of the XPath query examples I’ve seen documented for SCOM access the child properties of a DataItem, not its own attributes.

So without further ado, here are the expressions I eventually came up with…

Healthy Expression

    <RegExExpression>
      <ValueExpression>
        <XPathQuery Type="String">//*[local-name()="StdOut"]</XPathQuery>
      </ValueExpression>
      <Operator>MatchesRegularExpression</Operator>
      <Pattern>^(.*SU)$</Pattern>
    </RegExExpression>

Nothing groundbreaking here: if at any time the result code ends in “SU” (matched by the pattern ^(.*SU)$), we know our job succeeded.  (Those of you who have done cross-platform monitoring with SCOM may recognize the XPath query I’m using; however, that’s not really the focus of this article.)

Warning Filter

    <And>
      <Expression>
        <RegExExpression>
          <ValueExpression>
            <XPathQuery Type="String">//*[local-name()="StdOut"]</XPathQuery>
          </ValueExpression>
          <Operator>DoesNotMatchRegularExpression</Operator>
          <Pattern>^(.*SU|.*FA|[\s]*[\s]*)$</Pattern>
        </RegExExpression>
      </Expression>
      <Expression>
        <DayTimeExpression>
          <ValueExpression>
            <XPathQuery Type="DateTime">./@time</XPathQuery>
          </ValueExpression>
          <StartTime>79200</StartTime>
          <EndTime>80100</EndTime>
          <Days>62</Days>
          <InRange>true</InRange>
        </DayTimeExpression>
      </Expression>
    </And>

The first part of our Warning expression (lines 2-9) is just checking that the job status code is not one that instantly pushes us into another health state (remember, a return code of Failure means an automatic Critical health state, regardless of time).  Lines 12-20 are what we are interested in here.  We’ve specified that we want to do a DayTime comparison, and we are going to compare the “./@time” XPath query against a time range.  I’ll come back to the XPath in a second; first, let me explain the rest of the parameters.

  • StartTime is the start of the time range, specified in Seconds from Midnight.
  • EndTime is the end of the time range, specified in Seconds from Midnight.
  • Days is the usual Days of the week mask used by SCOM.  To specify which days are in the range, simply add up the corresponding values from the table below (so Mon through Friday is 62).
    • Sunday = 1
    • Monday = 2
    • Tuesday = 4
    • Wednesday = 8
    • Thursday = 16
    • Friday = 32
    • Saturday = 64
  • InRange is a boolean that specifies whether the expression evaluates to TRUE when the input time is inside the range (true) or outside it (false).

So what I’ve done here is specify that the warning filter will match only during the 15-30 minute period after the job has started (remember, this is a scheduled job, so it starts at the same time each day), and only if the status code is not SU (Healthy) or FA (Critical).

The XPath query of “./@time” on line 14 is a little weird, and is down to the implementation of XPath in the ExpressionFilter module.  Essentially it’s already scoped to look at the elements within a workflow data item, so we have to explicitly step back up the tree and ask for the time attribute of the DataItem itself.  The value you use to perform your date/time calculation must be in DateTime format (e.g. 2012-08-17T14:23:16.291Z) for this to succeed; thankfully, the time value on DataItems always is.
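To make that concrete, here’s roughly what the wrapper of a workflow data item looks like (the type attribute value is illustrative, and {guid} stands in for a real health service ID; the time attribute is what ./@time reaches):

```xml
<DataItem type="System.PropertyBagData" time="2012-08-17T14:23:16.291Z" sourceHealthServiceId="{guid}">
  <!-- The module's output elements live here; XPath queries are scoped
       to this level, so ./@time steps back up to the DataItem itself -->
</DataItem>
```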

Critical Filter

    <Or>
      <Expression>
        <RegExExpression>
          <ValueExpression>
            <XPathQuery Type="String">//*[local-name()="StdOut"]</XPathQuery>
          </ValueExpression>
          <Operator>MatchesRegularExpression</Operator>
          <Pattern>^(.*FA|[\s]*[\s]*)$</Pattern>
        </RegExExpression>
      </Expression>
      <Expression>
        <And>
          <Expression>
            <RegExExpression>
              <ValueExpression>
                <XPathQuery Type="String">//*[local-name()="StdOut"]</XPathQuery>
              </ValueExpression>
              <Operator>DoesNotMatchRegularExpression</Operator>
              <Pattern>^(.*SU|[\s]*[\s]*)$</Pattern>
            </RegExExpression>
          </Expression>
          <Expression>
            <DayTimeExpression>
              <ValueExpression>
                <XPathQuery Type="DateTime">./@time</XPathQuery>
              </ValueExpression>
              <StartTime>78300</StartTime>
              <EndTime>80100</EndTime>
              <Days>62</Days>
              <InRange>false</InRange>
            </DayTimeExpression>
          </Expression>
        </And>
      </Expression>
    </Or>

And finally, we have the Critical health state expression filter.  This uses an OR condition to short-circuit and allow us to go straight to a critical health state if the job result is a failure, without having to check the time.  Otherwise, we carry on and evaluate the AND expression, making sure that the job result isn’t a Success code and that we are now outside our 30 minute window since the job started.  Note that the InRange parameter on line 30 is now false, so the expression will only evaluate to true if the current time is outside our 0-30 minute window since the job started.

And there we have it!  Unfortunately the SCOM 2007 Authoring Console doesn’t let you use the wizard when working with DayTime/Exists expressions, so you’ll need to hit the “Edit” button and get your hands dirty with the XML yourself.  My suggestion would be to create your logic using the wizard with a placeholder line where you want your DayTimeExpression, and then edit it by hand to include that line afterwards.  Just don’t try to reopen your ExpressionFilter in the wizard using the “Configure” button afterwards, as it will break your expression.  You’ll have to keep using the Edit button and editing it by hand.

As always, I hope that helped someone out, and feel free to post a comment if I haven’t made anything clear or you have a practical example you’d like to work through.

Posted in Computing | 1 Comment »

The SCOM Unsung Hero – Using the System.ExpressionFilter Module

Posted by Matthew on July 3, 2012

I’ve decided to write a blogpost as a tribute to the unsung hero of Operations Manager: the one module that gets used in virtually every workflow but is rarely the focus of attention.  Without this module, cookdown would be mostly impossible, and whether you are creating your own modules or using the wizards in the SCOM console, Authoring Console or Visio Extension, it’s always there to assist you.  I’m talking, of course, about the System.ExpressionFilter.

What is it?

The System.ExpressionFilter is a condition detection module and sibling to the System.LogicalSet.ExpressionFilter.  Its function is to examine items in the Operations Manager workflow and either pass them on or remove (drop) them from the workflow.  If no items match the filter at all, the workflow terminates.

It only has a single configuration parameter, but it’s a very powerful one: the ExpressionType configuration.  In reality, most of this article will be about the syntax of ExpressionType.

It’s also a very lightweight module, and should be used whenever you need to do any kind of determination or filtering.  Whenever you use a service monitor, event log reader or SNMP probe, the parameters you fill in are nearly all sent to this module, not to the data source!

When should you use it?

  • 90% of the time, if you want to implement cookdown for your workflow, you’ll be using this module.
  • You want to add further filtering onto an existing rule in an unsealed management pack.
  • You want to perform filtering of any kind of data.
  • You are implementing a MonitorType (not the same thing as a monitor)

Configuration

The System.ExpressionFilter only takes a single parameter, of type ExpressionType.  This is an inbuilt data type in SCOM that allows you to specify some kind of evaluation criteria that operations manager will run on every item sent to the module.  It should be noted that each item will be evaluated individually (if you need to do them as a linked set, see the System.LogicalSet.ExpressionFilter).
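Before diving into the expression syntax, here’s a minimal sketch of how the module is typically declared inside a rule or monitor type (FilterHealthy is a made-up ID; System! is the conventional alias for the System.Library reference):

```xml
<ConditionDetection ID="FilterHealthy" TypeID="System!System.ExpressionFilter">
  <Expression>
    <!-- Your ExpressionType criteria go here -->
  </Expression>
</ConditionDetection>
```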

Expression filters are very complex types.  They support nested expressions using the And and Or group constructs, and you also have access to Not.  Below I’ll give you a sample of the type you are going to use 75% of the time.

 SimpleExpression – Compare output of PropertyBagScript to value

<Expression>
       <SimpleExpression>
              <ValueExpression>
                     <XPathQuery Type="String">Property[@Name='Status']</XPathQuery>
              </ValueExpression>
              <Operator>Equal</Operator>
              <ValueExpression>
                     <Value Type="String">Healthy</Value>
              </ValueExpression>
       </SimpleExpression>
</Expression>

This is the most common filter you’ll use in SCOM.  Its purpose is to compare the output of a module (in this case, a property bag value called “Status”) with a value.  This can either be a static value, or one passed into the workflow as part of a $Config/$ parameter.

We start off by opening with an <Expression> tag, which is always the opening and closing tag for each evaluation.  Then we’ve stated on line 2 that we want to use a SimpleExpression, which just does a straight comparison between two items (the first and second ValueExpression).  The valid operators for use with a SimpleExpression are:

  • Equal
  • NotEqual
  • Greater
  • Less
  • GreaterEqual
  • LessEqual

Note that the operators are case-sensitive, so they need to be entered exactly as above.  Finally, our ValueExpressions (the left and right sides of the comparison) are either of type Value or XPathQuery.  You use Value for static values or $Config/$ and $Target/$ parameters, and XPathQuery when you want to examine the output of the previous module.

Finally you’ll note that both Value and XPathQuery have a Type attribute – SCOM will attempt to cast the data into that type before performing the comparison.  So if you are comparing two numbers, make sure you have the type set to Integer; otherwise SCOM will compare the strings ‘3’ and ‘86’ lexicographically, which probably isn’t your intent.  The available types are:

  • Boolean
  • Integer
  • UnsignedInteger
  • Double
  • Duration
  • DateTime
  • String

The SCOM 2007 Authoring console will by default always set the type to “String”, so keep an eye on that.  Also, if the type conversion fails, SCOM is going to throw an error into the event log and the item will not be processed.
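Putting those pieces together, here’s a sketch of a numeric threshold check, with the module’s output on the left and a $Config/$ parameter on the right (the property name FreeSpaceMB and the parameter ThresholdMB are invented for illustration):

```xml
<Expression>
       <SimpleExpression>
              <ValueExpression>
                     <XPathQuery Type="Integer">Property[@Name='FreeSpaceMB']</XPathQuery>
              </ValueExpression>
              <Operator>Less</Operator>
              <ValueExpression>
                     <Value Type="Integer">$Config/ThresholdMB$</Value>
              </ValueExpression>
       </SimpleExpression>
</Expression>
```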

 Logical Operators – And, Or and Not

You can group expressions, and reverse their results, using the <And>, <Or> and <Not> expression elements.  They are implemented as wrappers around your <Expression></Expression> tags, and are themselves expressions!  Sounds complicated, but with an example it becomes much clearer:

<Expression>
     <And>
          <Expression>
               <!-- First expression here -->
          </Expression>
          <Expression>
               <!-- Second expression here -->
          </Expression>
     </And>
</Expression>

So above we have two expressions that must both evaluate to true in order for the whole outer expression to be true.  The construct is the same for <Or> and <Not>, and you can even nest groups within groups for truly powerful expressions!  <Not> may only contain a single <Expression> (which, of course, could be a group!), but <And> and <Or> can contain two or more expressions if you need to group on multiple items.

One important thing to note is that groups support short-circuiting.  This means that if SCOM examines one expression in an And/Or group and can already deduce the overall result (perhaps we are using And and the first item is false), it won’t bother to evaluate the remaining expressions, saving time and improving performance.  So nest away!
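For completeness, here’s what a <Not> wrapper looks like; it takes exactly one inner expression, which can itself be a group:

```xml
<Expression>
     <Not>
          <Expression>
               <!-- The single expression (or group) to invert -->
          </Expression>
     </Not>
</Expression>
```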

Exists – Does my data item contain a property?

Much like a type conversion failure, if an XPathQuery value (as part of a SimpleExpression) doesn’t resolve to anything, say because the data item doesn’t contain an expected property, then the expression will fail and that item will be dropped.  So if you are dealing with a property that doesn’t always show up (regardless of whether it has a value; SCOM can deal with empty/null properties), you’d be wise to use the <Exists> expression.  It’s also useful if you don’t care about the value of a property, merely whether it exists or not.

<Expression>
       <Exists>
              <ValueExpression>
                     <XPathQuery Type="Integer">Params/Param[1]</XPathQuery>
              </ValueExpression>
       </Exists>
</Expression>

Here we are checking to see if an event log entry has at least 1 parameter.  You can also use <Value> instead of XPathQuery if you wanted to check to see if a $Config/$ parameter exists so you know if an optional parameter on your module has been specified or not.
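For the $Config/$ variant just mentioned, a sketch might look like this (TimeoutSeconds is a hypothetical optional parameter on your module):

```xml
<Expression>
       <Exists>
              <ValueExpression>
                     <Value Type="String">$Config/TimeoutSeconds$</Value>
              </ValueExpression>
       </Exists>
</Expression>
```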

If you need to check the value of a property that may or may not exist, you’ll want to take advantage of the short-circuiting of the And group by combining an Exists check with your value check.  Make sure the Exists expression comes first in the group; that way, if the property doesn’t exist, SCOM won’t try to read it (which, as stated above, would cause the expression to fail and the item to be dropped).  I’ve included an example of this below!

<Expression>
     <And>
        <Expression>
               <Exists>
                      <ValueExpression>
                             <XPathQuery Type="Integer">Params/Param[1]</XPathQuery>
                      </ValueExpression>
               </Exists>
        </Expression>
        <Expression>
               <SimpleExpression>
                      <ValueExpression>
                             <XPathQuery Type="Integer">Params/Param[1]</XPathQuery>
                      </ValueExpression>
                      <Operator>Less</Operator>
                      <ValueExpression>
                              <Value Type="Integer">5</Value><!-- illustrative threshold -->
                      </ValueExpression>
               </SimpleExpression>
        </Expression>
     </And>
</Expression>

Regular Expressions

If you want to do powerful (or simple!) regular expression comparisons, then the ExpressionFilter has got you covered.  I’m not going to go into a huge amount of depth on this one, because by now you should be getting an idea of how this works.  I’ll just show you the syntax and then list the regex pattern styles you can use.

<Expression>
       <RegExExpression>
              <ValueExpression>
                     <XPathQuery Type="String">EventPublisher</XPathQuery>
              </ValueExpression>
              <Operator>ContainsSubstring</Operator>
              <Pattern>Microsoft</Pattern>
       </RegExExpression>
</Expression>

ValueExpression is the same as with a SimpleExpression, so you can compare against incoming data items on the workflow or input parameters.  Operator allows you to specify what type of matching you’d like to perform:

  • MatchesWildcard – Simple wildcard matching using the wildcards below:
    • # – Matches 0-9
    • ? – Any single character
    • * – Any sequence of characters
    • \ – Escapes any of the above
  • ContainsSubstring – Standard substring containment; true if the pattern exists anywhere in the string (implemented as ‘^.*pattern.*$’)
  • MatchesRegularExpression – Full regular expression support via .Net (note this is not the same as Group Calculation modules, which use Perl syntax).
  • DoesNotMatchWildcard – Inverse of MatchesWildcard.
  • DoesNotContainSubstring – Inverse of ContainsSubstring.
  • DoesNotMatchRegularExpression – Inverse of MatchesRegularExpression.

Finally, Pattern allows you to specify your regular expression.  Note that you don’t need to wrap it in quotes of any kind.  Obviously you can nest these in groups if you need to perform multiple regular expression checks.

Oh, and if you have any questions on Regular expressions, ask Pete Zerger.  He loves regular expressions (you can tell him I sent you ;))!

DayTimeExpression

Finally we have DayTimeExpression, which is used to determine whether a datetime falls inside or outside a given range.  This one is less used, as we have another built-in module (System.ScheduleFilter) that covers this kind of comparison and is a bit more powerful, since it can use the current time of the workflow rather than having to get that value from your data item.  DayTimeExpression only allows for day (Sunday-Saturday) and time comparisons.  There’s no ability to specify exceptions or different windows for each day either, something the ScheduleFilter does implement.

I’m not going to go into detail on it here, but you can find documentation for it on MSDN (see the Links section below).

Example Scenarios

Essentially, any time you want to filter or compare a value you can use this module!  Normally you’ll be using it to either manage output from a datasource or further scope a rule so that it only alerts when it meets your extra criteria.

The other time you’ll commonly use it is when implementing your own MonitorType.  You’ll add one System.ExpressionFilter for each health state the monitortype provides, and then set the filters up so that they use mutually exclusive values to determine what health state your system is in.  I won’t drag this post out any further with examples however, as there are plenty on the web of this already and they are always quite specific to the scenario.

Links

MSDN documentation – http://msdn.microsoft.com/en-us/library/ee692979.aspx

Hope this proved helpful, and as always, if you have any specific questions feel free to post a comment with what you need and I’ll see what I can do!

(Sorry Pete!)

Posted in Computing | 4 Comments »

Query a database without scripting as part of SCOM monitoring – The System.OLEDBProbe module

Posted by Matthew on June 23, 2012

A fairly common monitoring scenario is the need to query a database somewhere (normally SQL Server, but as long as you have a relevant OLEDB driver on your agents, whatever you need!) and, based on the results of the query, trigger some kind of workflow.  I’ve seen it used with monitors, alert and collection rules, and even discoveries!

Obviously you can do this via script, but perhaps you have a simple query and no need for any post-query processing (often this can be done as part of your query anyway).  In these cases, you can use a built-in module called System.OLEDBProbe to query the DB and do the lifting for you!

What is it?

The System.OLEDBProbe module is a built-in probe module that uses an OLEDB provider/driver on the system to make a database query from the hosting agent.  The database, query and other settings are defined via probe configuration and do not need to be hard-coded into the MP (though obviously the query usually is).  The query can be modified using context parameter replacement prior to execution, so you can dynamically insert information into it if need be.  It supports integrated as well as manually specified credentials, usually via Run As profiles.

It also has the nifty ability to retrieve the database settings from specified registry keys, which can avoid the need to go out and discover those pieces of information. This makes it quite suitable for attaching onto existing classes from other management packs.

When you should use it

  • You know in advance which columns you need to access.
  • You know how to implement your own module.
  • You have a suitable OLEDB provider on your agent (common Windows providers are included by default).
  • You don’t need to perform complex processing on each returned row.

Configuration

Required

  • ConnectionString – The connection string you wish to use to connect to the database.  On Windows Server 2003 or later, this is encrypted by the module.  If you are using integrated security, you do not need to specify credentials as long as you are using a Run As profile with this module (but make sure you flag the connection as using integrated security!).
  • Query – The query you wish to run against the database. Supports context parameter replacement, so you can use $Config/$ variables etc in your query.

Optional

  • GetValue – (true/false) Whether the results of the query should be returned or not (set to false if you just want to connect to the DB, and you don’t care about the results of the query).
  • IncludeOriginalItem – (true/false) Determines if the resulting data item(s) will contain the item that originally triggered this module.  Note that the data is returned as CData, so you won’t be able to perform XPath queries directly against it.
  • OneRowPerItem – (true/false) Should all resulting data be returned in a single data item, or 1 data item returned for each row in the query results?  Normally setting this to true is more useful, as you’ll often want a condition detection to process each row individually, and you won’t know the order (or number) of resulting rows.
  • DatabaseNameRegLocation – Registry key where we can find the database name.  Must be under the HKLM hive.
  • DatabaseServerNameRegLocation – Registry key where we can find the database server name (and instance, if required).  Must also be under the HKLM hive.

SCOM 2007 R2 and above only

  • QueryTimeout – (Integer) Optional parameter that allows you to specify a query timeout.
  • GetFetchTime – (true/false) Optional parameter that allows you to specify that the resulting data item(s) should contain the fetch time for the query.

Personally, I tend to omit the R2-only parameters, as they usually don’t add much to the workflow and will restrict which environments your MP can run in.  Obviously, if you are making this MP for in-house use you are free to implement against whatever version of SCOM you have!

An important parameter is OneRowPerItem.  If set to false, the data item you get back will look like the snippet below (I’ve omitted the other elements to save space):


<RowLength></RowLength>
<Columns>
  <!-- Data for first row returned -->
  <Column>Data in first column</Column>
  <Column>Data in second column</Column>
</Columns>
<Columns>
  <!-- Data for second row returned -->
  <Column>Data in first column</Column>
  <Column>Data in second column</Column>
</Columns>

This can make processing the results in further modules a pain, since your XPath query will have to specify exactly which row and column you want to access. If you instead set OneRowPerItem to true, you’ll get multiple output items and can filter them using an expression filter with a simple syntax such as $Data/Columns/Column[1]$. You may also wish to filter on the RowLength property to establish whether any rows were returned. Remember that the module will still return a data item if it succeeded in connecting but doesn’t have rights to query, so check that data was returned before you try to do something with it!
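As a rough sketch (the module ID and the literal value “Error” are hypothetical), an expression filter that only passes data items which returned at least one row and whose first column matches a value might look like this:

```xml
<ConditionDetection ID="FilterResults" TypeID="System!System.ExpressionFilter">
  <Expression>
    <And>
      <!-- Only consider items where the query actually returned rows -->
      <Expression>
        <SimpleExpression>
          <ValueExpression><XPathQuery Type="Integer">RowLength</XPathQuery></ValueExpression>
          <Operator>Greater</Operator>
          <ValueExpression><Value Type="Integer">0</Value></ValueExpression>
        </SimpleExpression>
      </Expression>
      <!-- Check the first column of the row (OneRowPerItem = true) -->
      <Expression>
        <SimpleExpression>
          <ValueExpression><XPathQuery Type="String">Columns/Column[1]</XPathQuery></ValueExpression>
          <Operator>Equal</Operator>
          <ValueExpression><Value Type="String">Error</Value></ValueExpression>
        </SimpleExpression>
      </Expression>
    </And>
  </Expression>
</ConditionDetection>
```

This assumes OneRowPerItem is true, so each data item carries a single row and the XPath stays simple.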

Example scenarios

Normally if I’m going to use an OleDbProbe to access a database repeatedly I’ll create my own probe module that sets up the settings I’m going to need and is already wired to my MP’s Run As profile for DB access.  That way I don’t have to keep specifying them over and over again.  Below is a sample where I’ve done this, configuring everything other than the query (which is passed in) for a SQL database probe.  Now all my monitors and rules that use it automatically know where to locate the DB and which query options (and credentials) to use.

<ProbeActionModuleType ID="DBProbe.Library.Probe.DatabaseOledbQuery" Accessibility="Public" RunAs="DbProbe.Library.SecureReference.Database" Batching="false" PassThrough="false">
    <Configuration>
        <xsd:element minOccurs="1" name="Query" type="xsd:string" />
        <xsd:element minOccurs="1" name="OneRowPerItem" type="xsd:boolean" />
    </Configuration>
    <ModuleImplementation Isolation="Any">
        <Composite>
            <MemberModules>
                <ProbeAction ID="PassThru" TypeID="System!System.PassThroughProbe" />
                <ProbeAction ID="OledbProbe" TypeID="System!System.OleDbProbe">
                    <ConnectionString>Provider=SQLOLEDB;Integrated Security=SSPI</ConnectionString>
                    <Query>$Config/Query$</Query>
                    <GetValue>true</GetValue>
                    <IncludeOriginalItem>false</IncludeOriginalItem>
                    <OneRowPerItem>$Config/OneRowPerItem$</OneRowPerItem>
                    <DatabaseNameRegLocation>SOFTWARE\MyRegKey\Database\DatabaseName</DatabaseNameRegLocation>
                    <DatabaseServerNameRegLocation>SOFTWARE\MyRegKey\Database\DatabaseServerName</DatabaseServerNameRegLocation>
                </ProbeAction>
            </MemberModules>
            <Composition>
                <Node ID="OledbProbe">
                    <Node ID="PassThru" />
                </Node>
            </Composition>
        </Composite>
    </ModuleImplementation>
    <OutputType>System!System.OleDbData</OutputType>
    <TriggerOnly>true</TriggerOnly>
</ProbeActionModuleType>

Here I’ve done the same thing, only without using registry keys to specify the location of my DB.  Normally I’d pass the DB details from my targeted class as I’ll have some property that has been discovered defining where the database is.

<ProbeActionModuleType ID="DBProbe.Library.Probe.DatabaseOledbQuery" Accessibility="Public" RunAs="DbProbe.Library.SecureReference.Database" Batching="false" PassThrough="false">
    <Configuration>
        <xsd:element minOccurs="1" name="DatabaseServer" type="xsd:string" />
        <xsd:element minOccurs="1" name="DatabaseName" type="xsd:string" />
        <xsd:element minOccurs="1" name="Query" type="xsd:string" />
        <xsd:element minOccurs="1" name="OneRowPerItem" type="xsd:boolean" />
    </Configuration>
    <ModuleImplementation Isolation="Any">
        <Composite>
            <MemberModules>
                <ProbeAction ID="PassThru" TypeID="System!System.PassThroughProbe" />
                <ProbeAction ID="OledbProbe" TypeID="System!System.OleDbProbe">
                    <ConnectionString>Provider=SQLOLEDB;Server=$Config/DatabaseServer$;Database=$Config/DatabaseName$;Integrated Security=SSPI</ConnectionString>
                    <Query>$Config/Query$</Query>
                    <GetValue>true</GetValue>
                    <IncludeOriginalItem>false</IncludeOriginalItem>
                    <OneRowPerItem>$Config/OneRowPerItem$</OneRowPerItem>
                </ProbeAction>
            </MemberModules>
            <Composition>
                <Node ID="OledbProbe">
                    <Node ID="PassThru" />
                </Node>
            </Composition>
        </Composite>
    </ModuleImplementation>
    <OutputType>System!System.OleDbData</OutputType>
    <TriggerOnly>true</TriggerOnly>
</ProbeActionModuleType>

Simple/Specified Authentication

If you can’t (or don’t want to) use integrated security, you can pass credentials using simple authentication and a Run As profile. DO NOT hard code the credentials – they would be stored in plain text and readable by anyone who can view the MP. The Run As profile credentials are encrypted, and the connection string is encrypted across the wire, but the MP itself isn’t!

The syntax for this (depending on your OLE DB provider; here it’s SQL Server) is shown below.  Obviously replace the italicised text with your values.

Provider=SQLOLEDB;Server=ServerName;Database=DatabaseName;User Id=$RunAs[Name="RunAsIdentifierGoesHere"]/UserName$;Password=$RunAs[Name="RunAsIdentifierGoesHere"]/Password$
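Putting that together with the composite probe from earlier, the member module’s connection string might look like the following sketch (it reuses the DbProbe.Library.SecureReference.Database Run As reference from the samples above; treat the IDs as illustrative):

```xml
<ProbeAction ID="OledbProbe" TypeID="System!System.OleDbProbe">
    <!-- Credentials come from the Run As profile bound to the secure reference,
         so nothing sensitive is stored in the MP itself -->
    <ConnectionString>Provider=SQLOLEDB;Server=$Config/DatabaseServer$;Database=$Config/DatabaseName$;User Id=$RunAs[Name="DbProbe.Library.SecureReference.Database"]/UserName$;Password=$RunAs[Name="DbProbe.Library.SecureReference.Database"]/Password$</ConnectionString>
    <Query>$Config/Query$</Query>
    <GetValue>true</GetValue>
    <IncludeOriginalItem>false</IncludeOriginalItem>
    <OneRowPerItem>$Config/OneRowPerItem$</OneRowPerItem>
</ProbeAction>
```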

Scenario 1 – Monitoring

Fairly simple, this one: you want to monitor a database for a certain condition.  Perhaps you are getting the result of a stored procedure, checking the number of rows in a table (using the database’s query language) or checking rows for a certain value (error logs, perhaps?).  Once queried, you pass the data items on to a System.ExpressionFilter module to filter for your desired criteria and alert as appropriate.
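To drive a monitor or rule on a schedule, you’d typically wrap the probe in a composite data source. A minimal sketch (the module IDs are hypothetical; it reuses the composite probe defined earlier):

```xml
<DataSourceModuleType ID="DBProbe.Library.DataSource.ScheduledQuery" Accessibility="Public">
    <Configuration>
        <xsd:element minOccurs="1" name="IntervalSeconds" type="xsd:integer" />
        <xsd:element minOccurs="1" name="Query" type="xsd:string" />
        <xsd:element minOccurs="1" name="OneRowPerItem" type="xsd:boolean" />
    </Configuration>
    <ModuleImplementation Isolation="Any">
        <Composite>
            <MemberModules>
                <!-- Fires on the configured interval to trigger the probe -->
                <DataSource ID="Scheduler" TypeID="System!System.SimpleScheduler">
                    <IntervalSeconds>$Config/IntervalSeconds$</IntervalSeconds>
                    <SyncTime />
                </DataSource>
                <!-- The composite OLE DB probe defined earlier in this post -->
                <ProbeAction ID="Probe" TypeID="DBProbe.Library.Probe.DatabaseOledbQuery">
                    <Query>$Config/Query$</Query>
                    <OneRowPerItem>$Config/OneRowPerItem$</OneRowPerItem>
                </ProbeAction>
            </MemberModules>
            <Composition>
                <Node ID="Probe">
                    <Node ID="Scheduler" />
                </Node>
            </Composition>
        </Composite>
    </ModuleImplementation>
    <OutputType>System!System.OleDbData</OutputType>
</DataSourceModuleType>
```

Your monitor type or rule then composes this data source with an expression filter for the condition you care about.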

Scenario 2 – Collection

Another fairly common one: do the same thing as above as part of an event collection or performance collection rule.  This could even mean ignoring the data entirely and just checking how long the query took to run, via the InitializationTime, OpenTime, ExecutionTime and FetchTime (if you’re on R2 or 2012) properties of the output data.  Following your System.OleDbProbe module you’ll usually use one of the mapper condition detection modules to generate event or performance data (these are quite well documented around the web and on MSDN – normally with property bags, but the principle is the same).
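For a performance collection rule, the mapper step might look something like this sketch (the Perf alias, the object/counter/instance names and the $Data/ExecutionTime$ XPath are assumptions – check the actual output data item linked below for the exact element names):

```xml
<ConditionDetection ID="PerfMapper" TypeID="Perf!System.Performance.DataGenericMapper">
    <!-- Object/counter/instance are whatever makes sense for your app -->
    <ObjectName>MyApp Database</ObjectName>
    <CounterName>Query Execution Time</CounterName>
    <InstanceName>$Config/DatabaseName$</InstanceName>
    <!-- Map a timing property from the OLE DB output into the counter value -->
    <Value>$Data/ExecutionTime$</Value>
</ConditionDetection>
```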

Scenario 3 – Discovery

Yep, you can even drive discovery from this.  Your table might contain pointers to apps in a grid or distributed system, groups you want to discover and monitor, or subprocesses you want to perform further monitoring on.  This is the most complex scenario and, as a tip, only really attempt it if you are looking to discover a single object per discovery.  Otherwise, use a script to process each result row in turn using ADO or some other API.
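If you do go down this route, the final step is usually a discovery mapper condition detection fed by the probe’s output. A heavily hedged sketch (the class, property and column index are all hypothetical placeholders):

```xml
<ConditionDetection ID="DiscoveryMapper" TypeID="System!System.Discovery.ClassSnapshotDataMapper">
    <ClassId>$MPElement[Name="MyApp.GridNode"]$</ClassId>
    <InstanceSettings>
        <Settings>
            <Setting>
                <!-- Populate a class property from a column of the query result -->
                <Name>$MPElement[Name="MyApp.GridNode"]/NodeName$</Name>
                <Value>$Data/Columns/Column[1]$</Value>
            </Setting>
        </Settings>
    </InstanceSettings>
</ConditionDetection>
```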

Links

MSDN Documentation – http://msdn.microsoft.com/en-us/library/ff472339.aspx

Sample of the output this module returns – http://msdn.microsoft.com/en-us/library/ee533760.aspx

Hopefully that’s given you some food for thought, and as always if you have a specific example you’d like me to walk you through, just post a comment and I’ll see what I can do!

Posted in Computing | Tagged: , , , , | 3 Comments »

How I added hundreds of Service Discoveries and Monitors into a SCOM MP in 20 minutes

Posted by Matthew on June 23, 2012

Recently a customer presented me with a huge list of Windows services that needed to be discovered and monitored in Operations Manager as part of an engagement. Many of these services were in-house/custom services or ones for which no management pack currently exists.

The normal approach would of course be to put together grouped classes and discoveries that make sense for each application; however, in this case time and project budget were against us, and moreover the customer simply didn’t have the information (or the need) to do anything other than simple up/down monitoring on each service.

So armed with a CSV file, the Visual Studio MP Authoring Extensions and a short amount of time, I set out to complete what would normally be a huge amount of work in a day.

The Solution – Snippets and Template groups

The Visual Studio MP authoring extensions have two features that, used in combination, allow you to take a template MP entity that you define (called a snippet) and then, by replacing placeholders with values from a table, automatically generate concrete versions of your template when the MP is built (template groups). The key here is that you can import the values into your template group from a CSV if you wish!

This technique works for both 2007 and 2012 MPs, so you can use it for building any kind of management pack.

Before we get started however, here are two disclaimers:

This post was written using a pre-release copy of the Visual Studio MP Authoring Extensions, so the features shown below are currently pre-release software.  Everything shown could be subject to change at release.

This is not necessarily the best way to discover and monitor services. A more ideal approach would be to evaluate the services and cluster discoveries based on more than a service installation status. Consolidated discoveries would most likely be more efficient and services should only be monitored if that monitoring is useful. Having said that, anything can be created using the techniques shown here and even using this method to implement 10 items will be much faster than doing it by hand.

Steps After the jump…



Preview – New Official System Center Operations Manager MP authoring tools

Posted by Matthew on April 30, 2012

Disclaimer: This article is based on a preview of pre-release software.  Features and information may change between the time this article was written and time of release.

At MMS this year Microsoft revealed their two new management pack authoring tools, which will upon release replace the venerable SCOM Authoring Console as Microsoft’s official management pack authoring tool. It’s immediately worth noting that the Authoring Console will still be available and supported, but it will not be receiving any updates and therefore will not be able to understand the new SCOM MP schema.

Previously the Authoring Console served a middle ground in terms of user MP authoring skill. Those who were brand new to MP development often found the tool confusing and the required knowledge level too steep. This was particularly common with ITPros who were looking to create a simple MP to monitor a “standard” Windows application.

However, the Authoring Console was also missing several capabilities that are required for complex management pack authoring scenarios. In his MMS session Baelson Duque talked about how SCOM’s own management pack to monitor itself is around 37,000 lines long, and authored by multiple people. As a management pack is a single file, this made development of the MP very difficult, and he admitted that many of the bugs in the management pack were introduced due to merge issues and copy-paste errors when duplicating module composition.

So to quote Brian Wren “rather than having one tool to rule them all” Microsoft have decided to instead develop two different tools to address both ends of the MP authoring skills spectrum. For ITPros who are looking to create simple/common management packs, we now have the Visio Management Pack Extension. For Developers who need the power of a full development environment we have the Visual Studio 2010 Management Pack Extension.

I’m going to put up full writeups on both tools, and documentation is already available on the Technet Wikis, but for now I’m going to briefly discuss both tools and their capabilities. It’s worth noting that both of these tools are V1 and as such there are a couple of limitations with both products that Microsoft are looking to address in future updates.

Visio Management Pack Authoring Extension

  • Requirements: Visio 2010 Premium
  • Intended audience: ITPros with basic to no knowledge of management pack XML
  • Expected Release: CTP to be released within the next few weeks, with RTM to follow within a couple of months.
  • Generated Schema: SCOM 2007 schema version

Features

  • Drag and drop interface
  • All classes, monitors, rules, and the relationships between objects are created by dragging stencils onto the Visio drawing and connecting them together. Templates are included for quickly standing up common app scenarios (such as a service with a reg discovery, event collection etc)
  • Smart configuration of shape data only asks for relevant params
  • Shapes have intelligent configuration fields that the author fills out using simple terms, with many more complicated settings inferred from those simple choices. Fields are hidden until their value is relevant.
  • No knowledge of discoveries required, other than discovery condition
  • The inclusion of a class shape automatically sets up a discovery under the hood, with common non-script based discoveries used. The author is simply asked to provide the discovery condition (such as a reg key path)
  • Automatically creates views
  • When classes, monitors and rules are included on the diagram, a view is automatically created for the object. By specifying the same view path as a piece of configuration data objects will be included in the same view automatically (for example, perf counters visible in the same view)
  • Creates monitors and rules with cook down automatically
  • All monitoring objects created are forced to use MS best practices, including full use of cook down.
  • XML element IDs are generated automatically in a consistent, human readable notation
  • All automatically generated object IDs are set to a sensible human readable value, rather than the GUID that the SCOM console uses when creating content.

Visual Studio 2010 MP Extension

  • Requirements: Visual Studio 2010 professional (higher editions and versions supported)
  • Intended Audience: Developers/ ITPros with strong MP authoring skills.
  • Expected Release: RTM within the next few weeks.
  • Generated Schema: SCOM 2007 schema version by default, with SCOM 2012 and Service Manager projects also available.

Features

  • Management Pack Browser
    • The extension includes a graphical way of representing and browsing the contents of your MP in the management pack browser. This view is similar to the object lists you’d see in the Authoring console, and allows you to jump straight to the element definition and perform further operations.
  • Snippets and template groups
    • Template groups allow the creation of Discoveries, Rules, Monitors and many other elements using property windows, object pickers, and modal dialogs.  No more XML knowledge required than the 2007 Authoring Console.
    • In order to assist in the creation and completion of repetitive objects, we can now use code snippets to effectively single instance an object definition. Fields are inserted into the XML definition which are then filled out in a tabular format by the author using the snippet, and at build time Visual Studio will create all the listed elements using that snippet, inserting the field values from each item row into the MP XML.  You can even import from a CSV File.
  • Fragments
    • A series of new file types have been included in the VS extension, including MP fragment files. These essentially allow for partial definitions of Management pack XML with out-of-order elements. This allows for multiple authors to easily work on the same MP, and means that elements such as display names and knowledge can be included next to their object, rather than somewhere else in the file!
  • Intellisense
    • Visual Studio continues to provide autocompletion during typing by reading the MP schema and resolving references within your management pack.
  • Skeleton samples
    • The extension includes skeletons for common MP elements, to save you having to type (and remember!) the same static code over and over.
  • Scripts as resource files
    • Rather than placing scripts directly into XML, you can now attach PS1 files and VBS scripts to a project and have them injected into script data sources! This makes testing and script update/modification much, much easier.
  • Solution and Build Options
    • Include multiple MPs in a single solution
    • You can now provide your solution with a key file, in which case all MPs in the solution will build as signed MPs.
    • If your end solution is multiple management packs (typically a library, discovery and monitoring MPs), you can include these all within a single solution and set MP dependencies. The solution will then be built in the correct order so that you don’t need to keep manually resealing your library MPs.
    • During project or solution builds, not only is the XML verified to ensure it is syntactically correct, but (some) MP best practice rules are also applied to the project, with the results surfaced alongside the XML verification.
    • At build time you can import the MPs into a management group and even launch the SCOM console/Web Console!

Ok, that should be enough to whet your appetite. Look out for a write-up on each tool coming shortly!
