.\Matthew Long

{An unsorted collection of thoughts}

The SCOM Unsung Hero – Using the System.ExpressionFilter Module

Posted by Matthew on July 3, 2012

I’ve decided to write a blog post as a tribute to the unsung hero of Operations Manager: the one module that gets used in virtually every workflow but is rarely the focus of attention.  Without this module cookdown would be mostly impossible, and whether you are creating your own modules or using the wizards in the SCOM console / Authoring Console / Visio Extension, it’s always there to assist you.  I’m talking, of course, about the System.ExpressionFilter.

What is it?

The System.ExpressionFilter is a condition detection module and sibling to the System.LogicalSet.ExpressionFilter.  Its function is to examine items in the Operations Manager workflow and either pass them on or remove (drop) them from the workflow.  If no items match the filter at all, the workflow terminates.

It only has a single configuration parameter, but it’s a very powerful one, as it accepts the ExpressionType configuration.  In reality, most of this article will be talking about the syntax of ExpressionType.

It’s also a very lightweight module, and should be used whenever you need to do any kind of evaluation or filtering.  Whenever you are using a service monitor, event log reader or SNMP probe, the parameters you are filling in are nearly all being sent to this module, not the data source!

When should you use it?

  • 90% of the time, if you want to implement cookdown for your workflow, you’ll be using this module.
  • You want to add further filtering onto an existing rule in an unsealed management pack.
  • You want to perform filtering of any kind of data.
  • You are implementing a MonitorType (not the same thing as a monitor).

Configuration

The System.ExpressionFilter only takes a single parameter, of type ExpressionType.  This is an inbuilt data type in SCOM that allows you to specify evaluation criteria that Operations Manager will run against every item sent to the module.  Note that each item is evaluated individually (if you need to evaluate them as a linked set, see the System.LogicalSet.ExpressionFilter).

Expression filters are complex types.  They support nested expressions using the And and Or group constructions, and you also have access to Not.  Below I’ll give you a sample of the type you are going to use 75% of the time.

 SimpleExpression – Compare output of PropertyBagScript to value

<Expression>
       <SimpleExpression>
              <ValueExpression>
                     <XPathQuery Type="String">Property[@Name='Status']</XPathQuery>
              </ValueExpression>
              <Operator>Equal</Operator>
              <ValueExpression>
                     <Value Type="String">Healthy</Value>
              </ValueExpression>
       </SimpleExpression>
</Expression>

This is the most common filter you’ll use in SCOM. Its purpose is to compare the output of a module (in this case, a PropertyBagScript value called “Status”) with a value. This can either be a static value, or one passed in to the workflow as part of a $Config/$ parameter.

We start off with an <Expression> tag, which is always the opening and closing tag for each evaluation.  Then on line 2 we’ve stated that we want to use a SimpleExpression, which just does a straight comparison between two items (the first ValueExpression and the second ValueExpression).  The valid operators for use with a SimpleExpression are:

  • Equal
  • NotEqual
  • Greater
  • Less
  • GreaterEqual
  • LessEqual

Note that the operators are case-sensitive, so they need to be entered exactly as above.  Finally, our ValueExpressions (the left and right side of the comparison) are either of type Value or XPathQuery.  You use Value for static values or for $Config/$ or $Target/$ parameters, and XPathQuery when you want to examine the output of the previous module.

Finally, you’ll note that both Value and XPathQuery have a Type attribute – SCOM will attempt to cast the data into that type before performing the comparison.  So if you are comparing two numbers, make sure you have the type set to Integer, otherwise it will attempt to calculate whether the string ‘3’ is greater than the string ‘86’, which probably isn’t your intent.  The available types are:

  • Boolean
  • Integer
  • UnsignedInteger
  • Double
  • Duration
  • DateTime
  • String

The SCOM 2007 Authoring Console will by default always set the type to “String”, so keep an eye on that.  Also, if the type conversion fails, SCOM will throw an error into the event log and the item will not be processed.
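
To illustrate the Type attribute in action, here is a minimal sketch of a numeric comparison against an overridable threshold.  The $Config/Threshold$ parameter and the “FreeSpacePercent” property bag value are hypothetical names invented for this example.

<Expression>
       <SimpleExpression>
              <ValueExpression>
                     <XPathQuery Type="Integer">Property[@Name='FreeSpacePercent']</XPathQuery>
              </ValueExpression>
              <Operator>Less</Operator>
              <ValueExpression>
                     <Value Type="Integer">$Config/Threshold$</Value>
              </ValueExpression>
       </SimpleExpression>
</Expression>

Because both sides are cast to Integer, 9 will correctly evaluate as less than 10, which would not be the case with a String comparison.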

 Logical Operators – And, Or and Not

You can group and reverse the result of expressions using the <And>, <Or> and <Not> expression elements.  They are implemented as wrappers around your <Expression></Expression> tags that are themselves expressions!  Sounds complicated, but with an example it becomes much clearer:

<Expression>
     <And>
          <Expression>
               <!-- First expression here -->
          </Expression>
          <Expression>
               <!-- Second expression here -->
          </Expression>
     </And>
</Expression>

So above we have two expressions that must both evaluate to true in order for the whole outer expression to be true.  The construct is the same for <Or> and <Not>, and you can even nest groups within groups for truly powerful expressions!  <Not> may only contain a single <Expression> (which could, of course, be a group!), but <And> and <Or> can contain two or more expressions if you need to group on multiple items.

One important thing to note is that groups support short-circuiting.  This means that if SCOM evaluates one expression in an And/Or group and can already deduce the result of the whole group (perhaps we are using And and the first item is false), it won’t bother to evaluate the remaining expressions, saving time and improving performance.  So nest away!
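
For example, here is a minimal sketch of <Not> wrapping a single <Expression> that is itself an <Or> group (the inner expressions are just placeholders):

<Expression>
     <Not>
          <Expression>
               <Or>
                    <Expression>
                         <!-- First expression here -->
                    </Expression>
                    <Expression>
                         <!-- Second expression here -->
                    </Expression>
               </Or>
          </Expression>
     </Not>
</Expression>

The whole thing evaluates to true only if neither of the inner expressions matches.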

Exists – Does my data item contain a property?

Much like a type conversion failure, if an XPathQuery value (as part of a SimpleExpression) doesn’t resolve to anything – say because the data item doesn’t contain an expected property – then the expression will fail and that item will be dropped.  So if you are dealing with a property that doesn’t always show up (regardless of whether it has a value; SCOM can deal with empty/null properties), you’d be wise to use the <Exists> expression.  It’s also useful if you don’t care about the value of a property, merely whether it exists or not.

<Expression>
       <Exists>
              <ValueExpression>
                     <XPathQuery Type="Integer">Params/Param[1]</XPathQuery>
              </ValueExpression>
       </Exists>
</Expression>

Here we are checking to see if an event log entry has at least 1 parameter.  You can also use <Value> instead of XPathQuery if you want to check whether a $Config/$ parameter exists, so you know if an optional parameter on your module has been specified or not, as sketched below.
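
Here is a minimal sketch of that, assuming a hypothetical optional $Config/ExcludeFilter$ parameter on the module:

<Expression>
       <Exists>
              <ValueExpression>
                     <Value Type="String">$Config/ExcludeFilter$</Value>
              </ValueExpression>
       </Exists>
</Expression>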

If you need to check the value of a property that may or may not exist, you’ll want to take advantage of the short-circuiting of the group by combining an exists check with your value check.  Make sure the exists expression is first in the group; that way, if the property doesn’t exist, SCOM won’t bother trying to read it (which, as stated above, will cause the module to fail).  I’ve included an example of this below!

<Expression>
     <And>
        <Expression>
               <Exists>
                      <ValueExpression>
                             <XPathQuery Type="Integer">Params/Param[1]</XPathQuery>
                      </ValueExpression>
               </Exists>
        </Expression>
        <Expression>
               <SimpleExpression>
                      <ValueExpression>
                             <XPathQuery Type="Integer">Params/Param[1]</XPathQuery>
                      </ValueExpression>
                      <Operator>Less</Operator>
                      <ValueExpression>
                             <Value Type="Integer">Params/Params[1]</Value>
                      </ValueExpression>
               </SimpleExpression>
        </Expression>
     </And>
</Expression>

Regular Expressions

If you want to do powerful (or simple!) regular expression comparisons, then the ExpressionFilter has got you covered.  I’m not going to go into a huge amount of depth on this one, because by now you should be getting an idea of how this works.  I’ll just show you the syntax and then list the regex pattern styles you can use.

<Expression>
       <RegExExpression>
              <ValueExpression>
                     <XPathQuery Type="String">EventPublisher</XPathQuery>
              </ValueExpression>
              <Operator>ContainsSubstring</Operator>
              <Pattern>Microsoft</Pattern>
       </RegExExpression>
</Expression>

ValueExpression is the same as with a SimpleExpression, so you can compare against incoming data items on the workflow or input parameters.  Operator allows you to specify what type of matching you’d like to perform:

  • MatchesWildcard – Simple wildcard matching using the below wildcards:
    • # – matches any digit 0-9
    • ? – any single character
    • * – any sequence of characters
    • \ – escapes any of the above
  • ContainsSubstring – Standard substring containment; matches if the pattern exists anywhere in the string (implemented as ‘^.*pattern.*$’)
  • MatchesRegularExpression – Full regular expression support via .NET (note this is not the same as Group Calculation modules, which use Perl syntax).
  • DoesNotMatchWildcard – Inverse of MatchesWildcard.
  • DoesNotContainSubstring – Inverse of ContainsSubstring.
  • DoesNotMatchRegularExpression – Inverse of MatchesRegularExpression.

Finally, Pattern allows you to specify your pattern or regular expression.  Note that you don’t need to wrap it in quotes.  Obviously you can nest these in groups if you need to perform multiple regular expression comparisons.
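
As a quick sketch, a MatchesWildcard check against a hypothetical “ComputerName” property bag value might look like this (the naming convention being matched is made up for the example):

<Expression>
       <RegExExpression>
              <ValueExpression>
                     <XPathQuery Type="String">Property[@Name='ComputerName']</XPathQuery>
              </ValueExpression>
              <Operator>MatchesWildcard</Operator>
              <Pattern>SQL-??-0#</Pattern>
       </RegExExpression>
</Expression>

This would match names such as SQL-DB-01, with ? standing in for any single character and # for a single digit.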

Oh, and if you have any questions on Regular expressions, ask Pete Zerger.  He loves regular expressions (you can tell him I sent you ;))!

DayTimeExpression

Finally we have the DayTimeExpression, which is used to determine whether a DateTime value is inside or outside a given range.  This one is less used, as we have another built-in module (System.ScheduleFilter) which can be used for this kind of comparison; it is a bit more powerful, and can use the current time of the workflow rather than having to read that value from your data item.  DayTimeExpression only allows for day (Sunday–Saturday) and time comparisons, and there’s no ability to specify exceptions or different windows for each day either, something the ScheduleFilter does implement.

I’m not going to go into detail on it here, but you can find documentation for it on MSDN at the link below.

Example Scenarios

Essentially, any time you want to filter or compare a value you can use this module!  Normally you’ll be using it to either manage output from a datasource or further scope a rule so that it only alerts when it meets your extra criteria.

The other time you’ll commonly use it is when implementing your own MonitorType.  You’ll add one System.ExpressionFilter for each health state the monitortype provides, and then set the filters up so that they use mutually exclusive values to determine what health state your system is in.  I won’t drag this post out any further with examples however, as there are plenty on the web of this already and they are always quite specific to the scenario.

Links

MSDN documentation – http://msdn.microsoft.com/en-us/library/ee692979.aspx

Hope this proved helpful, and as always if you have any specific questions feel free to post a comment with what you need and I’ll see what I can do!

(Sorry Pete!)



Query a database without scripting as part of SCOM monitoring – The System.OLEDBProbe module

Posted by Matthew on June 23, 2012

A fairly common monitoring scenario is the need to query a database somewhere (normally SQL, but as long as you have a relevant OLEDB driver on your agents, whatever you need!) and, based on the results of the query, trigger some kind of workflow. I’ve seen it used with monitors, alert and collection rules, and even discoveries!

Obviously you can do this via script, but perhaps you have a simple query and no need to do any post-query processing (often this can be done as part of your query anyway). In these cases, you can use a built-in module called the System.OleDbProbe to query the DB and do the lifting for you!

What is it?

The System.OleDbProbe module is a built-in probe module that uses an OLEDB provider/driver on the system to make a database query from the hosting agent. The database, query and other settings are defined via probe configuration and do not need to be hard-coded into the MP (though obviously the query usually is). The query can be modified using context parameter replacement prior to execution, so you can dynamically insert information into it if need be. It supports integrated as well as manually specified credentials, usually via Run As Profiles.

It also has the nifty ability to retrieve the database settings from specified registry keys, which can avoid the need to go out and discover those pieces of information. This makes it quite suitable for attaching onto existing classes from other management packs.

When you should use it

  • You know in advance which columns you need to access.
  • You know how to implement your own module.
  • You have a suitable OLEDB provider on your agent (common windows ones included by default)
  • You don’t need to perform complex processing on each returned row.

Configuration

Required

  • ConnectionString – The connection string you wish to use to connect to the database.  On Windows 2003 or later, this is encrypted by the module.  If you are using integrated security, you do not need to specify credentials as long as you are using a Run As Profile with this module (but make sure you flag the connection as using integrated security!).
  • Query – The query you wish to run against the database. Supports context parameter replacement, so you can use $Config/$ variables etc in your query.

Optional

  • GetValue – (true/false) Whether the results of the query should be returned or not (set to false if you just want to connect to the DB, and you don’t care about the results of the query).
  • IncludeOriginalItem – (true/false) Determines if the resulting data item(s) will contain the item that originally triggered this module.  Note that the data is returned as CData, so you won’t be able to perform XPath queries directly against it.
  • OneRowPerItem – (true/false) Should all resulting data be returned in a single data item, or 1 data item returned for each row in the query results?  Normally setting this to true is more useful, as you’ll often want a condition detection to process each row individually, and you won’t know the order (or number) of resulting rows.
  • DatabaseNameRegLocation – Registry key where we can find the database name.  Must be under the HKLM hive.
  • DatabaseServerNameRegLocation – Registry key where we can find the database server name (and instance, if required).  Must also be under the HKLM hive.

SCOM 2007 R2 and above only

  • QueryTimeout – (Integer) Optional parameter that allows you to specify a query timeout.
  • GetFetchTime – (true/false) Optional parameter that allows you to specify that the resulting data item(s) should contain the fetch time for the query.

Personally, I tend to omit the R2-only parameters, as they usually do not add much to the workflow and will restrict which environments your MP supports.  Obviously, if you are making this MP for in-house use you are free to implement against whatever version of SCOM you have!

An important parameter is OneRowPerItem.  If set to false, when you get data back the data item will look like the snippet below (I’ve omitted the other elements to save space):


<RowLength></RowLength>
    <Columns>
    <!-- Data for first row returned -->
       <Column>Data in first column</Column>
       <Column>Data in Second column.</Column>
    </Columns>
    <Columns>
    <!-- Data for Second row returned -->
       <Column>Data in first column</Column>
       <Column>Data in Second column.</Column>
    </Columns>

This can make processing the results in further modules a pain, since your XPath query will have to specify exactly which row and column you want to access. If you instead set OneRowPerItem to true, then you’ll get multiple returned items and can filter them using an expression filter with simple syntax such as $Data/Columns/Column[1]$. You may also wish to filter on the RowLength property to establish whether any rows were returned. Remember that the module will still return a data item if it succeeded in connecting but doesn’t have rights to query, so check that data was returned before you try to do something with it!
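
As a rough sketch of that filtering, a System.ExpressionFilter following the probe might check that a row came back and that its first column holds an expected value.  The exact XPath depends on the shape of the returned data item, and the “OK” status value is just a placeholder for whatever your query actually returns.

<Expression>
     <And>
        <Expression>
               <SimpleExpression>
                      <ValueExpression>
                             <XPathQuery Type="Integer">RowLength</XPathQuery>
                      </ValueExpression>
                      <Operator>Greater</Operator>
                      <ValueExpression>
                             <Value Type="Integer">0</Value>
                      </ValueExpression>
               </SimpleExpression>
        </Expression>
        <Expression>
               <SimpleExpression>
                      <ValueExpression>
                             <XPathQuery Type="String">Columns/Column[1]</XPathQuery>
                      </ValueExpression>
                      <Operator>Equal</Operator>
                      <ValueExpression>
                             <Value Type="String">OK</Value>
                      </ValueExpression>
               </SimpleExpression>
        </Expression>
     </And>
</Expression>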

Example scenarios

Normally, if I’m going to use an OleDbProbe to access a database repeatedly, I’ll create my own probe module that sets up the settings I’m going to need and is already set to use my MP’s Run As Profile for DB access.  That way I don’t have to keep specifying them over and over again.  Below is a sample where I’ve done this and configured everything other than the query to pass in, for a SQL database probe.  Now all my monitors and rules that make use of this automatically know where to locate the DB and which other query options (along with credentials) to use.

<ProbeActionModuleType ID="DBProbe.Library.Probe.DatabaseOledbQuery" Accessibility="Public" RunAs="DbProbe.Library.SecureReference.Database" Batching="false" PassThrough="false">
    <Configuration>
        <xsd:element minOccurs="1" name="Query" type="xsd:string" />
        <xsd:element minOccurs="1" name="OneRowPerItem" type="xsd:boolean" />
    </Configuration>
    <ModuleImplementation Isolation="Any">
        <Composite>
            <MemberModules>
                <ProbeAction ID="PassThru" TypeID="System!System.PassThroughProbe" />
                <ProbeAction ID="OledbProbe" TypeID="System!System.OleDbProbe">
                    <ConnectionString>Provider=SQLOLEDB;Integrated Security=SSPI</ConnectionString>
                    <Query>$Config/Query$</Query>
                    <GetValue>true</GetValue>
                    <IncludeOriginalItem>false</IncludeOriginalItem>
                    <OneRowPerItem>$Config/OneRowPerItem$</OneRowPerItem>
                    <DatabaseNameRegLocation>SOFTWARE\MyRegKey\Database\DatabaseName</DatabaseNameRegLocation>
                    <DatabaseServerNameRegLocation>SOFTWARE\MyRegKey\Database\DatabaseServerName</DatabaseServerNameRegLocation>
                </ProbeAction>
            </MemberModules>
            <Composition>
                <Node ID="OledbProbe">
                    <Node ID="PassThru" />
                </Node>
            </Composition>
        </Composite>
    </ModuleImplementation>
    <OutputType>System!System.OleDbData</OutputType>
    <TriggerOnly>true</TriggerOnly>
</ProbeActionModuleType>

Here I’ve done the same thing, only without using registry keys to specify the location of my DB.  Normally I’d pass the DB details from my targeted class as I’ll have some property that has been discovered defining where the database is.

<ProbeActionModuleType ID="DBProbe.Library.Probe.DatabaseOledbQuery" Accessibility="Public"  RunAs="DbProbe.Library.SecureReference.Database" Batching="false" PassThrough="false">
    <Configuration>
<xsd:element minOccurs="1" name="DatabaseServer" type="xsd:string" />
DatabaseName" type="xsd:string" />
        <xsd:element minOccurs="1" name="Query" type="xsd:string" />
        <xsd:element minOccurs="1" name="OneRowPerItem" type="xsd:boolean" />
    </Configuration>
    <ModuleImplementation Isolation="Any">
        <Composite>
            <MemberModules>
<ProbeAction ID="PassThru" TypeID="System!System.PassThroughProbe" />
            <ProbeAction ID="OledbProbe" TypeID="System!System.OleDbProbe">
Provider=SQLOLEDB;Server=$Config/DatabaseServer$;Database=$Config/DatabaseName$;Integrated Security=SSPI
                <Query>$Config/Query$</Query>
                <GetValue>true</GetValue>
                <IncludeOriginalItem>false</IncludeOriginalItem>
                <OneRowPerItem>$Config/OneRowPerItem$</OneRowPerItem>
            </ProbeAction>
            </MemberModules>
            <Composition>
                <Node ID="OledbProbe">
                    <Node ID="PassThru" />
                </Node>
            </Composition>
        </Composite>
    </ModuleImplementation>
    <OutputType>System!System.OleDbData</OutputType>
    <TriggerOnly>true</TriggerOnly>
</ProbeActionModuleType>

Simple/Specified Authentication

If you don’t want (or aren’t able) to use integrated security, you can pass credentials using simple authentication and a Run As Profile. DO NOT hard code the credentials in the connection string – they would then be stored in plain text and readable by anyone who can view the MP. The Run As Profile credentials are encrypted, and the connection string is encrypted across the wire, but the MP itself isn’t!

The syntax for this (depending on your OLEDB provider; here it’s SQL) is shown below.  Obviously, replace the placeholder text with your own values.

Provider=SQLOLEDB;Server=ServerName;Database=DatabaseName;User Id=$RunAs[Name="RunAsIdentifierGoesHere"]/UserName$;Password=$RunAs[Name="RunAsIdentifierGoesHere"]/Password$

Scenario 1 – Monitoring

This is a fairly simple one: you want to monitor a database for a certain condition.  Perhaps you are getting the result of a stored procedure, checking the number of rows in a table (by using the database’s query language), or checking rows for a certain value (error logs, perhaps?).  Once queried, you pass the data items on to a System.ExpressionFilter module to filter for your desired criteria and alert as appropriate.

Scenario 2 – Collection

Another fairly common one: do the same thing as above as part of an event collection or performance collection rule.  This could even ignore the query results and just check how long the query took to run, via the InitializationTime, OpenTime, ExecutionTime and FetchTime (if you’re on R2 or 2012) properties of the output data.  Following your System.OleDbProbe module you’ll usually use one of the mapper condition detection modules to generate event or performance data (these are quite nicely documented around the web and on MSDN; they are normally shown with property bags, but the principle is the same).
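
As a rough sketch of that last point, a performance collection rule might follow the probe with the generic performance mapper.  The object and counter names below are made up for the example, the Perf! alias assumes you have referenced the System.Performance.Library MP, and the $Data/ExecutionTime$ XPath assumes the timing property sits at the root of the returned OleDbData item – verify that against the actual output of the module.

<ConditionDetection ID="PerfMapper" TypeID="Perf!System.Performance.DataGenericMapper">
    <ObjectName>MyApplication</ObjectName>
    <CounterName>DB Query Execution Time</CounterName>
    <InstanceName>$Config/DatabaseName$</InstanceName>
    <Value>$Data/ExecutionTime$</Value>
</ConditionDetection>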

Scenario 3 – Discovery

Yep, you can even do discovery from this.  Your table might contain pointers to apps in a grid or distributed system, groups you want to discover and monitor or subprocesses you want to go and do further monitoring on.  This is the most complex scenario and as a tip, only really attempt this if you are looking to discover a single object out of the process per discovery.  Otherwise, use a script to process each result item in turn using ADO or some other API.

Links

MSDN Documentation – http://msdn.microsoft.com/en-us/library/ff472339.aspx

Sample of the output this module returns – http://msdn.microsoft.com/en-us/library/ee533760.aspx

Hopefully that’s given you some food for thought, and as always if you have a specific example you’d like me to walk you through, just post a comment and I’ll see what I can do!


How I added hundreds of Service Discoveries and Monitors into a SCOM MP in 20 minutes

Posted by Matthew on June 23, 2012

Recently I was presented by a customer with a huge list of Windows services that needed to be discovered and monitored in Operations Manager as part of an engagement. Many of these services were in-house/custom services, or ones for which no management pack currently exists.

The normal approach would of course be to put together grouped classes and discoveries that make sense for each application; however, in this case time and project budget were against us, and moreover the customer simply didn’t have the information (or need) to do anything other than simple up/down monitoring on each service.

So armed with a CSV file, the Visual Studio MP Authoring Extensions and a short amount of time, I set out to complete what would normally be a huge amount of work in a day.

The Solution – Snippets and Template groups

The Visual Studio MP authoring extensions have two features that, used in combination, allow you to take a template MP entity that you define (called a snippet) and then, by replacing placeholders with values from a table, automatically generate concrete versions of your template when the MP is built (template groups). The key here is that you can import the values into your template group from a CSV if you so wish!

This technique works for both 2007 and 2012 MPs, so you can use it for building any kind of management pack.

Before we get started however, here are two disclaimers:

This post was written using a pre-release copy of the Visual Studio MP Authoring Extensions, so the features shown below are currently pre-release software.  Everything shown below could be subject to change at release.

This is not necessarily the best way to discover and monitor services. A more ideal approach would be to evaluate the services and group discoveries based on more than whether a service is installed. Consolidated discoveries would most likely be more efficient, and services should only be monitored if that monitoring is useful. Having said that, anything can be created using the techniques shown here, and even using this method to implement 10 items will be much faster than doing it by hand.

Steps After the jump…



Using the System.LogicalSet.ExpressionFilter in SCOM Management Packs

Posted by Matthew on June 14, 2012

What is it?

The System.LogicalSet.ExpressionFilter is a condition detection module that functions in a similar fashion to the System.ExpressionFilter module, except it allows you to evaluate multiple data items (usually property bags) as a group and then act based on whether any (or all) of the items in the group match your criteria.

What is it used for?

You can use this module wherever you have a group of data items that you need to act upon only if all (or any) of them meet a certain criteria.  Note that you can’t specify how many items must match, only that either all of them should match or at least one.  If you need that kind of filtering, you need to use a Consolidator.

Some common examples include processing health state information for multiple performance counters, viewing the execution history of jobs or scheduled tasks, or checking the health of multiple components at once.  You can also use this to replace scripts that often check lots of criteria and produce a single healthy/unhealthy status code based on the evaluation of all of the criteria, which may open further opportunities to make use of cookdown between your workflows.  I’ll give an example of this below in the example Scenarios.

When using this in monitoring workflows, you will most likely be using the System.LogicalSet.ExpressionFilter for the Healthy health state (all items don’t match your unhealthy criteria) and then regular ExpressionFilters for your unhealthy state(s).

Configuration

Essentially, the configuration for this module is exactly the same as an ExpressionFilter, except you have two new Attributes.

EmptySet allows you to control what happens if no data items are provided to the module in the workflow:

  • Passthrough – carries on the workflow to the next module.
  • Block – terminates the workflow at this module.

SetEvaluation allows you to specify when the group of data items should be passed on to the next module:

  • Any – If at least one item matches the expression filter, pass on all items.
  • All  – Only pass on items if ALL items match the expression filter.

It’s also worth noting that, since all data items passed along the workflow will be returned as alert context, this module can be very helpful when correlating items (often it’s not the matching log line you need, but the ones before and after that help you troubleshoot the problem!).  Just set the SetEvaluation attribute to Any.
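
For reference, here is a minimal sketch of how the module might be configured as a condition detection member of a composite workflow, using the same JobStatus example as the scenario below.  Treat the exact element order of Expression, EmptySet and SetEvaluation as an assumption to verify against the MP schema.

<ConditionDetection ID="HealthyFilter" TypeID="System!System.LogicalSet.ExpressionFilter">
    <Expression>
        <SimpleExpression>
            <ValueExpression>
                <XPathQuery Type="String">Property[@Name='JobStatus']</XPathQuery>
            </ValueExpression>
            <Operator>Equal</Operator>
            <ValueExpression>
                <Value Type="String">Success</Value>
            </ValueExpression>
        </SimpleExpression>
    </Expression>
    <EmptySet>Passthrough</EmptySet>
    <SetEvaluation>All</SetEvaluation>
</ConditionDetection>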

Example Scenarios

Job Execution History / Previous Events Monitor

In this scenario, we are looking at multiple property bags that each describe a job instance or perhaps an event log entry, checked every 10 minutes.  We only want to return a healthy state if none of the instances/events match unhealthy criteria.  If one or more do, then we want to flag an unhealthy state.  I’ve configured the below example as if I were receiving a property bag with job data, but you could substitute that for event codes, or really anything else that may appear in your data items.

Healthy State Module : System.LogicalSet.ExpressionFilter

  • Expression:
<Expression>
    <SimpleExpression>
              <ValueExpression>
                <XPathQuery Type="String">Property[@Name='JobStatus']</XPathQuery>
              </ValueExpression>
              <Operator>Equal</Operator>
              <ValueExpression>
                <Value Type="String">Success</Value>
              </ValueExpression>
    </SimpleExpression>
</Expression>
  • EmptySet: PassThrough
  • SetEvaluation: All

Unhealthy State Module: System.ExpressionFilter

  • Expression:
<Expression>
    <SimpleExpression>
              <ValueExpression>
                <XPathQuery Type="String">Property[@Name='JobStatus']</XPathQuery>
              </ValueExpression>
              <Operator>NotEqual</Operator>
              <ValueExpression>
                <Value Type="String">Success</Value>
              </ValueExpression>
    </SimpleExpression>
 </Expression>
 

Replacing multiple Check logic in Scripts

The idea here is that we may have a script-based data source that checks multiple aspects of an object to determine if it is healthy.  The script then has some internal logic (usually via If checks and And/Or expressions) to sum up all of the checks and provide a health status based on the combined result.  This is all very well and good, but what if we need to create a second monitor/rule/diagnostic that only depends on one (or a selection) of these checks?  We would have to implement another script (most likely reusing 90% of the same code) to provide that data, and then manage edits between the two scripts to keep them in sync.

Instead, what we could do is provide multiple property bags from our script, each one providing the pass/fail status of each check individually.  Then using System.LogicalSet.ExpressionFilters for our Healthy Condition and regular System.ExpressionFilters for our unhealthy condition(s) we can detect if any check failed.  If we want to raise unhealthy conditions only if ALL checks failed, then you just swap the two around, using System.LogicalSet.ExpressionFilters for your unhealthy states.

Multi-Instance Perfmon Counters

Essentially the idea here is that you want to monitor a performance counter with multiple instances, and you want to be alerted if any one of the instances (or perhaps all of them) breaches a certain threshold.  There is a great example of this already written up at the Operations Manager team blog, which I’ve linked to below.

Checking Returned Rows from a Database

This one is pretty simple: using a System.OleDbProbe you can retrieve rows from a database based on a query, and then check the rowset to see if any/all rows match your criteria.  Just make sure you configure the OleDbProbe to return a data item for each row, rather than all rows in one item!

Links

Hope that helps someone out.  If you have any specific examples you’d like me to walk through, just post a comment and I’ll see what I can do!


Preview – New Official System Center Operations Manager MP authoring tools

Posted by Matthew on April 30, 2012

Disclaimer: This article is based on a preview of pre-release software.  Features and information may change between the time this article was written and time of release.

At MMS this year Microsoft revealed their two new management pack authoring tools, which will, upon release, replace the venerable SCOM Authoring Console as the official Microsoft management pack authoring tools. It’s immediately worth noting that the Authoring Console will still be available and supported, but it will not be receiving any updates and therefore will not be able to understand the new SCOM MP schema.

Previously the Authoring Console serviced a middle ground in terms of user MP authoring skill. Those who were brand new to MP development often found the tool confusing and the required knowledge level too steep. This was particularly common with IT pros who were looking to create a simple MP to monitor a “standard” Windows application.

However, the Authoring Console was also missing several capabilities that are required for complex management pack authoring scenarios. In his MMS session, Baelson Duque talked about how SCOM’s own management pack for monitoring itself is around 37,000 lines long and authored by multiple people. As a management pack is a single file, this made development of the MP very difficult, and he admitted that many of the bugs in the management pack were introduced due to merge issues and copy-paste errors when duplicating module composition.

So, to quote Brian Wren, “rather than having one tool to rule them all”, Microsoft have decided to develop two different tools to address both ends of the MP authoring skills spectrum. For IT pros who are looking to create simple/common management packs, we now have the Visio Management Pack Extension. For developers who need the power of a full development environment, we have the Visual Studio 2010 Management Pack Extension.

I’m going to put up full write-ups on both tools, and documentation is already available on the TechNet wikis, but for now I’m going to briefly discuss both tools and their capabilities. It’s worth noting that both of these tools are V1, and as such there are a couple of limitations with both products that Microsoft are looking to address in future updates.

Visio Management Pack Authoring Extension

  • Requirements: Visio 2010 Premium
  • Intended audience: ITPros with basic to no knowledge of management pack XML
  • Expected Release: CTP to be released within the next few weeks, with RTM to follow within a couple of months.
  • Generated Schema: SCOM 2007 schema version

Features

  • Drag and drop interface
    • All classes, monitors, rules, and the relationships between objects are created by dragging stencils onto the Visio drawing and connecting them together. Templates are included for quickly standing up common app scenarios (such as a service with a registry discovery, event collection etc.)
  • Smart configuration of shape data only asks for relevant params
    • Shapes have intelligent configuration fields that the author fills out using simple terms, with many more complicated settings inferred from those simple choices. Fields are hidden until their value is relevant.
  • No knowledge of discoveries required, other than the discovery condition
    • The inclusion of a class shape automatically sets up a discovery under the hood, with common non-script based discoveries used. The author is simply asked to provide the discovery condition (such as a registry key path).
  • Automatically creates views
    • When classes, monitors and rules are included on the diagram, a view is automatically created for the object. By specifying the same view path as a piece of configuration data, objects will be included in the same view automatically (for example, perf counters visible in the same view).
  • Creates monitors and rules with cookdown automatically
    • All monitoring objects created are forced to use MS best practices, including full use of cookdown.
  • XML element IDs are generated automatically in a consistent, human-readable notation
    • All automatically generated object IDs are set to a sensible human-readable value, rather than the GUID that the SCOM console uses when creating content.

Visual Studio 2010 MP Extension

  • Requirements: Visual Studio 2010 professional (higher editions and versions supported)
  • Intended Audience: Developers/ ITPros with strong MP authoring skills.
  • Expected Release: RTM within the next few weeks.
  • Generated Schema: SCOM 2007 schema version by default, with SCOM 2012 and Service Manager projects also available.

Features

  • Management Pack Browser
    • The extension includes a graphical way of representing and browsing the contents of your MP in the management pack browser. This view is similar to the object lists you’d see in the Authoring console, and allows you to jump straight to the element definition and perform further operations.
  • Snippets and template groups
    • Template groups allow the creation of discoveries, rules, monitors and many other elements using property windows, object pickers, and modal dialogs.  No more XML knowledge is required than with the 2007 Authoring Console.
    • In order to assist in the creation and completion of repetitive objects, we can now use code snippets to effectively single instance an object definition. Fields are inserted into the XML definition which are then filled out in a tabular format by the author using the snippet, and at build time Visual Studio will create all the listed elements using that snippet, inserting the field values from each item row into the MP XML.  You can even import from a CSV File.
  • Fragments
    • A series of new file types have been included in the VS extension, including MP fragment files. These essentially allow for partial definitions of Management pack XML with out-of-order elements. This allows for multiple authors to easily work on the same MP, and means that elements such as display names and knowledge can be included next to their object, rather than somewhere else in the file!
  • Intellisense
    • Visual Studio continues to provide autocompletion as you type by reading the MP schema and resolving references within your management pack.
  • Skeleton samples
    • The extension includes skeletons for common MP elements, to save you having to type (and remember!) the same static code over and over.
  • Scripts as resource files
    • Rather than placing scripts directly into XML, you can now attach PS1 files and VBS scripts to a project and have them injected into script data sources! This makes testing and script update/modification much, much easier.
  • Solution and Build Options
    • Include multiple MPs in a single solution
    • You can now provide your solution with a key file, in which case all MPs in the solution will build as signed MPs.
    • If your end solution is multiple management packs (typically a library, discovery and monitoring MPs), you can include these all within a single solution and set MP dependencies. The solution will then be built in the correct order so that you don’t need to keep manually resealing your library MPs.
    • During project or solution builds, not only is the XML verified to ensure it is syntactically correct, but (some) MP best practice rules are also applied to the project and the results are surfaced along with the XML verification.
    • At build time you can have the MPs imported into a management group and even launch the SCOM console/web console!

Ok, that should be enough to whet your appetite. Look out for a write-up of each tool coming shortly!


Designing Operations Manager management pack Discovery models to best support derived classes.

Posted by Matthew on March 6, 2012

Inspired by a previous post and some of the recent activities of friends and colleagues, I thought I’d share some recommendations on an important consideration you should keep in mind when designing the discovery model of your System Center Operations Manager 2007/2012 management pack.  This blog post should help you author your MPs in such a way that they are much friendlier to future expansion/customisation, whether by you or your customers.

Class Inheritance

As you may or may not be aware, in Operations Manager objects (known as classes) can be “extended” in the GUI to support additional custom attributes that may be useful to your organisation.  You may have also noticed that many management packs implement a common object for a technology (such as an IIS website or a SQL database) that is then further specialised into every version you might encounter (IIS 6, IIS 7 etc.).  You can construct a view or group that displays a version-specific object (IIS2008Website) or any object based on the common ancestor (IISWebsite), which will then be version-agnostic and display all IIS websites regardless of version.

This all happens because of class inheritance.  Every class in Operations Manager has a single base class from which it stems, going all the way up to the common ancestor of all classes, System.Entity.  Every class inherits all class properties from its parent class (and from that class’s own base classes, known as ancestor classes) all the way up the chain to System.Entity.  This is why every object in SCOM has a Display Name property: System.Entity defines it, and every object is sooner or later based off of that class.

Operations Manager automatically considers any object to be an instance of itself but also an instance of any of its base classes.  So a SQL 2008 instance is itself a SQL Instance which is also a Windows Server role.  When you extend a class in the GUI (say, Windows Computer) what you are actually doing is deriving a new custom class that is based upon that parent class with your custom properties.

Discoveries and Targeting

Ok, so with that out of the way, why would you want to extend a class (or, if you are authoring an MP, derive from an existing class)?  Well, reasons may include:

  1. Adding an attribute (Class property) that would benefit your organisation, such as Owner Contact details for a Server.
  2. Providing support for a version of an object that wasn’t previously included by the MP vendor/Author
  3. Speeding up creation time of your own MP by removing the necessity to define common properties over and over again.
  4. Allowing targeting of monitoring workflows, relationships and views at a group of objects, regardless of their specific version or implementation.

That last one is a critical point for MP authors: if I use the Server Role class as my base class when creating my server object, it automatically inherits all the relationships that insert it into the health rollup model for a Computer object.

Right, enough rambling; now to the crux of the matter.  When you derive or extend an existing class, SCOM may automatically give it all the property definitions of its parent classes, but it doesn’t get the values of those properties automatically.  If you decide to go and make a SQL 2012 MP, then unless the discoveries for the existing objects have been set up in a certain way, all the inherited properties such as Database Name will be blank, and it will be up to you to implement a discovery for them.

This is because discoveries are usually targeted at the component that hosts them (server role discoveries are usually targeted at the computer that runs them, database discoveries at the DB engine that hosts them), and they create the object with all of its properties discovered.  When you extend a class or derive a new one, those discoveries have no idea that your new class exists, so they just leave it be.

The better option here (see caveats and considerations below) is to target one discovery at the component that hosts your class and another discovery at the class itself to discover its properties.  That way, when you or someone else derives your class into a new version, your expertise at finding and populating the original properties is put to work, because the discovery targeting sees the new class definition as an instance of the base class it was designed to populate.

Sample discovery model

No doubt some of you have taken issue with my claim, because in your experience deriving or extending a class does automatically populate all the existing properties with values.  More than likely, this is because you’ve worked with a discovery configuration like the one I’ve described above without knowing it, such as with the Windows Computer objects.  There is a series of discoveries targeted at Microsoft.Windows.Computer in the core Operations Manager MPs that are responsible for discovering properties such as logical CPU count and whether or not the computer is actually a virtual (cluster) instance.  Since pictures are better than words (and I’m not far off one thousand already), here is a diagram that explains what I’m talking about.

Default Implementation of Microsoft.Windows.Computer discovery model

The diagram above actually rolls several different property discoveries into one object, but hopefully you get the idea.  Now, if I were to extend (via the SCOM GUI) or derive (MP authoring) the Computer class with my own custom version containing a new property, I would only be responsible for discovering the existence of my class and any new custom properties I’d added.  Indeed, if I was following this approach I’d implement my property discovery as a second discovery, so that any class that extends or derives from MY class in the future also benefits from this.

The diagram below now shows this: I have added a discovery (which perhaps targets the computer and looks for the presence of a “System Owner” registry key) that is responsible for creating my custom class, and another which discovers the custom attributes I’ve added (it reads the above registry key and populates that value onto the object).  It might look like a lot of work in the diagram, but honestly in the Authoring Console this is very simple to do.

Custom Attribute extension and discovery

For those of you wondering how to make a discovery submit property information for an existing object, it’s extremely simple.  Just discover your class with its key properties again (you don’t need to re-test for your object; you know it exists already), along with all your newly found properties, and reference counting (something I talked about in my previous blog post) will take care of the rest.  Don’t worry about blanking out any existing properties that you don’t include; SCOM will leave those intact.  During un-discovery SCOM is also smart enough to handle your self-referential discovery and make sure the object isn’t perpetually discovered once your component ceases to exist.

As an added bonus, implementing the discovery model in this fashion also allows you to separate out the discovery interval of your class from the discovery interval of your properties.  This can help reduce/prevent discovery churn and will allow your MP users to further customize their monitoring experience.

Caveats and Considerations

As always, it seems there are some considerations you should take into account before doing this.  The first and most important is performance.  Whilst generally speaking performing two sets of queries (one to discover the object, one to populate its properties) isn’t that taxing on most data sources, you might want to think twice about this if you are using a remote data source that isn’t very well optimised.  Most of the time, if you are doing WMI or SQL queries remotely, remember that your second query will usually be much cheaper, since rather than looking for a set of matching criteria you are only looking for records that match your object’s ID.  Likewise, your first query to establish the existence of the object can be optimised not to request columns/properties that only the second discovery needs.

As I mentioned above, if performance is a concern you can control the intervals of your two discoveries and set them to something suitable.  Remember this is discovery not monitoring, you don’t need to update properties every 10 minutes.

The second consideration you may want to take into account is complexity.  If you are implementing a management pack with dozens of objects using custom data sources, you may not want to implement an extra set of discoveries, especially if your objects only have a handful of properties.  That’s fine; you just have to balance the demands against the rewards of taking the above model on board.  If you don’t see yourself deriving lots of classes, or your customers wanting to extend your classes with your support, then you’re just saving yourself unnecessary effort.

In my opinion though, it’s nearly always worth it.  Feel free to leave a comment if you’d like to see a specific example of this kind of thing implemented (either using one of the SCOM built-in modules, or a custom script).


Scripting Relationship Discovery in Operations Manager 2007 / 2012

Posted by Matthew on March 3, 2012

One of the things I’ve found myself doing for several recent customer management packs is the discovery of relationships between objects belonging to 3rd party management packs and in-house components.  In simpler terms, often linking an application to the SQL databases it relies on automatically.  This is most useful because now you don’t have to create Distributed Applications by hand for every application to build up a dependency health model, and you don’t have to replicate all the knowledge contained inside the SQL management packs.

The technique itself is actually fairly straightforward, but when trying to gather the necessary knowledge I found it all over the place and often amongst busy, confusing articles.  So here is the concise knowledge you need to implement a relationship discovery of a 3rd party object inside a VBScript discovery.

*Update* Ok so in hindsight it wasn’t that concise.  It is, at least, comprehensive.

Limitations

The first thing you need to be aware of before trying to create relationships are some of the constraints you have to work within with Operations Manager.

Rule 1 : Hosting location

If either your source or target object is hosted on an Operations Manager agent, then you are constrained as to the type of relationships you can create.  You can only create a relationship to perform health rollup between objects if:

  1. Both objects are hosted on the same agent (i.e two server roles)
  2. One object is hosted on an agent, and the other is not hosted (or the object is hosted by a non-hosted object).  Essentially the second object must exist on the RMS/management servers.
  3. Neither object is hosted (both on the RMS/MS)

You can check if an object is hosted by looking at its class definition and seeing if it has the “Hosted” attribute set to “true”.

In one case, where we had a hosted application on one server and SQL and IIS components hosted on another, what this meant was having to create a non-hosted version of the application (discovered with the local hosted copy), which then had relationships to its local application object and components.  As far as the users were concerned, the non-hosted object was the instance in SCOM of their application.

The diagram below illustrates a scenario where an application has a dependency on a (already discovered and managed) SQL database.  Here we want to create a Containment relationship and rollup health monitoring of the database to the local application.  The diagram shows the two valid scenarios where you can create a relationship and the invalid (and unfortunately, most common one).

Rule 2: Source and target existence in SCOM

In order for Operations Manager to accept the relationship inside your discovery data that is returned by the discovery script, both the source and target of the relationship must either already exist, or be defined in the discovery data.  This is often fine for the source, since we have just discovered our object so we know that exists.

The issue can often be with the target, because if we attempt to define a relationship to an object assuming it already exists in SCOM and it doesn’t (say because an agent is not installed on that host, or the discovery for the object is disabled) then all discovery data from that discovery instance is discarded by SCOM with no alert in the console!  I can’t explain why this occurs, this is just my experience.  If anyone reading this knows why this occurs, feel free to get in touch!

So if your discovery were to discover your application, and 5 component relationships and only 1 of those objects doesn’t actually exist, nothing will be discovered!

What this means practically is that you have two options, both of which have downsides:

  1. Discover both the source and target in your discovery, to ensure that both objects exist.
  2. Discover the source object, then have a second discovery targeting your object that creates the relationship.

The first option means that you know both objects exist, and you don’t have to actually rediscover every property, since if you only discover the key properties of the object SCOM knows it’s the same object as the pre-existing one and works its magic in the background using reference counting to merge them.  Reference counting means that if two objects have identical keys (and, in the case of hosted objects, identical parent object keys), SCOM assumes they are the same object and simply increments the reference count for the object.  As long as the reference count remains above zero, the object continues to exist, with all discovered properties.

The first downside to this approach is that if the two objects are hosted on different agents, you will need to enable agent proxying on the agent running your discovery.  This is because hosted applications ultimately specify the computer they are hosted upon, and you are now creating objects for another computer (even though it’s just a reference count).  The second downside is that when the 3rd party object (such as a SQL DB) is undiscovered by its original discovery, the object will live on in SCOM as long as you are still discovering it, because the reference count is still 1 – even if that object truly no longer exists (which could confuse the hell out of the SQL admins when they see a database still exists on an old server post DB migration).  It will also be missing any properties that the original discovery populated which you didn’t, which may cause all sorts of alerts to start appearing in the SCOM console when values expected by monitors are unpopulated.

So, the second option (only discover your source object, and have a second discovery create the relationship) has the advantage that if your 3rd party object doesn’t exist, you won’t have a “ghost” object left in SCOM, but your application will still exist because it was already discovered as part of a different workflow.  You also aren’t responsible for reverse engineering how your target object (and any of its properties) was discovered.  The downside here is that any other objects found in your second discovery still will not be present.  So if you had an app whose SQL relationships were found in one discovery and IIS websites in another, and one IIS website was missing, all the IIS websites would be absent, but your app and SQL DBs would be there.

There isn’t really a right answer; you’ve got to answer this one individually with your organisation and/or customer and determine which behaviour is better.  My gut preference when writing MPs for customers is to use the second option, and to have a property on my object that states how many component objects there are supposed to be.  That way we don’t generate false alerts or weird behaviour, and application owners can still see when an object is missing.  When writing an MP for yourself, you can ensure that all computers hosting components get agents and that everything is discovered properly.

Rule 3:  Identifying your objects

As we will see in a moment (we’ll get to some code soon, promise!) in order to create a relationship you need to be able to uniquely identify the source and target objects.  Your source object is probably your own object you’ve just discovered (or the target of your relationship discovery if you are using two discoveries, either way you know which object you want).

Unfortunately, unless you get fancy with the SCOM SDK and PowerShell discoveries, the only way to specify another object outside of your discovery is to recreate it in your discovery script with the same key properties.  In order to use the SDK, you’ve got to ensure that the machine running the discovery has the SCOM SDK libraries available, which is a pretty rare occurrence outside of management servers.

You don’t have to actually submit the object in your discovery results (depending on what you are doing with Rule 2, of course), but you need to be able to specify all key properties of the target object and, if it is hosted, its parent object’s key properties.  If the hosting parent doesn’t have a key, then specify its parent’s keys.  Eventually every hosted object reaches a parent with keys, because you’ll hit a Windows.Computer or Unix.Computer object.

So for a working example: SQL Databases.  In order to uniquely identify a SQL DB you’ll need to know:

  1. Its key property, the Database Name.
  2. Its hosting object’s key property, the SQL Instance name.
  3. The principal name of the agent computer these objects are hosted on (either the local machine’s principal name, or the remote machine’s principal name if the DB is on a remote SQL server), since SQL Databases are hosted objects.

Sounds simple, but what about objects that don’t have a key?  Well, since there is no key there can only ever be one instance on that host, so SCOM knows which object you are referring to.  The troublesome objects are ones like IIS websites.  They have a key property, but it’s an obscure one that isn’t commonly known by other components (in this case the Site ID, a value that is only unique on the IIS server and meaningless from an external connection’s point of view).  In these scenarios, if your data source doesn’t know what the key is, you are either going to have to try and remotely dial in and discover it yourself, or resort to some nasty workarounds using Group Populator modules, which are really a different article in themselves.

Down to business – the code

Here in my example code, I’m going to assume I have already performed the steps that discover my application property values, and that I have a series of variables containing my application properties and my target object’s keys.  A common scenario is that I have determined which SQL database the application uses from a connection string written in a config file or the local registry.  So now I want to discover my hosted local application, and a relationship to its SQL database (which may or may not be hosted locally; I’ll still have to submit the PrincipalName property either way, since SQL DBs are hosted objects).

'Already have appName, appProperty1, appProperty2 and appComputerName defined and populated
'Likewise, databaseName, dbInstanceName and databaseServer are already populated variables
Dim scomAPI, discoveryData, applicationInstance, databaseInstance, relationshipInstance
Set scomAPI = CreateObject("MOM.ScriptAPI")
Set discoveryData = scomAPI.CreateDiscoveryData(0, "$MPElement$", "$Target/Id$")
'Discover my Application object
Set applicationInstance = discoveryData.CreateClassInstance("$MPElement[Name='MySampleMP.ApplicationClass']$")
    Call applicationInstance.AddProperty("$MPElement[Name='MySampleMP.ApplicationClass']/Name$", appName)
    Call applicationInstance.AddProperty("$MPElement[Name='MySampleMP.ApplicationClass']/FirstProperty$", appProperty1)
    Call applicationInstance.AddProperty("$MPElement[Name='MySampleMP.ApplicationClass']/SecondProperty$", appProperty2)
    Call applicationInstance.AddProperty("$MPElement[Name='Windows!Microsoft.Windows.Computer']/PrincipalName$", appComputerName)
'If my discovery is targeting my pre-existing object (Rule 2, second option), then I don't need to call the line below.
Call discoveryData.AddInstance(applicationInstance)

'Now create my target relationship object, which I can optionally submit with my discovery, depending on what I am doing about Rule 2
Set databaseInstance = discoveryData.CreateClassInstance("$MPElement[Name='MSSQL!Microsoft.SQLServer.Database']$")
    Call databaseInstance.AddProperty("$MPElement[Name='MSSQL!Microsoft.SQLServer.Database']/DatabaseName$", databaseName)
    Call databaseInstance.AddProperty("$MPElement[Name='MSSQL!Microsoft.SQLServer.ServerRole']/InstanceName$", dbInstanceName)
    Call databaseInstance.AddProperty("$MPElement[Name='Windows!Microsoft.Windows.Computer']/PrincipalName$", databaseServer)
'If I am going to discover this object as well, as part of my Rule 2 decision, then call the line below.  Otherwise ignore it.
'Call discoveryData.AddInstance(databaseInstance)

'Now it's time to create my relationship!
Set relationshipInstance = discoveryData.CreateRelationshipInstance("$MPElement[Name='MySampleMP.Relationship.AppContainsDatabase']$")
     relationshipInstance.Source = applicationInstance
     relationshipInstance.Target = databaseInstance
Call discoveryData.AddInstance(relationshipInstance)
'Return all discovered objects.
Call scomAPI.Return(discoveryData)

That’s it! If I wanted to create multiple different relationships, I’d just have to create multiple relationship instances and an instance of each target object (most likely using a For Each loop).  Again, we don’t need to submit the target object instance (or even the source, if it already exists) in our discovery data unless we want to; we only need it so that we can specify it in the relationship instance.
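If you were writing the discovery as a PowerShell script rather than VBScript, the same looping pattern might look roughly like the sketch below.  This is only a sketch under the same assumptions as above: the class and relationship IDs are the hypothetical ones from the VBScript example, $databaseNames is assumed to be a pre-populated array, and $SourceId / $ManagedEntityId would be passed in as script arguments in place of $MPElement$ and $Target/Id$.

param($SourceId, $ManagedEntityId)
#Sketch only: a PowerShell-based discovery creating one relationship per database name
$api = New-Object -ComObject "MOM.ScriptAPI"
$discoveryData = $api.CreateDiscoveryData(0, $SourceId, $ManagedEntityId)

#Application instance (assumed already discovered elsewhere, so not added to the results here)
$appInstance = $discoveryData.CreateClassInstance("$MPElement[Name='MySampleMP.ApplicationClass']$")
$appInstance.AddProperty("$MPElement[Name='MySampleMP.ApplicationClass']/Name$", $appName)
$appInstance.AddProperty("$MPElement[Name='Windows!Microsoft.Windows.Computer']/PrincipalName$", $appComputerName)

ForEach ($databaseName in $databaseNames)
{
    #Recreate each target database by its key properties, as per Rule 3
    $dbInstance = $discoveryData.CreateClassInstance("$MPElement[Name='MSSQL!Microsoft.SQLServer.Database']$")
    $dbInstance.AddProperty("$MPElement[Name='MSSQL!Microsoft.SQLServer.Database']/DatabaseName$", $databaseName)
    $dbInstance.AddProperty("$MPElement[Name='MSSQL!Microsoft.SQLServer.ServerRole']/InstanceName$", $dbInstanceName)
    $dbInstance.AddProperty("$MPElement[Name='Windows!Microsoft.Windows.Computer']/PrincipalName$", $databaseServer)

    #One relationship instance per database
    $relationship = $discoveryData.CreateRelationshipInstance("$MPElement[Name='MySampleMP.Relationship.AppContainsDatabase']$")
    $relationship.Source = $appInstance
    $relationship.Target = $dbInstance
    $discoveryData.AddInstance($relationship)
}

#Return the discovery data to the workflow
$discoveryData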

Of course, if you are actually creating a relationship between two objects defined in your management pack, then you’ll probably want to submit both the source and target objects in your discovery (assuming they don’t already exist from a different discovery).

If you do have a scenario where you need to create a relationship based on an object with an unknown key value but other known properties, leave a comment below and I’ll look at writing another blog post detailing some of the ones I’ve come up with.

Hope this helps someone else out in the future, and saves them the experimentation and research time I lost to this rather simple activity!

Cheers,

Matthew


Scripting Series – Interesting things you can do with VBScript and Powershell – Part 5, Issues with Copy-item and Remove-Item

Posted by Matthew on October 13, 2011

In the final part of this series I’m going to show two strange behaviours you can get when running the Remove-Item and Copy-Item powershell cmdlets on the file system provider.

Remove-Item

This one is fairly simple – essentially, you may find when you ask Remove-Item to delete a folder structure using the -Recurse switch that it has a tendency to trip over itself and leave folders behind, as they are still marked as in use... by the cmdlet!  To overcome this we can simply set up a Do..While loop that checks if the folder exists and attempts to remove it (and all contents) continuously until we succeed (usually completes within 2 loops).


Do
{
      Write-Host "Attempting to remove Files Recursively.."
      Start-Sleep -Seconds 1
      Remove-Item $FilesLocation -Recurse -Force -ErrorAction SilentlyContinue
}
While (Test-Path $FilesLocation -ErrorAction SilentlyContinue)

For those unfamiliar, the Do..While construct will attempt an action once, and then check the criteria to see if the action should be repeated.  In this case Test-Path will return true if the path exists and false if it does not, so if the folder has not yet been deleted, another attempt will be made.  The -ErrorAction SilentlyContinue parameters simply stop the commands from writing out either the error condition we are explicitly handling (files locked in use) or the fact that the path does not exist (which is what we want in this scenario, so let’s not raise an error for that state).
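One caveat worth noting: if a file genuinely is locked by another process, the loop above will never exit.  A defensive variant (just a sketch, with an arbitrary cap of ten attempts) could look like this:

#Sketch: same retry approach, but give up after a fixed number of attempts
$maxAttempts = 10
$attempt = 0
Do
{
      $attempt++
      Write-Host "Attempting to remove Files Recursively (attempt $attempt).."
      Start-Sleep -Seconds 1
      Remove-Item $FilesLocation -Recurse -Force -ErrorAction SilentlyContinue
}
While ((Test-Path $FilesLocation -ErrorAction SilentlyContinue) -and ($attempt -lt $maxAttempts))

If (Test-Path $FilesLocation) { Write-Warning "Could not remove '$FilesLocation' after $maxAttempts attempts" }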

Copy-Item

This one has been around the internet a few times already, and in this case the solution is one I came across rather than wrote myself.  Unfortunately I’m not sure who the original author is, but if anyone knows I’ll gladly credit them.  Anyway, the issue is that Copy-Item has a slight behavioural quirk: if you try to copy a folder and the destination folder already exists, the item(s) to be copied are instead placed inside the pre-existing destination folder, in a subfolder.

The result is that if you tried to copy c:\foo to c:\bar, and bar already existed, you’d wind up with all your files from c:\foo inside a c:\bar\foo subfolder rather than directly inside c:\bar!
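If you want to see the quirk for yourself, a quick repro looks like this (example paths only, purely illustrative):

#Purely illustrative repro of the quirk, using example paths
New-Item C:\foo -ItemType Directory -Force | Out-Null
New-Item C:\foo\test.txt -ItemType File -Force | Out-Null
New-Item C:\bar -ItemType Directory -Force | Out-Null    #destination already exists

Copy-Item C:\foo C:\bar -Recurse -Force
Get-ChildItem C:\bar -Recurse    #test.txt appears under C:\bar\foo, not directly in C:\bar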

Thankfully, the function below sorts this behaviour out –

Function Copy-Directory
{
    Param(
        [System.String]$Source,
        [System.String]$Destination)

    #Strip any trailing wildcard from the source path
    $Source = $Source -replace '\*$'

    If (Test-Path $Destination)
    {
        #Destination already exists, so rewrite the source to a wildcard match on the
        #folder's contents rather than the folder itself
        Switch -regex ($Source)
        {
            '\\$'   {$Source = "$Source*"; break}
            '\w$'   {$Source = "$Source\*"; break}
            Default {break}
        }
    }
    Copy-Item $Source $Destination -Recurse -Force
}

Now you can call Copy-Directory folder1 folder2 and get consistent results – if the destination does not exist, it is created. If the destination does exist, then all files are copied into the pre-existing folder.

The function works by testing whether the destination folder already exists, and if it does, modifying the source path so that Copy-Item is instead looking for a wildcard match on the folder’s contents, rather than the source folder itself.
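To make that concrete, here is what the Switch block turns the source into in each case (example paths only):

#Example paths only - how $Source is rewritten when the destination already exists
Copy-Directory 'C:\foo'  'C:\bar'    #'C:\foo'  becomes 'C:\foo\*'  (ends in a word character)
Copy-Directory 'C:\foo\' 'C:\bar'    #'C:\foo\' becomes 'C:\foo\*'  (ends in a backslash)
Copy-Directory 'C:\foo*' 'C:\bar'    #the trailing '*' is stripped and then re-added as 'C:\foo\*'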


Scripting Series – Interesting things you can do with VBScript and Powershell Part 4 – Setting up HyperV host networking

Posted by Matthew on October 11, 2011

As you may recall from the introduction to this series, I was tasked with creating a script that would handle the setup/tear down of student lab machines that were to be used for short training courses.  The PCs belong to the training provider and it’s up to the instructor to come in before the course and set all of the student machines up.  Often 15 times, on a Sunday.

This post deals with the (relatively simple) task of setting up the virtual network adapter that is nearly always provided as an internal/external network on the student machines, specifically the IP settings, so that the guest VMs can communicate with the HyperV host.

Let’s take a look at the script first, and then I’ll walk you through it.  As noted in the first article, I used James O’Neill’s fantastic HyperV module to do the HyperV heavy lifting!

The Script


#Setup Internal HyperV Network if it doesn't already exist
If (!(Get-VMSwitch $NetworkName))
{
    New-VMInternalSwitch -VirtualSwitchName $NetworkName -Force | Out-Null
}
Else
{
    Write-Host "`nVirtual Network '$NetworkName' already exists, Skipping..."
}

#Setup Local Loopback adapter
#Find the PNP entity created for the virtual switch, then the network adapter associated with it
$vSwitch = Get-WmiObject -Query ('Select * from Win32_PnPEntity where name = "' + $NetworkName + '"')
$Query = "Associators of {$vSwitch} where ResultClass=Win32_NetworkAdapter"
$NicName = (Get-WmiObject -Query $Query).NetConnectionID
#Single quotes are deliberate here - Invoke-Expression expands "$NicName" when it evaluates the string
Invoke-Expression 'netsh interface ip set address "$NicName" static 192.168.1.150 255.255.255.0'
Write-Host "Server now has IP on internal network of '192.168.1.150'"

The code is fairly self-explanatory, but I’ll walk through it anyway.  First we use the HyperV module to determine whether an Internal network with the name given in $NetworkName already exists, and if not we create it.  If you haven’t seen it before, Out-Null is a PowerShell command that sends pipeline output into the aether, and is useful when you don’t want a cmdlet writing objects or text back to the console during execution (a lot of people instead just write to a variable they have no intention of using).

This will create a virtual network card on the host HyperV system, which can be seen in Network Connections.  The name you set in HyperV for the network will be the PNP device name, as shown below.

We then use that name to associate the PNP device with the network adapter, and then invoke good old netsh to set the adapter’s IP address for us automatically.

Why use those methods

I realize that the PNP device name is actually a property directly available on the Win32_NetworkAdapter class, so why didn’t I use it?  The short answer is that Win32_NetworkAdapter can have some very odd behaviours at times (watch what happens to your MAC address when you disable the network adapter…) and to avoid those issues I only used properties of the class I knew I could rely on – namely the NetConnectionID.

I could have also used WMI to set the IP address information, but it’s nowhere near as easy as calling netsh and certainly isn’t accomplished in a single neat line.  There is no harm in doing it with WMI if you wish (and it will be easier if you are making more complex configuration changes).
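For reference, the WMI route would look something like the sketch below (reusing $NicName from the script above; EnableStatic takes arrays of addresses and subnet masks):

#Sketch of the WMI alternative to the netsh call, reusing $NicName from earlier
$adapter = Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID = '$NicName'"
$adapterConfig = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "Index = $($adapter.Index)"
$adapterConfig.EnableStatic("192.168.1.150", "255.255.255.0") | Out-Null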


Integrating Operations Manager 2012 beta with Opalis / Orchestrator

Posted by Matthew on October 10, 2011

I’m lucky enough in my lab environment to have access to the Orchestrator beta as well as both SCOM 2007 and the SCOM 2012 beta.  I was recently tasked with creating a demo for a customer focused around SCOM 2012 and Orchestrator.  Whilst an updated integration pack for SCOM 2012 has yet to be released for Orchestrator (or indeed, Opalis 6.3), you can use the existing IP just fine, provided you complete a few workarounds.

Requirements

First thing – you’re going to need access to the SCOM 2007 R2 media to complete this.  As directed by the SCOM integration pack, install the SCOM 2007 R2 console on your action/runbook servers and your client/designer machines.  If you attempt to use the SCOM 2012 console in its place you will (in my experience) receive connection errors when attempting to use the IP.

Create Alert object workaround

Secondly – in order to use the Create Alert object, the integration pack would normally deploy a management pack into SCOM automatically the first time it is used.  Unfortunately the SCOM SDK has changed and the method previously employed no longer works (you will receive an error stating as much when the object attempts to run).  In order to resolve this, you will need to:

  1. Have Opalis/Orchestrator raise an alert in a SCOM 2007 environment
  2. Export the automatically imported management pack from the SCOM 2007 environment.  Note that as the MP is sealed, you will need to use the Export-ManagementPack PowerShell cmdlet, as the GUI will have the export option grayed out.  The management pack is called Opalis Integration Library:
     Get-ManagementPack | ? {$_.Name -match 'Opalis'} | Export-ManagementPack -Path c:\Folder\
  3. Import the management pack into your SCOM 2012 environment

Following this, you will now be able to use all of the IP objects in both SCOM 2007 and SCOM 2012.

For those without access to SCOM 2007, I’ve attached a copy of the management pack that you can import into your environment.  Note: the management pack is unsealed, as it’s been exported from within a SCOM environment.  If you are uncomfortable importing an unsealed MP into your environment, do not do so, and instead use the method above to obtain your own (still unsealed) version of the MP.

Link to Opalis.Integration.Library.zip
