.\Matthew Long

{An unsorted collection of thoughts}


Scripting Relationship Discovery in Operations Manager 2007 / 2012

Posted by Matthew on March 3, 2012

One of the things I’ve found myself doing for several recent customer management packs is discovering relationships between objects belonging to 3rd party management packs and in-house components.  In simpler terms: automatically linking an application to the SQL databases it relies on.  This is most useful because you no longer have to create Distributed Applications by hand for every application to build up a dependency health model, and you don’t have to replicate all the knowledge contained inside the SQL management packs.

The technique itself is actually fairly straightforward, but when trying to gather the necessary knowledge I found it scattered all over the place, often buried in busy, confusing articles.  So here is the concise knowledge you need to implement a relationship discovery of a 3rd party object inside a VBScript discovery.

*Update* Ok so in hindsight it wasn’t that concise.  It is, at least, comprehensive.

Limitations

The first thing you need to be aware of before trying to create relationships is the set of constraints you have to work within in Operations Manager.

Rule 1 : Hosting location

If either your source or target object is hosted on an Operations Manager agent, then you are constrained as to the type of relationships you can create.  You can only create a relationship to perform health rollup between objects if:

  1. Both objects are hosted on the same agent (i.e. two server roles)
  2. One object is hosted on an agent, and the other is not hosted (or the object is hosted by a non-hosted object).  Essentially the second object must exist on the RMS/Management servers.
  3. Neither object is hosted (both on the RMS/MS)

You can check if an object is hosted by looking at its class definition and seeing if it has the “Hosted” attribute set to “true”.

In one case, where we had a hosted application on one server and SQL and IIS components hosted on another, this meant creating a non-hosted version of the application (discovered alongside the local hosted copy), which then had relationships to the local application object and its components.  As far as the users were concerned, the non-hosted object was their application’s instance in SCOM.

The diagram below illustrates a scenario where an application has a dependency on an (already discovered and managed) SQL database.  Here we want to create a Containment relationship and roll up health monitoring of the database to the local application.  The diagram shows the two valid scenarios where you can create a relationship and the invalid (and, unfortunately, most common) one.

Rule 2: Source and target existence in SCOM

In order for Operations Manager to accept the relationship inside the discovery data returned by your discovery script, both the source and target of the relationship must either already exist, or be defined in the discovery data.  This is usually fine for the source, since we have just discovered our object so we know it exists.

The issue is often with the target: if we define a relationship to an object assuming it already exists in SCOM and it doesn’t (say because an agent is not installed on that host, or the discovery for the object is disabled), then all discovery data from that discovery instance is discarded by SCOM with no alert in the console!  I can’t explain why this occurs; this is just my experience.  If anyone reading this knows why, feel free to get in touch!

So if your discovery were to return your application and 5 component relationships, and just 1 of those component objects doesn’t actually exist, nothing will be discovered!

What this means in practice is that you have two options, both of which have downsides.

  1. Discover both the source and target in your discovery, to ensure that both objects exist.
  2. Discover the source object, then have a second discovery targeting your object that creates the relationship.

The first option means you know both objects exist, and you don’t actually have to rediscover every property: if you only discover the key properties of the object, SCOM knows it’s the same object as the pre-existing one and works its magic in the background, using reference counting to merge them.  Reference counting means that if two objects have identical keys (and, in the case of hosted objects, identical parent object keys) SCOM assumes they are the same object and simply increments the reference count for the object.  As long as the reference count remains above zero the object continues to exist, with all discovered properties.

The first downside to this approach is that if the two objects are hosted on different agents, you will need to enable agent proxying (“Allow this agent to act as a proxy”) on the agent running your discovery.  This is because hosted objects ultimately specify the computer they are hosted upon, and you are now creating objects for another computer (even though it’s just a reference count).  The second downside is that when the 3rd party object (such as a SQL DB) is undiscovered by its original discovery, the object will live on in SCOM for as long as you are still discovering it, because the reference count is still 1, even if the object truly no longer exists (which could confuse the hell out of the SQL admins when they see a database apparently still present on an old server after a DB migration).  It will also be missing any properties that the original discovery populated which you didn’t, which may cause all sorts of alerts to start appearing in the SCOM console when values expected by monitors are unpopulated.

So, the second option (only discover your source object, and have a second discovery create the relationship) has the advantage that if your 3rd party object doesn’t exist, you won’t have a “ghost” object left in SCOM, but your application will still exist because it was already discovered as part of a different workflow.  You also aren’t responsible for reverse engineering how your target object (and any of its properties) was discovered.  The downside here is that if any one target in your second discovery is missing, everything else found by that discovery will also not be present.  So if you had an app whose SQL relationships were found in one discovery and whose IIS websites were found in another, and one IIS website was missing, none of the IIS websites would be present, but your app and SQL DBs would be.

There isn’t really a right answer; you’ve got to decide this one individually with your organisation and/or customer and determine which behaviour is better.  My gut preference when writing MPs for customers is to use the second option, and to have a property on my object that states how many component objects there are supposed to be.  That way we don’t generate false alerts or weird behaviour, and application owners can still see when an object is missing.  When writing an MP for yourself, you can ensure that all servers hosting components get agents and that everything is discovered properly.

Rule 3:  Identifying your objects

As we will see in a moment (we’ll get to some code soon, promise!), in order to create a relationship you need to be able to uniquely identify the source and target objects.  Your source object is probably the object you’ve just discovered (or the target of your relationship discovery if you are using two discoveries; either way you know which object you want).

Unfortunately, unless you get fancy with the SCOM SDK and PowerShell discoveries, the only way to specify an object outside of your discovery is to recreate it in your discovery script with the same key properties.  In order to use the SDK, you’ve got to ensure that the machine running the discovery has the SCOM SDK libraries available, which is a pretty rare occurrence outside of management servers.

You don’t have to actually submit the object in your discovery results (depending on what you are doing with Rule 2, of course), but you need to be able to specify all key properties of the target object and, if it is hosted, its parent object’s key properties.  If the hosting parent doesn’t have a key, then specify its parent’s keys.  Eventually every hosted object has a parent with keys, because you’ll hit a Windows.Computer or Unix.Computer object.

So for a working example: SQL Databases.  In order to uniquely identify a SQL DB you’ll need to know:

  1. Its key property, the Database Name.
  2. Its hosting object’s key property, the SQL Instance Name.
  3. The principal name of the agent computer these objects are hosted on, since SQL databases are hosted objects (either the local machine’s principal name, or the remote machine’s principal name if the DB is on a remote SQL server).

Sounds simple, but what about objects that don’t have a key?  Well, since there is no key there can only ever be one instance on that host, so that’s fine: SCOM knows which object you are referring to.  The troublesome objects are ones like IIS websites.  They have a key property, but it’s an obscure one that isn’t commonly known by other components (in this case the Site ID, a value that is only unique on the IIS server and meaningless from an external connection’s point of view).  In these scenarios, if your data source doesn’t know what the key is, you are either going to have to try and remotely dial in and discover it yourself, or resort to some nasty workarounds using Group Populator modules which are really a different article in themselves.

Down to business – the code

Here in my example code, I’m going to assume I have already performed the steps that discover my application property values, and that I have a series of variables containing my application properties and my target object’s keys.  A common scenario is that I have determined which SQL database the application uses from a connection string written in a config file or the local registry.  So now I want to discover my hosted local application, and a relationship to its SQL database (which may or may not be hosted locally; I’ll still have to submit the PrincipalName property either way since SQL DBs are hosted objects).
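As an aside, here’s a minimal sketch of how those database variables might get populated from a connection string stored in the registry.  The registry path and value name are entirely hypothetical, and the parsing assumes a simple “Data Source=…;Initial Catalog=…” format, so treat it as an illustration rather than production code.

'Hypothetical example: read a connection string from the registry and pull out
'the server, instance and database names used by the discovery below.
Dim regShell, connectionString, connPart, serverParts
Set regShell = CreateObject("WScript.Shell")
connectionString = regShell.RegRead("HKLM\SOFTWARE\MySampleApp\ConnectionString")
'e.g. "Data Source=DBSERVER01\INSTANCE01;Initial Catalog=MyAppDB;Integrated Security=SSPI"
For Each connPart In Split(connectionString, ";")
    If InStr(1, connPart, "Data Source=", vbTextCompare) = 1 Then
        serverParts = Split(Mid(connPart, Len("Data Source=") + 1), "\")
        databaseServer = serverParts(0)  'May still need converting to the computer's principal (FQDN) name
        If UBound(serverParts) > 0 Then
            dbInstanceName = serverParts(1)
        Else
            dbInstanceName = "MSSQLSERVER"  'Default instance
        End If
    ElseIf InStr(1, connPart, "Initial Catalog=", vbTextCompare) = 1 Then
        databaseName = Mid(connPart, Len("Initial Catalog=") + 1)
    End If
Next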

'Already have appName, appProperty1, appProperty2 and appComputerName defined and populated
'Likewise, databaseName, dbInstanceName and databaseServer are already populated variables
Dim scomAPI, discoveryData, applicationInstance, databaseInstance, relationshipInstance
Set scomAPI = CreateObject("MOM.ScriptAPI")
Set discoveryData = scomAPI.CreateDiscoveryData(0, "$MPElement$", "$Target/Id$")
'Discover my Application object
Set applicationInstance = discoveryData.CreateClassInstance("$MPElement[Name='MySampleMP.ApplicationClass']$")
    Call applicationInstance.AddProperty("$MPElement[Name='MySampleMP.ApplicationClass']/Name$", appName)
    Call applicationInstance.AddProperty("$MPElement[Name='MySampleMP.ApplicationClass']/FirstProperty$", appProperty1)
    Call applicationInstance.AddProperty("$MPElement[Name='MySampleMP.ApplicationClass']/SecondProperty$", appProperty2)
    Call applicationInstance.AddProperty("$MPElement[Name='Windows!Microsoft.Windows.Computer']/PrincipalName$", appComputerName)
'If my discovery is targeting my pre-existing object (rule 2, second option), then I don't need to call the line below.
Call discoveryData.AddInstance(applicationInstance)

'Now create my target relationship object, which I can optionally submit with my discovery, depending on what I am doing about Rule 2
Set databaseInstance = discoveryData.CreateClassInstance("$MPElement[Name='MSSQL!Microsoft.SQLServer.Database']$")
    Call databaseInstance.AddProperty("$MPElement[Name='MSSQL!Microsoft.SQLServer.Database']/DatabaseName$", databaseName)
    Call databaseInstance.AddProperty("$MPElement[Name='MSSQL!Microsoft.SQLServer.ServerRole']/InstanceName$", dbInstanceName)
    Call databaseInstance.AddProperty("$MPElement[Name='Windows!Microsoft.Windows.Computer']/PrincipalName$", databaseServer)
'If I am going to discover this object as well as part of my Rule 2 decision, then uncomment the line below.  Otherwise ignore it.
'Call discoveryData.AddInstance(databaseInstance)

'Now it's time to create my relationship!
Set relationshipInstance = discoveryData.CreateRelationshipInstance("$MPElement[Name='MySampleMP.Relationship.AppContainsDatabase']$")
     relationshipInstance.Source = applicationInstance
     relationshipInstance.Target = databaseInstance
Call discoveryData.AddInstance(relationshipInstance)
'Return all discovered objects.
Call scomAPI.Return(discoveryData)

That’s it! If I wanted to create multiple different relationships I’d just have to create multiple relationship instances and an instance of each target object (most likely using a For Each loop, as sketched below).  Again, we don’t need to submit the target object instance (or even the source, if it already exists) in our discovery data unless we want to; we only need it so that we can specify it in the relationship instance.
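For illustration, here’s a minimal sketch of that loop.  It assumes a hypothetical databaseKeys collection, where each item carries the three identifying values from Rule 3; everything else reuses the same calls as the discovery above.

'Hypothetical: databaseKeys is a collection where each item exposes DatabaseName,
'InstanceName and ComputerName for one database the application depends on.
Dim dbKeys, dbInstance, dbRelationship
For Each dbKeys In databaseKeys
    Set dbInstance = discoveryData.CreateClassInstance("$MPElement[Name='MSSQL!Microsoft.SQLServer.Database']$")
        Call dbInstance.AddProperty("$MPElement[Name='MSSQL!Microsoft.SQLServer.Database']/DatabaseName$", dbKeys.DatabaseName)
        Call dbInstance.AddProperty("$MPElement[Name='MSSQL!Microsoft.SQLServer.ServerRole']/InstanceName$", dbKeys.InstanceName)
        Call dbInstance.AddProperty("$MPElement[Name='Windows!Microsoft.Windows.Computer']/PrincipalName$", dbKeys.ComputerName)
    Set dbRelationship = discoveryData.CreateRelationshipInstance("$MPElement[Name='MySampleMP.Relationship.AppContainsDatabase']$")
        dbRelationship.Source = applicationInstance
        dbRelationship.Target = dbInstance
    Call discoveryData.AddInstance(dbRelationship)
Next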

Of course, if you are actually creating a relationship between two objects defined in your management pack, then you’ll probably want to submit both the source and target objects in your discovery (assuming they don’t already exist from a different discovery).

If you do have a scenario where you need to create a relationship based on an object with an unknown key value but other known properties, leave a comment below and I’ll look at writing another blog post detailing some of the ones I’ve come up with.

Hope this helps someone else out in the future, and saves them the experimentation and research time I lost to this rather simple activity!

Cheers,

Matthew


Scripting Series – Interesting things you can do with VBScript and Powershell – Post 2, UAC Elevation

Posted by Matthew on March 6, 2011

In the first challenge in this series, I covered script self-deletion.  In this post, I’m going to talk about dealing with UAC elevation in VB and PowerShell scripts, ways of detecting if we are running as an administrator, and how to trigger a request for elevation.  There are a lot of other ways of doing this, but these are two methods that I find work pretty well.

Firstly, a note on UAC elevation and how it works.  Elevation is performed on a per-process basis at initialisation, so once a process has been started without administrative rights, the only way to gain those rights is to restart the process or launch a child process and request that it be granted admin rights.

The other important thing to remember is that when a non-elevated process checks group memberships for a user context that does have admin rights, the administrative groups are not returned in the result set.  Effectively, to a non-elevated process, no matter which user the process runs as, that user is not in any admin groups.

First up, VBScript.

Option Explicit
Dim App
If WScript.Arguments.length = 0 Then
  'No argument yet, so relaunch ourselves elevated via the RunAs verb
  Set App = CreateObject("Shell.Application")
  App.ShellExecute "wscript.exe", Chr(34) & WScript.ScriptFullName & Chr(34) & " uac", "", "runas", 1
Else
  'We were launched with the marker argument, so this is the relaunched copy
  'Perform Script Functions...
End If

WScript.Quit()

This is quite an elegant solution, if not the most efficient.  Essentially, the script first checks whether it was started with an argument indicating we’ve already relaunched the process as an administrator.  If that argument is not found, we launch a child process with the RunAs verb and then exit.  Starting the process with the RunAs verb will prompt for confirmation of administrative rights if we are not already in such a context.  The second process here is the WScript engine running our current VBScript’s path, plus a marker argument.  If our argument is found (in this case the first argument, uac), then rather than launching a child process we instead carry on with the script’s main workload.

Obviously if your script accepts arguments, make sure you pass the other arguments on to your new process accordingly!  Note that in the above script, if you run it with UAC off, or if you launch it the first time with admin rights, you won’t see a prompt and the script will just continue (but still create the second process).  One way to avoid that second process is sketched below.
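A commonly used trick (a sketch, not part of the original script, so test it on your own OS versions) is to detect elevation up front by attempting to read a registry key that only elevated processes can read, and skip the relaunch when it succeeds:

'Sketch: returns True if the current process appears to be elevated.
'Reading the LocalService hive (S-1-5-19) normally only succeeds from an elevated process.
Function IsElevated()
    Dim shell
    Set shell = CreateObject("WScript.Shell")
    On Error Resume Next
    Err.Clear
    shell.RegRead "HKEY_USERS\S-1-5-19\Environment\TEMP"
    IsElevated = (Err.Number = 0)
    On Error GoTo 0
End Function

With that in place, the outer check could become If WScript.Arguments.length = 0 And Not IsElevated() Then, so an already-elevated launch falls straight through to the main workload without spawning a second process.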

Next up, Powershell.  As the Powershell process isn’t quite as  lightweight, we’ll do a check to see if this process is operating with the correct rights before trying to elevate.


Function Test-CurrentAdminRights
{
    #Return $True if the process has admin rights, otherwise $False
    $User = [System.Security.Principal.WindowsIdentity]::GetCurrent()
    $Role = [System.Security.Principal.WindowsBuiltInRole]::Administrator
    return (New-Object Security.Principal.WindowsPrincipal $User).IsInRole($Role)
}

The function Test-CurrentAdminRights checks whether the user that the script (the powershell.exe process) is running under is in the Administrator role.  As I mentioned earlier, because the user isn’t marked as being in the administrative groups unless the process is elevated, this will only ever return True if the process is running in an administrative context.

Personally, if the function returns False I’d prefer to throw an exception or a message back to the user asking them to launch the script from an administrative console.  The reason for this is that when we launch a new PowerShell process it might not have access to the same snap-ins, variables, current working directory (administrative PS consoles start in C:\Windows\System32), etc.  However, the function below will elevate the current script if you need it to:

Function Invoke-AsAdmin
{
    Param
    (
        [System.String]$ArgumentString = ""
    )
    $NewProcessInfo = New-Object "Diagnostics.ProcessStartInfo"
    $NewProcessInfo.FileName = [System.Diagnostics.Process]::GetCurrentProcess().Path
    #Inside a function $MyInvocation.MyCommand refers to the function itself,
    #so use ScriptName to get the path of the calling script (quoted in case of spaces)
    $NewProcessInfo.Arguments = "-file `"$($MyInvocation.ScriptName)`" $ArgumentString"
    $NewProcessInfo.Verb = "runas"
    $NewProcess = [Diagnostics.Process]::Start($NewProcessInfo)
    $NewProcess.WaitForExit()
}

Just pass in any arguments you need to this function, and it will create the necessary process.


Scripting Series – Interesting things you can do with VBScript and Powershell – Post 1, Self Deletion

Posted by Matthew on February 25, 2011

For the first challenge I’m going to tackle in this series, we have the problem of self-deletion.

After quite a bit of experimentation, I found a PowerShell script cannot delete itself without help from some outside source.  Having the script set up a scheduled task on a timer to delete itself is one option, and scheduled tasks in PowerShell are certainly well documented on the internet.

However, as I already wanted a simple way for students to clean up their own machines (and telling someone who has never used PowerShell to run as an admin, set the execution policy, etc. isn’t fun) I instead decided to go with a VBScript.  As the Windows Script Host copies the entire script into memory and then executes it, a VBScript can not only trigger my cleanup PowerShell script with the correct arguments, it can then also delete the .ps1 file and itself!  All the student has to do is double-click a shortcut on their desktop.

Here is a sample file that does the job.

Option Explicit
Dim FSO, VbScript, PowerShellScript, Shell, Cmd, CurrentDirectory, Answer
Set Shell  = CreateObject("WScript.Shell")
Set FSO = CreateObject("Scripting.FileSystemObject")
PowerShellScript = "C:\Training Lab\CleanupScript.ps1"
Answer = MsgBox("Are you sure you want to Remove all lab files?",VBYesNo,"Cleanup Confirm")
If Answer = 6 Then
 'Copy script to current folder
 CurrentDirectory = left(WScript.ScriptFullName,(Len(WScript.ScriptFullName))-(Len(WScript.ScriptName)))
 FSO.GetFile(PowerShellScript).Copy CurrentDirectory & "CleanupScript.ps1", True
 
 'Run Powershell Script
 Cmd = "powershell -executionpolicy RemoteSigned -Command ""& {cd "& CurrentDirectory &"; .\CleanupScript.ps1}"""
 Shell.Run Cmd, 4, True
 'Cleanup Files
 VbScript = Wscript.ScriptFullName
 FSO.DeleteFile CurrentDirectory & "CleanupScript.ps1", True

 FSO.DeleteFile VbScript, True
 WScript.Echo "Cleanup Finished"
Else
    Msgbox "Cleanup Cancelled."
End If
WScript.Quit

This fairly simple script sits on the user’s desktop and, when run, prompts the user to confirm that they would like to clean up the lab (just going ahead and doing it doesn’t seem like a wise idea for something so easily launched!).
Once confirmed, we copy the PowerShell script out of its resources folder to the current directory.  This may not be necessary; the reason I had to do it was that I had placed the PowerShell script in a folder it was going to try and delete, so running it from that location wasn’t going to work.

We then build an argument string to run PowerShell.  I’ve used -Command rather than -File so that I can change the working directory of PowerShell.  This is because my script is going to use the working directory, and when running in an elevated shell I don’t want the path to be C:\Windows\System32!  I’ve also specified “-ExecutionPolicy RemoteSigned” so that I don’t have to worry about what the system’s execution policy is currently set to.

Make sure when using the Shell.Run method that you specify the bWaitOnReturn argument as True.  Otherwise, your VBScript is going to try and delete things whilst they are still in use.  I’ve specified that the PowerShell window be shown (window style 4) as the script displays progress reports to the user, but you could hide it using style 0 if you wished.

Finally, we get the path to our currently executing VBScript and delete both the PowerShell script and the VBScript itself.  All done!

Obviously this method has a couple of drawbacks.  Now I have to maintain two script files, and what if I change the name of the PowerShell script (or its path)?  Additionally, what if my script needs admin privileges and UAC is enabled?

I’ll address all of those points in later articles in this series.


Scripting Series – Interesting things you can do with VBScript and Powershell

Posted by Matthew on February 25, 2011

I was recently tasked with creating a script (language was my choice) that could set up a bunch of machines for students undertaking some training using virtual machines.  The student servers are not managed by System Center Virtual Machine Manager and may not even be network connected, so the script was going to have to do all the hard work of copying machines and resource files from the USB source, staging them in sensible places, importing the VMs into Hyper-V and performing some other configuration tasks.  As the training was also taking place in a public training centre, it also had to help tear the whole thing down again afterwards, including (in order to protect IP) itself!

Naturally PowerShell was a good choice for this task as it can accomplish most of the above without breaking a sweat.  Rather than re-invent the wheel, I used James O’Neill’s fantastic HyperV module.  All I had to deal with now were some other interesting challenges, namely:

  1. Script self-deletion, so the cleanup script can remove itself along with the lab files.
  2. UAC elevation, detecting admin rights and requesting elevation when needed.

Across a series of blog posts, I’ll show how I overcame these problems and created a pretty feature-rich script for setting up lab environments.  Enjoy!
