# Posts Tagged ‘Powershell’

## Deleting a SCOM MP which the Microsoft.SystemCenter.SecureReferenceOverride MP depends upon

Posted by Matthew on December 14, 2012

If you’ve ever imported a management pack which contained a Run As profile into SCOM, you will know the pain that awaits you if you ever need to delete it (most commonly when the latest version of the MP doesn’t support an upgrade via import).

The most discussed option I’ve seen for dealing with this is to:

1. Delete the offending Run As Account(s) from the Run as Profile.
2. Export the Microsoft.SystemCenter.SecureReferenceOverride MP
3. Remove the reference that gets left behind when you delete a Run As Profile configuration from the raw xml.
4. Increment the version number (again in the xml)
5. Reimport it.

However, there is another way that doesn’t rely on you having to import/export or having to touch any xml, just a bit of Powershell!  The below is for SCOM 2012, but the same principle applies for 2007; just use the appropriate cmdlets/methods.

1. Open a powershell session with the Operations Manager module/snappin loaded.
2. Type: `$MP = Get-SCOMManagementPack -Name Microsoft.SystemCenter.SecureReferenceOverride`
3. Now we can view the referenced management packs by typing `$MP.References`
4. From the list of items in the Key column, note down the alias of the MP you wish to delete.  If you are having trouble finding it, the Value column lists the full ID of each MP.
5. Now that we know the MP alias, we can remove it from the Secure Reference MP by typing `$MP.References.Remove("yourMPAliasGoesHere")`
6. Now we can verify the MP is valid by entering `$MP.Verify()` to ensure there are no orphaned overrides, etc.
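Put together, the whole procedure is only a few lines. The sketch below assumes a SCOM 2012 session with the OperationsManager module loaded; "YourMPAlias" is a placeholder for the alias found in step 4, and the final AcceptChanges() call is an assumption drawn from the SCOM SDK (it commits the modified MP back to the management group) rather than something spelled out in the steps above:

```powershell
# Sketch: remove an MP reference from the Secure Reference override MP.
# Assumes the OperationsManager module is loaded; "YourMPAlias" is a placeholder.
$MP = Get-SCOMManagementPack -Name Microsoft.SystemCenter.SecureReferenceOverride

# List the references so the alias of the MP to remove can be found
$MP.References | Format-Table Key, Value

# Remove the reference by its alias, then check the MP is still valid
$MP.References.Remove("YourMPAlias")
$MP.Verify()

# Commit the edit back to the management group (assumption from the SCOM SDK)
$MP.AcceptChanges()
```

With the reference gone, the offending MP can then be deleted normally without the dependency error.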
```powershell
}
While (Test-Path $FilesLocation -ErrorAction SilentlyContinue)
```

For those unfamiliar, the Do..While construct will attempt an action once, and then check the criteria to see if the action should be repeated. In this case Test-Path will return true if the path exists and false if it does not, so if the folder has not yet been deleted, another attempt will be made. The -ErrorAction SilentlyContinue parameters simply stop the commands from writing out either the error condition we are explicitly handling (files locked in use) or that the path does not exist (which is what we want in this scenario, so let's not raise an error for that state).

## Copy-Item

This one has been around the internet a few times already, and in this case the solution was one I came across. Unfortunately I'm not sure who the original author is, but if anyone knows I'll gladly credit them. Anyway, the issue is that Copy-Item has a slight behavioural quirk: if you try to copy a folder, and the destination folder name already exists, the item(s) to be copied are instead placed inside the pre-existing destination folder, in a subfolder. The result is that if you tried to copy c:\foo to c:\bar, and bar already existed, you'd wind up with all your files from c:\foo inside a c:\bar\foo subfolder! Thankfully, the function below sorts this behaviour out:

```powershell
Function Copy-Directory
{
    Param(
        [System.String]$Source,
        [System.String]$Destination)

    $Source = $Source -replace '\*$'
    If (Test-Path $Destination)
    {
        Switch -regex ($Source)
        {
            '\\$' {$Source = "$Source*"; break}
            '\w$' {$Source = "$Source\*"; break}
            Default {break}
        }
    }
    Copy-Item $Source $Destination -Recurse -Force
}
```


Now you can call Copy-Directory folder1 folder2 and get consistent results – if the destination does not exist, it is created. If the destination does exist, then all files are copied into the pre-existing folder.

The function works by testing if the destination folder already exists, and if it does, modifying the source criteria so that Copy-Item is instead looking for a wildcard match on the folder's contents, rather than the source folder itself.
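To make the rewriting concrete, here is the switch logic restated as a standalone helper (Resolve-CopySource is a hypothetical name for illustration, not part of the original function), together with what each input shape becomes when the destination already exists:

```powershell
# Re-statement of Copy-Directory's source-rewriting logic as a pure function,
# so the three input shapes can be compared side by side
Function Resolve-CopySource
{
    Param([System.String]$Source)

    # Strip any trailing wildcard the caller supplied
    $Source = $Source -replace '\*$'
    Switch -regex ($Source)
    {
        '\\$' {$Source = "$Source*"; break}   # ends in a backslash: append *
        '\w$' {$Source = "$Source\*"; break}  # ends in a word character: append \*
        Default {break}
    }
    $Source
}

Resolve-CopySource 'c:\foo'     # c:\foo\*
Resolve-CopySource 'c:\foo\'    # c:\foo\*
Resolve-CopySource 'c:\foo\*'   # c:\foo\*
```

All three spellings collapse to a wildcard over the folder's contents, which is why the copy lands inside the existing destination rather than in a nested subfolder.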

## Scripting Series – Interesting things you can do with VBScript and Powershell Part 4 – Setting up HyperV host networking

Posted by Matthew on October 11, 2011

As you may recall from the introduction to this series, I was tasked with creating a script that would handle the setup/tear down of student lab machines that were to be used for short training courses.  The PCs belong to the training provider and it’s up to the instructor to come in before the course and set all of the student machines up.  Often 15 times, on a Sunday.

This post deals with the (relatively simple) task of setting up the virtual network adapter that is nearly always provided as an internal/external network on the student machines, specifically the IP settings, so that the guest VMs can communicate with the host HyperV server.

Let’s take a look at the script first, and then I’ll walk you through it.  As noted in the first article, I used James O’Neill’s fantastic HyperV Module to accomplish the HyperV lifting!

## The Script


```powershell
#Setup Internal HyperV Network if it doesn't already exist
If (!(Get-VMSwitch $NetworkName))
{
    New-VMInternalSwitch -VirtualSwitchName $NetworkName -Force | Out-Null
}
Else
{
    Write-Host "`nVirtual Network '$NetworkName' already exists, Skipping..."
}

#Setup Local Loopback adapter
$vSwitch = Get-WmiObject -Query ('Select * from Win32_PnPEntity where name = "' + $NetworkName + '"')
$Query = "Associators of {$vSwitch} where ResultClass=Win32_NetworkAdapter"
$NicName = (Get-WmiObject -Query $Query).NetConnectionID
Invoke-Expression 'netsh interface ip set address "$NicName" static 192.168.1.150 255.255.255.0'
Write-Host "Server now has IP on internal network of '192.168.1.150'"
```
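As an aside, on Server 2012 and later the in-box Hyper-V and NetTCPIP modules can achieve the same result without a third-party module, WMI or netsh. This is a sketch under the assumption those modules are available; it relies on the fact that an internal switch surfaces a host NIC named "vEthernet (&lt;switch name&gt;)":

```powershell
# Sketch using the built-in Hyper-V and NetTCPIP modules (Server 2012+)
If (-not (Get-VMSwitch -Name $NetworkName -ErrorAction SilentlyContinue))
{
    New-VMSwitch -Name $NetworkName -SwitchType Internal | Out-Null
}

# An internal switch creates a host adapter named "vEthernet (<switch name>)",
# so no WMI association query is needed to find it
$Nic = Get-NetAdapter -Name "vEthernet ($NetworkName)"
New-NetIPAddress -InterfaceIndex $Nic.ifIndex -IPAddress 192.168.1.150 -PrefixLength 24 | Out-Null
```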



The code is fairly self explanatory, but I’ll walk through it anyway.  First we use the HyperV module to determine if there is an Internal network with the given name in $NetworkName already in existence, and if not we create it.  If you haven’t seen it before, Out-Null is a Powershell command to send pipeline information into the aether, and is useful when you don’t want a cmdlet writing back objects or text to the console during execution (a lot of people instead just write to a variable they have no intention of using).

This will create a virtual network card on the host HyperV system, which can be seen in Network Connections.  The name you set in HyperV for the network will be the PNP device name.  We then use that name to associate the PNP device to the network adapter, and then invoke good old netsh to set the adapter for us automatically.

## Why use those methods

I realize that the PNP device name is actually a property directly available on the Win32_NetworkAdapter class, so why didn’t I use it?  The short answer is that the NetworkAdapter class can have some very odd behaviours sometimes (watch what happens to your MAC address when you disable the network adapter..) and to avoid those issues I only used properties of the class I knew I could rely on – namely the NetConnectionID.

I could have also used WMI to set the IP address information, but it’s nowhere near as easy as calling netsh and certainly isn’t accomplished in a single neat line.  There is no harm in doing it using WMI if you so wish (and that will be easier if you are making complex configuration changes).

Posted in Computing | Tagged: , | Leave a Comment »

## Scripting Series – Interesting things you can do with VBScript and Powershell – Post 2, UAC Elevation

Posted by Matthew on March 6, 2011

In the first challenge in this series, I covered script self deletion.
In this post, I’m going to talk about dealing with UAC elevation in VB and Powershell scripts, ways of detecting if we are running as an administrator, and how to trigger a request for elevation.  There are a lot of other ways of doing this, but these are two methods that I find work pretty well.

Firstly, a note on UAC elevation and how it works.  Elevation is performed on a per-process basis, at initialisation, so once a process has been started without administrative rights, the only way to gain those rights is to restart the process or launch a child process and request that it be granted admin rights.  The other important thing to remember is that when a non-elevated process checks group memberships for a user context that does have admin rights, that user is not returned in the results set.  Effectively, to non-elevated processes, no matter what user the process is run with, that user is not in any admin groups.

First up, VBScript.

```vb
Option Explicit
Dim App

If WScript.Arguments.length = 0 Then
    Set App = CreateObject("Shell.Application")
    App.ShellExecute "wscript.exe", Chr(34) & WScript.ScriptFullName & Chr(34) & " uac", "", "runas", 1
Else
    'Perform Script Functions...
End If

WScript.Quit()
```

This is quite an elegant solution, if not the most efficient.  Essentially what the script does is first check to see if the script was started with an argument indicating we’ve run the process as an administrator explicitly.  If that argument is not found, we create a child process with the RunAs verb, and wait for that process to finish before we continue.  Starting the process with the RunAs verb will prompt for confirmation of administrative rights if we are not already in such a context.  The second process here is the WScript engine and our current VBScript’s path.  If our argument is found (in this case, the first argument, uac) then rather than launching our child process, we instead carry on with our script’s main workload.
Obviously if your script accepts arguments, make sure you pass the other arguments on to your new process accordingly!  Note that in the above script, if you run the script with UAC off, or if you launch it the first time with admin rights, you won’t see a prompt and the script will just continue (but still create the second process).

Next up, Powershell.  As the Powershell process isn’t quite as lightweight, we’ll do a check to see if this process is operating with the correct rights before trying to elevate.

```powershell
Function Test-CurrentAdminRights
{
    #Return $True if process has admin rights, otherwise $False
    $User = [System.Security.Principal.WindowsIdentity]::GetCurrent()
    $Role = [System.Security.Principal.WindowsBuiltinRole]::Administrator
    return (New-Object Security.Principal.WindowsPrincipal $User).IsInRole($Role)
}
```

The function Test-CurrentAdminRights checks to see if the user that the script (the powershell.exe process) is running under is in the Administrator role.  As I mentioned earlier, the user isn’t marked as being in the administrative groups unless the process is operating as an admin, so this will only ever return True if the process is running under an administrative context.

Personally, if the function returns false I’d prefer to throw an exception or message back to the user to ask them to launch the script from an administrative console.  The reason for this is that when we launch a new powershell process it might not have access to the same snapins, variables, current working directory (administrative PS consoles start in C:\Windows\System32), etc.  However, the below function will elevate the current script if you need it to:

```powershell
Function Invoke-AsAdmin
{
    Param (
        [System.String]$ArgumentString = ""
    )
    $NewProcessInfo = New-Object "Diagnostics.ProcessStartInfo"
    $NewProcessInfo.FileName = [System.Diagnostics.Process]::GetCurrentProcess().Path
    # Script-scope MyInvocation gives the script's path; inside a function,
    # plain $MyInvocation would describe the function instead
    $NewProcessInfo.Arguments = "-file " + $Script:MyInvocation.MyCommand.Definition + " $ArgumentString"
    $NewProcessInfo.Verb = "runas"
    $NewProcess = [Diagnostics.Process]::Start($NewProcessInfo)
    $NewProcess.WaitForExit()
}
```

Just pass in any arguments you need to this function, and it will create the necessary process.

Posted in Computing | Tagged: , | Leave a Comment »

## Scripting Series – Interesting things you can do with VBScript and Powershell – Post 1, Self Deletion

Posted by Matthew on February 25, 2011

For the first challenge I’m going to tackle in this series, we have the problem of self deletion.  After quite a bit of experimentation, I found a Powershell script cannot delete itself without help from some outside source.  Having the script set up a scheduled task on a timer to delete itself is one option, and scheduled tasks in Powershell are certainly well documented on the internet.  However, as I already wanted a simple way for students to clean up their own machines (and telling someone who has never used Powershell to run as an admin, set execution policy etc isn’t fun) I instead decided to go with a VBScript.  As the Windows Scripting Host copies the entire script into memory and then executes it, a VBScript can not only trigger my cleanup Powershell script with the correct arguments, it can then also delete the .ps1 file and itself!  All the student has to do is double click on a shortcut on their desktop.  Here is a sample file that does the job.
```vb
Option Explicit
Dim FSO, VbScript, PowerShellScript, Shell, Cmd, CurrentDirectory, Answer

Set Shell = CreateObject("WScript.Shell")
Set FSO = CreateObject("Scripting.FileSystemObject")
PowerShellScript = "C:\Training Lab\CleanupScript.ps1"

Answer = MsgBox("Are you sure you want to Remove all lab files?", VBYesNo, "Cleanup Confirm")

If Answer = 6 Then
    'Copy script to current folder
    CurrentDirectory = Left(WScript.ScriptFullName, (Len(WScript.ScriptFullName)) - (Len(WScript.ScriptName)))
    FSO.GetFile(PowerShellScript).Copy CurrentDirectory & "CleanupScript.ps1", True

    'Run Powershell Script
    Cmd = "powershell -executionpolicy RemoteSigned -Command ""& {cd " & CurrentDirectory & "; .\CleanupScript.ps1}"""
    Shell.Run Cmd, 4, True

    'Cleanup Files
    VbScript = WScript.ScriptFullName
    FSO.DeleteFile CurrentDirectory & "CleanupScript.ps1", True
    FSO.DeleteFile VbScript, True
    WScript.Echo "Cleanup Finished"
Else
    MsgBox "Cleanup Cancelled."
End If

WScript.Quit
```

This fairly simple script sits on the user’s desktop and, when run, will prompt the user to ask if they would like to clean up the lab (just going ahead and doing it doesn’t seem like a wise idea for something so easily launched!).  Once confirmed, we copy the Powershell script out of its resources folder to the current directory.  This may not be necessary; the reason I had to do it was that I placed the Powershell script in a folder it was going to try and delete, so running it from that location wasn’t going to work.

We then build an argument string to run Powershell.  I’ve used -Command rather than -File so that I can change the working directory of Powershell.  This is because my script is going to use the working directory, and when running using an elevated shell I don’t want the path to be c:\windows\system32!  I’ve also specified “-executionpolicy RemoteSigned” so that I don’t have to worry about what the system’s execution policy is currently set to.
Make sure when using the Shell.Run method you specify the bWaitOnReturn argument as True.  Otherwise, your VBScript is going to try and delete things whilst they are still in use.  I’ve specified that the Powershell window be shown (mode 4) as the script displays progress reports to the user, but you could hide it using mode 0 if you wished.  Finally, we get the path to our currently executing VBScript and delete both the Powershell script and the VBScript itself.  All done!

Obviously this method has a couple of drawbacks.  Now I have to maintain two script files, and what if I change the name of the Powershell script (or the path)?  Additionally, what if my script needs admin privileges and UAC mode is enabled?  I’ll address all of those points in later articles in this series.

Posted in Computing | Tagged: , | 1 Comment »

## Scripting Series – Interesting things you can do with VBScript and Powershell

Posted by Matthew on February 25, 2011

I was recently tasked with creating a script (language was my choice) that can set up a bunch of machines for students undertaking some training using virtual machines.  The student servers are not managed by System Center Virtual Machine Manager and may not even be network connected, so the script was going to have to do all the hard work of copying machines and resource files from the USB source, staging them in sensible places, importing the VMs into HyperV and performing some other configuration tasks.  As the training was also taking place in a public training centre, it also had to help tear the whole thing down again afterwards, including (in order to protect IP) itself!  Naturally Powershell was a good choice for this task as it can accomplish most of the above without breaking a sweat.  Rather than re-invent the wheel, I used James O’Neill’s fantastic HyperV Module.
All I had to deal with now were some other interesting challenges.  Across a series of blog posts, I’ll show how I overcame these problems and created a pretty feature-rich script for setting up lab environments.  Enjoy!

Posted in Computing | Tagged: , , | 3 Comments »

## MS Active Directory Powershell Module seems a bit hit and miss…

Posted by Matthew on April 9, 2009

I’m at a Microsoft Active Directory Services workshop this week, and one of the things I’ve come across (a little late, it seems..) is the Active Directory Powershell module that ships with Server 2008 R2.  It’s… interesting.  Obviously you’ve got a lot of 3rd party AD cmdlets and scripts already out there, so comparisons are going to be drawn.  While some of the decisions and implementations seem sensible, I find two things a little odd.

The first one is – why on earth does the default psdrive set up by the AD provider use X500 format?  This means that because the paths now contain ‘,’ and ‘=’ you’ve got to encase paths in speech marks, and tab completion will not function for paths!  This seems bizarre given that you can create a drive using Canonical format instead (which is firstly a damn sight easier to read and type, and secondly supports tab completion), but you have to use a switch that isn’t normally recognised by New-PSDrive…

Second thing – the objects returned by cmdlets.  One of the things that I believe is important about Powershell is that once you’ve learned how to retrieve and filter objects in one sphere, you’ve pretty much got it down for anything else we can hook into.  Sending an object off to Get-Member to explore it becomes pretty standard behaviour.  However, the AD cmdlets only return an extremely limited set of values for most objects.  User objects in particular are crippled by this.  While all the AD cmdlets have a -Filter parameter that allows us to search on properties that aren’t normally retrieved (read – most of them!)
it would have been nice to be able to type:

```powershell
Get-ADUser | Where {$_.Description -like "*"} | Ft DisplayName,Description
```

and get a listing back.  Now, I understand that by using the -Filter parameter I’m not grabbing all AD users and then searching through them on the local machine, but the fact that even with the -Filter parameter the searched-for attribute values are not appended to the returned object means that although I can go and grab all users out there that have a VBScript set as their logon script, I can’t display the script in the results!
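For what it’s worth, the released version of the module tackles exactly this with a -Properties parameter, which appends the requested attributes to the returned objects. A sketch of the logon-script example (ScriptPath is the AD module’s name for the logon script attribute):

```powershell
# -Filter searches server-side; -Properties pulls back attributes that are
# not in the default result set, so they can actually be displayed
Get-ADUser -Filter 'ScriptPath -like "*.vbs"' -Properties ScriptPath |
    Format-Table Name, ScriptPath
```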

…Has no one on the AD Powershell team seen the Exchange *-User cmdlets?  They may not be able to return custom attributes (or even most hidden ones) but at least they return the ones that you’re probably going to want to use most of the time.  Or, use the same method as the ADSI provider, and provide a subset of attributes when returning results, and append the attribute searched on to the return object.

Still a good start overall though, especially the way they’ve handled connection state (as long as you don’t mind changing path during heavy operations).  Let’s just hope that they make some changes to behaviour before the final release..

## Thoughts after the “Managing Windows Servers using Powershell V2” Technet event

Posted by Matthew on February 18, 2009

It’s been just over a week since a colleague and I attended a Technet event in London centred around managing Windows servers (and, to a greater extent, everything) using Powershell V2.  I thought it was about time I A) posted my thoughts about what was discussed there and B) actually kept up with this blog.

The event itself was very informal, hosted as it was by James O’Neill [MS Evangelist] and Richard Siddaway [MVP], so it was blessedly low on marketing and high on content.  I think overall it actually deviated somewhat from “Using Powershell V2” to “Using Powershell with Windows 2008 R2”, which is an important but subtle difference.  There was an offer of free pizza afterwards (the Powershell user group were meeting directly afterwards) but my associate and I didn’t have the time.

The two things I really took away from it were that :

1. Contrary to my previous blog post where I spoke about Powershell remoting limitations, Server 2003 and Windows XP will be capable of hosting / receiving remoting commands at V2 release.
2. Every Module / Provider / Snapin you add to your system dramatically streamlines the way you work with interlinking or tiered systems.

I’ll give you an example for No. 2 that was mentioned at the Event – You lose one of your webfarm boxes that happens to be running Server 2008 and is serving content using IIS7 (ok, so it’s not actually a realistic example for most webfarms, but stay with me..).  Using Powershell you can provision some resources on your ESX/HyperV host, load a VM template and have a box running Server within minutes, add the IIS feature and move content & config onto it from one of your machines in the farm, configure and add it into the Network Load Balancing setup for the Farm and start it serving content.  From one interface.  On your desktop.  From one script, if you like.

I think Richard Siddaway said it best: “Powershell itself isn’t important.  It’s the Providers and Modules, and how you use them, that’s important.”

Off the back of this, I went and got hold of the VMWare, Exchange and SQL 2008 providers & snapins and I have to say I’m finding my poor RDP shortcut underused and seemingly lacklustre…

I may post the adventures involved in installing the SQL Powershell features on a non SQL server box, as it wasn’t as easy as it should be…