Class Based DSC Resources

Folder Structure

Class based resources are quite simple compared to MOF based resources and only need two files:

  • ResourceName
    • ResourceName.psm1
    • ResourceName.psd1
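The layout above can be scaffolded in a couple of commands. A minimal sketch, assuming the standard machine-wide module path and "ResourceName" as a placeholder for your resource's name:

```powershell
# Create the module folder and its two files ("ResourceName" is a placeholder)
$modulePath = "$env:ProgramFiles\WindowsPowerShell\Modules\ResourceName"
New-Item -Path $modulePath -ItemType Directory -Force
New-Item -Path "$modulePath\ResourceName.psm1" -ItemType File
New-Item -Path "$modulePath\ResourceName.psd1" -ItemType File
```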


Declare the DscClass and DscProperties

As with PSClasses you need to define your class. For a DSC resource you must use the [DscResource()] attribute on the class and the [DscProperty()] attribute on class properties. Note here we have also used an Enum as the type for one of these properties.

Enum Ensure {
   Present
   Absent
}

[DscResource()]
Class ResourceName {
   [DscProperty(Key)] #This means Mandatory and must be unique
   [String]$Param1

   [DscProperty()] #This means Optional
   [Ensure]$Param3 #Here we use the Enum declared above

   [DscProperty(NotConfigurable)] #This means Read-Only
   [String]$Param4
}


Declare Get, Set, Test & Custom Methods

Instead of Get/Set/Test-TargetResource functions, class based resources use Get(), Set() and Test() methods. These are slightly stricter in that they MUST use the return keyword and can only return the type they are declared as, i.e. a boolean for Test().

Class ResourceName {
   [ResourceName]Get() {
      Return $this
   }

   [Void]Set() {
      #Code to apply the desired state
   }

   [Bool]Test() {
      #Return $True if in the desired state, $False if not
      Return $True
   }
}


Create a Manifest File

Here is the minimum info you need to provide for a class based DSC resource. Note here you can choose which DSC resources to export. It is also a good idea to set the minimum PowerShell version to 5.0.

# Script module or binary module file associated with this manifest. 
RootModule = 'ResourceName.psm1'

# DSC resources to export from this module
DscResourcesToExport = @('ResourceName')

# Version number of this module.
ModuleVersion = '1.0'

# ID used to uniquely identify this module
GUID = '81624038-5e71-40f8-8905-b1a87afe22d7' 

# Minimum version of the Windows PowerShell engine required by this module
PowerShellVersion = '5.0' 
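Rather than hand-writing the .psd1, the same minimum manifest can be generated with New-ModuleManifest (the path and resource name below are placeholders):

```powershell
# Generate a manifest with the minimum settings for a class based DSC resource
New-ModuleManifest -Path '.\ResourceName\ResourceName.psd1' `
                   -RootModule 'ResourceName.psm1' `
                   -ModuleVersion '1.0' `
                   -DscResourcesToExport 'ResourceName' `
                   -PowerShellVersion '5.0'
```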


Help & Examples

  • Microsoft Documentation
  • My GitHub

File System Tunneling

If you were to delete a file and recreate it with the same name you might notice that the creation time is preserved. This is a feature known as file system tunneling and can produce some unexpected behavior if you are not aware of it.

For instance try this:

#create a file and write to it, get the creation time
$file = "C:\tmp\hello.txt"
"something" | Out-File $file
gci $file | Select CreationTime

#Now remove that file, wait 5 seconds and recreate it
Remove-Item $file
Start-Sleep -Seconds 5
"somethingelse" | Out-File $file
gci $file | Select CreationTime

You should see the creation time has not changed. But if you put in a sleep of 15 seconds instead of 5, the creation time WILL have changed. This, it turns out, is actually a feature, not a bug. See the links below for more information.
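Tunneling is controlled from the registry, and the default cache age is 15 seconds, which matches the behavior above. A sketch, assuming the value names documented in Microsoft's file system tunneling KB (setting MaximumTunnelEntries to 0 disables the feature):

```powershell
# Disable file system tunneling entirely (requires elevation and a reboot)
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem'
Set-ItemProperty -Path $key -Name MaximumTunnelEntries -Value 0 -Type DWord

# Or just shorten the cache age from the default 15 seconds
Set-ItemProperty -Path $key -Name MaximumTunnelEntryAgeInSeconds -Value 5 -Type DWord
```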

Notes on Windows Software Install/Uninstall

When installing by .msi a registry key is created below with a unique GUID for the app.

HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\[ProductId GUID]

Removing keys from here will remove the app from Control Panel, but Windows will still think it's installed, which could cause issues when installing the same app with a higher version. This is because Windows copies the .msi into "C:\Windows\Installer" (renamed randomly). You can find out which of these .msi files relates to the app by checking this registry key:

HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\[InternalUserId]\Products\[some random guid-like sequence of chars identifying to windows your installation]\InstallProperties
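You can walk the Uninstall key from PowerShell to see what Windows thinks is installed, for example:

```powershell
# List installed apps as recorded under the Uninstall key
# (check the Wow6432Node path too for 32-bit apps on a 64-bit OS)
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall' |
    Get-ItemProperty |
    Where-Object { $_.DisplayName } |
    Select-Object DisplayName, DisplayVersion, PSChildName
```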

Management Model

Glossary of Terms
  • CIM: Common Information Model (CIM) is the DMTF standard [DSP0004] for describing the structure and behavior of managed resources such as storage, network, or software components.
  • WMI: Windows Management Instrumentation (WMI) is a CIM server that implements the CIM standard on Windows.
  • WS-Man: WS-Management (WS-Man) protocol is a SOAP-based, firewall-friendly protocol for management clients to communicate with CIM servers.
  • WinRM: Windows Remote Management (WinRM) is the Microsoft implementation of the WS-Man protocol on Windows.


CIM (Common Information Model)

The “New” WMI stack. CIM is a vendor neutral way of representing management information. In Win2012, Microsoft pivoted a bit and brought WMI in line with the newest, finalized CIM standard, CIMv2. They implemented a standardized, HTTP-based protocol for communicating with remote machines. That protocol is WS-MAN, or Web Services for Management; more formally, it’s WS-Management. This is the same protocol used by PowerShell Remoting (Windows Remote Management, or WinRM). WinRM and CIM aren’t the same thing, but they do use the same underlying communication protocol. At this point, Microsoft started using “CIM” to refer to this newer, standards-compliant version of WMI.
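In practice this is the difference between the CIM cmdlets' default WS-Man transport and the legacy DCOM one. A quick sketch (the computer name Server01 is a placeholder):

```powershell
# Default: query over WS-Man (requires WinRM on the target)
Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName Server01

# Fall back to DCOM for older hosts without WinRM (the legacy WMI transport)
$opt = New-CimSessionOption -Protocol Dcom
$session = New-CimSession -ComputerName Server01 -SessionOption $opt
Get-CimInstance -ClassName Win32_OperatingSystem -CimSession $session
Remove-CimSession $session
```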

WMI (Windows Management Instrumentation)

This is a Microsoft implementation of early CIM standards. Lacking a protocol definition, Microsoft used DCOM, or Distributed COM, which was based on RPCs, or Remote Procedure Calls. Both were prevalent in Windows NT 4.0, which is where WMI was first introduced.

WMI is built around a repository, which is where all of its management information lives. The repository isn’t exactly a database, but you can think of it that way. The information gets into the repository by means of many different providers. A provider is a chunk of code, usually written in C++, that goes out and gets whatever information is needed, and then makes that information available in the repository. So, the more providers you have, the more information WMI can offer.
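You can see which providers are registered on a machine by querying the __Win32Provider system class:

```powershell
# List the WMI providers registered in the default namespace
Get-CimInstance -Namespace root\cimv2 -ClassName __Win32Provider |
    Sort-Object Name |
    Select-Object Name
```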

OMI (Open Management Infrastructure)

This is a much lighter management system that can run on hardware such as switches, etc. It is written in C, which makes it easy to port to almost any device as this is a lowest common denominator of code. In theory this could replace CIM (the “new” WMI stack) but would require a lot of rewriting of providers, which will be too expensive a consideration for now.

SNMP (Simple Network Management Protocol)

Simple Network Management Protocol (SNMP) is an older, widely implemented standard for monitoring and managing network devices such as routers and switches.


UEFI vs BIOS

UEFI is firmware written by a manufacturer; implementations all differ but follow the same spec. UEFI replaces BIOS as a more sophisticated approach to low-level system management. UEFI can access all of the system memory, can use a segment of disk space (the EFI partition), and fully supports GPT disks (BIOS can only support MBR). UEFI allows for better integration between hardware and the OS, such as Windows 8’s ability to choose boot options from the OS (Shift + Restart). One important feature is Secure Boot. This is a method of stopping unauthorized OSes (or malware/rootkits) from booting up. When Secure Boot is turned on, only OSes that match an internal “key” are allowed to execute.
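On Windows you can check the Secure Boot state from an elevated PowerShell prompt; usefully, the cmdlet errors on BIOS-booted machines, which itself tells you the firmware type:

```powershell
# Returns $True/$False on UEFI systems; throws "not supported" on BIOS systems
Confirm-SecureBootUEFI
```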

BIOS Booting is very simple:

  •  Power up the system
  •  POST completes
  •  BIOS will attempt to boot from the first item on the boot order list
  •  If booting from a local disk the BIOS will look at the MBR (First Sector), which contains the Partitioning, File Systems and Boot Loader
  •  The Boot Loader takes over and may run additional code depending on the OS

UEFI booting is completely different but follows a similar pattern:

  • Power up the system
  • The UEFI Boot Manager loads and consults its list of boot devices (see the definitions below). For example:

Boot0002* Fedora HD(1,800,61800,6d98f360-cb3e-4727-8fed-5ce0c040365d)File(\EFI\fedora\grubx64.efi)

  • This tells the system exactly which partition of which disk should be chosen to boot.
  • The UEFI boot loader (.efi file) is then executed to continue loading the OS.

So when installing an OS on UEFI hardware, the OS must create an EFI partition (based on the FAT file system) for the .efi file to live in, and add its entry to the UEFI Boot Manager.

The UEFI Boot Manager provides a list of bootable devices. These can be partitions on a GPT disk, MBR disks, PXE entries, etc.:

[root@system directory]# efibootmgr -v
BootCurrent: 0002
Timeout: 3 seconds
BootOrder: 0003,0002,0000,0004
Boot0000* CD/DVD Drive BIOS(3,0,00)
Boot0001* Hard Drive HD(2,0,00)
Boot0002* Fedora HD(1,800,61800,6d98f360-cb3e-4727-8fed-5ce0c040365d)File(\EFI\fedora\grubx64.efi)
Boot0003* opensuse HD(1,800,61800,6d98f360-cb3e-4727-8fed-5ce0c040365d)File(\EFI\opensuse\grubx64.efi)
Boot0004* Hard Drive BIOS(2,0,00)P0: ST1500DM003-9YN16G .
[root@system directory]#

Helpful Sources:

Invoke SQL Queries

Running SQL queries from PowerShell can be extremely powerful. However, some implementations I have seen often return odd object types such as DBNull. This is my method, which seems to work quite nicely for me.

$tsqlQuery = @"
Select AccountName
From SUSDB.dbo.tbDownstreamServerTarget
"@

Try {
    #Connection string is an example; point it at your own server/instance
    $conn = New-Object System.Data.SqlClient.SqlConnection
    $conn.ConnectionString = 'Server=localhost;Database=SUSDB;Integrated Security=True'
    $conn.Open()

    $command = $conn.CreateCommand()
    $command.CommandText = $tsqlQuery
    $table = New-Object System.Data.DataTable
    $data = $command.ExecuteReader()
    $table.Load($data)
    $conn.Close()
}
Catch {
    Throw $_
}


Working with .NET Types, Assemblies, Etc

Here are some tricks and snippets that I find quite useful when dealing with objects, types and .NET assemblies, etc.

Find all loaded Assemblies

[AppDomain]::CurrentDomain.GetAssemblies()


Adding Assemblies

#By assembly name
Add-Type -AssemblyName System.Windows.Forms

#By full assembly string
Add-Type -AssemblyName "Microsoft.SqlServer.SMO, Version=, Culture=neutral, PublicKeyToken=89845dcd8080cc91"

#With Partial Name (deprecated)
[Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')

#Targeted version by file (Much better in terms of reliability)
[Reflection.Assembly]::LoadFile('C:\Path\To\Assembly.dll') #Placeholder path

#Targeted version by Assembly String
[Reflection.Assembly]::Load('Microsoft.UpdateServices.Administration, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35')

Note: You cannot unload assemblies from the application domain, so once they’re in, they’re in. Note also, to find the assembly strings use:

[AppDomain]::CurrentDomain.GetAssemblies() | Select FullName


Find Method Overloads

If you call a method without the parentheses, PowerShell will show you a list of all the different overload options available:

[String]::Compare

static int Compare(string strA, string strB)
static int Compare(string strA, string strB, bool ignoreCase)


Find Hidden Properties
There’s often more than Get-Member will normally show; use -Force to reveal hidden properties and methods:

$p | GM -Force


Get a List of TypeNames

$p.PSObject.TypeNames

PackageManager Part 2: Client Side Setup

Install the NuGet/Chocolatey Provider

With Windows 10, Package Manager almost runs out of the box. If you run Get-PackageProvider from PowerShell on a fresh Windows 10 instance you will see the out of box providers. There are two more important providers that we will want to set up:

  • NuGet – Mainly used to pull code such as modules from the PSGallery
  • Chocolatey – Built on top of NuGet to pull and install binaries

I expect in the future other plugins will be available, as well as the ability to create custom providers as explained here.

#Install NuGet Provider

Get-Package -ProviderName Nuget -ForceBootstrap

#Install Chocolatey Provider

Get-Package -ProviderName Chocolatey -ForceBootstrap
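Once the providers are bootstrapped you can search and install through them. A sketch (the package name here is just an example):

```powershell
# Find and install a package via the Chocolatey provider
Find-Package -Name notepadplusplus -ProviderName chocolatey
Install-Package -Name notepadplusplus -ProviderName chocolatey -Force
```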


For me, since I have my own internal repo, I would like a way to install these providers offline. At the time of writing it doesn’t appear this is possible, so I may have to fall back to using the Chocolatey client until WMF 5.0 and the Chocolatey provider are officially released.


Configure a Package Source

Register-PackageSource -Name InternalRepo `
                       -Location http://svr1/packagemanagerrepo/nuget `
                       -ProviderName chocolatey


You should now be able to query your local repo for your custom packages. Note for the below I have merely copied across some existing packages, pending me crafting my own.
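Querying and installing from the registered source looks like this (the package name is a placeholder):

```powershell
# List everything the internal repo offers, then install from it
Find-Package -Source InternalRepo
Install-Package -Name ExamplePackage -Source InternalRepo -Force
```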



PackageManager Part 1: Setting Up an Internal Repo


With PowerShell 5 comes Package Management, which has the potential to become the number one tool for application deployment. Rather than relying on community written packages from, say, Chocolatey, I want to be able to set up my own internal repo to have full control over packages and how they work.

Setting up the NuGet Server

Firstly, get an up to date instance of 2012 R2 running and install IIS and ASP.NET 4.5:

Install-WindowsFeature -Name Web-Server,Web-Asp-Net45 -Verbose


Next, get Visual Studio installed and up to date and create a new empty ASP.NET Web Application. For this I will be using Visual Studio 2015 Community Edition, which can be downloaded here.

  • Start > New Project > ASP.NET Web Application Empty

Now we need to install Nuget.Server via the NuGet Package Manager.

  • Right Click References > Manage NuGet Packages > Search for Nuget.Server > Install

If you have trouble finding the package you might want to check your settings. In Options, under NuGet Package Manager > Package Sources, you should have the link (this may vary between Visual Studio versions).

Once installed you can edit Web.Config to point to a specific directory for packages:

<add key="packagePath" value="" />


Building the Solution

Now we need to configure Visual Studio to use IIS instead of its internal IIS Express. To do this:

  • Right Click Project Name > Properties
  • Click Web > Under Servers Select “Local IIS” > Click Create Virtual Directory

Finally, the solution is ready to build:

  • Right Click Project Name > Build

You should now see your application in IIS and be able to browse to the website.
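A quick smoke test of the site from PowerShell (ProjectName is a placeholder for whatever you called the Visual Studio project):

```powershell
# Should return HTTP 200 and the feed's service document
Invoke-WebRequest -Uri 'http://localhost/ProjectName/nuget' -UseBasicParsing
```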


Your Nuget Server is now setup and running. To Test you can drop some previously downloaded .nupkg files into your package path defined in Web.Config. You should be able to browse these at http://localhost/<ProjectName>/nuget.