PowerShell: Custom Objects and Tables

Returning data in PowerShell is normally automatic, but oftentimes you will need to specially format or splice some data into a single table. There are a lot of ways to go about it, but none of them are exactly intuitive. I'll list my four preferred methods, each demonstrated with a simple example and the moderately complex statement that I've previously mentioned.

Method 1 – String

The first method is probably the easiest one to remember syntax-wise. You create an empty string, pipe it to Select-Object with your column names, assign values to the resulting properties, and finally return the result. The underlying data stays string-based (the object type is Selected.System.String), which has certain pros and cons. It supports multiple rows of data just fine.

## Simple Example - String ##
$Results = '' | SELECT ComputerName,UserName,Processes,Services
$Results.ComputerName = $Env:ComputerName
$Results.Username = $Env:UserName
$Results.Processes = (Get-Process *).Count
$Results.Services = (Get-Service *).Count
$Results | Format-Table -Autosize


A sample result of the custom table

A much more complex, but realistic, example is below. I'm querying network adapter data to get the name of each adapter, matching it to its IP address, then returning the result set.

## Advanced Example - String ##
$Adapters = Get-WmiObject win32_networkadapter | WHERE {$_.NetConnectionId -NOTLIKE '' -AND $_.NetConnectionId -NOTLIKE 'Local Area Connection*' }
#  $ErrorActionPreference = 'SilentlyContinue'
$Info = ForEach($Adapter in $Adapters) {
    $Results = '' | SELECT ComputerName,AdapterName,IpAddress
    $Results.ComputerName = $Env:ComputerName
    $Results.AdapterName = $Adapter.NetConnectionId
    $Results.IpAddress = (Get-WmiObject win32_networkadapterconfiguration -Filter "Index = $($Adapter.DeviceId)").IpAddress
    $Results
}
$Info | FT -auto

I’ll continue to show the simple and complex example for each custom object method.

Method 2 – Hashtable to PsCustomObject

Method 2 is very similar to Method 1. It's actually the first one I learned, so I use it the most. This creates a hashtable instead of a string, then casts it to a custom object so it will display nicely as a table. Note that a plain hashtable does not guarantee property order; use [ordered]@{} if column order matters. Older versions of PowerShell might not format it to look like a table, instead listing out the results in two columns, and the [PsCustomObject] cast itself requires PowerShell 3.0 or later. One of my machines displays a list, while the other, with a newer version of PowerShell, displays a table.

## Simple Example - HashTable to PsCustomObject ##
$Results = @{}
$Results.ComputerName = $Env:ComputerName
$Results.Username = $Env:UserName
$Results.Processes = (Get-Process *).Count
$Results.Services = (Get-Service *).Count
[PsCustomObject]$Results | Format-Table -Autosize

The advanced hashtable example isn't overly complicated; the setup is only a bit different.

## Advanced Example - HashTable ##
$Results = @()
$Adapters = Get-WmiObject win32_networkadapter | WHERE {$_.NetConnectionId -NOTLIKE '' -AND $_.NetConnectionId -NOTLIKE 'Local Area Connection*' }
ForEach($Adapter in $Adapters) {
    $Results += [PSCustomObject] @{
        ComputerName = $Env:ComputerName
        AdapterName = $Adapter.NetConnectionId
        IpAddress = (Get-WmiObject win32_networkadapterconfiguration -Filter "Index = $($Adapter.DeviceId)").IpAddress
    }
}
$Results | FT -auto

Method 3 – Custom Table

This method is very syntax heavy, with lots of punctuation. It's a good method for creating custom column names in a table, as you can use the @{l='ColumnName';e={$Value}} format for any column. Just remember that l is for label and e is for expression, and you can mix custom column names with default ones.

## Simple Example - Custom Table ##
Get-Process * | SELECT -First 1 |
Format-Table @{l='ComputerName';e={$Env:ComputerName}},@{l='UserName';e={$Env:UserName}},
@{l='Processes';e={(Get-Process *).Count}},@{l='Services';e={(Get-Service *).Count}} -AutoSize

The advanced example is very similar to the simple example for this method. Just remember to keep track of all those braces. They can be overwhelming.

## Advanced Example - Custom Table ##
Get-WmiObject win32_networkadapter |
WHERE {$_.NetConnectionId -NOTLIKE '' -AND $_.NetConnectionId -NOTLIKE 'Local Area Connection*' } |
Format-Table @{l='ComputerName';e={$Env:ComputerName}},@{l='Adaptername';e={$_.NetConnectionId}},
@{l='IpAddress';e={Get-wmiObject win32_networkadapterconfiguration -Filter "Index = $($_.DeviceId)" |
Select-Object -Expand ipaddress}} -auto

Method 4 – Add Member

Coders may tell you that this is the preferred method, but I feel the syntax and execution are too troublesome for just displaying some query results. This method returns a PsCustomObject. There is a lot of syntax here, but it's mostly repetitive and easy to read. You also have the benefit of explicitly defining the MemberType with this method, although you'd normally just leave it as NoteProperty. My major complaint is that it does not easily support multiple rows of data…I'll explain in the advanced example.

## Simple Example - Add Member ##
$Results = New-Object PSObject
Add-Member -InputObject $Results -MemberType NoteProperty -Name ComputerName -Value $Env:ComputerName
Add-Member -InputObject $Results -MemberType NoteProperty -Name UserName -Value $Env:UserName
Add-Member -InputObject $Results -MemberType NoteProperty -Name Processes -Value (Get-Process *).Count
Add-Member -InputObject $Results -MemberType NoteProperty -Name Services -Value (Get-Service *).Count
$Results | Format-Table -Autosize

The advanced example requires some consideration. In this case, you'll need to make sure that the New-Object command is inside of a ForEach loop, thus creating a new object each time. Otherwise, you'll just repeat the same first row of data over and over. The Add-Member command creates columns, so if they already exist…you just get errors. With the looping, we are generating several one-row objects and splicing them into one table. Inside the ForEach, we append each generated object to another variable, which then populates the custom table and displays the data.

## Advanced Example - Add Member ##
$Adapters = Get-WmiObject win32_networkadapter | WHERE {$_.NetConnectionId -NOTLIKE '' -AND $_.NetConnectionId -NOTLIKE 'Local Area Connection*' }
$Results = @()
ForEach($Adapter in $Adapters) {
    $obj = New-Object PSObject
    Add-Member -InputObject $obj -MemberType NoteProperty -Name ComputerName -Value $Env:ComputerName
    Add-Member -InputObject $obj -MemberType NoteProperty -Name AdapterName -Value $Adapter.NetConnectionId
    Add-Member -InputObject $obj -MemberType NoteProperty -Name MacAddress -Value $Adapter.MacAddress
    Add-Member -InputObject $obj -MemberType NoteProperty -Name IpAddress -Value (Get-WmiObject win32_networkadapterconfiguration -Filter "Index = $($Adapter.DeviceId)").IpAddress
    $Results += $obj
}
$Results | Format-Table -Autosize

In my opinion, use whichever method fits your needs and is the easiest to implement and remember. Personally, I use methods 1 and 2 the most.
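As a footnote to the methods above, the array-building in Methods 2 and 4 ($Results += …) can be avoided entirely by letting the loop emit objects straight into the pipeline, which is the same trick the Method 1 advanced example uses with $Info. Below is a minimal sketch of the adapter query written that way; it assumes Windows PowerShell 3.0 or later with the WMI cmdlets available, and the filter is simplified to only skip unnamed adapters:

```powershell
## Pipeline variant - emit one object per adapter, let PowerShell collect them ##
$Info = Get-WmiObject win32_networkadapter |
    Where-Object { $_.NetConnectionId } |
    ForEach-Object {
        # Each [PsCustomObject] emitted here becomes one row of $Info
        [PsCustomObject]@{
            ComputerName = $Env:ComputerName
            AdapterName  = $_.NetConnectionId
            IpAddress    = (Get-WmiObject win32_networkadapterconfiguration -Filter "Index = $($_.DeviceId)").IpAddress
        }
    }
$Info | Format-Table -AutoSize
```

Because nothing is appended to an array, the array is never rebuilt on each pass, and the result is still one combined table.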


T-SQL Tuesday #66: Monitoring with Utility Explorer

This month's T-SQL Tuesday is hosted by Cathrine Wilhelmsen (blog | twitter) with the subject of monitoring.

The topic of the month is monitoring. We all monitor something while working with SQL Server: Maybe you’re a DBA who monitors database performance, an SSIS developer who monitors job and package execution, an application developer who monitors queries or a BI analyst who monitors report usage? Do you use T-SQL or PowerShell scripts? Have you created your own monitoring reports or applications? Have you invested in a complete monitoring solution from a vendor? What is monitored manually and what is automated with alerts? If a brand new SQL Server professional in your role asks you for advice, what do you say is the number one most important thing to monitor? Have you ever stayed awake for a week solving a problem that could have been avoided with proper monitoring? Did you implement a monitoring solution that saved your company?

I currently use so many different methods of monitoring, it was daunting to decide what to discuss. I could have written about a high level overview of everything, but that would have been nothing more than mentioning each one in order to fit it into one post. Policies, jobs, traces, audits, extended events, PowerShell…we use them all. Basically if it is not a third-party tool (spending money on any solution is tough to justify here), my office uses it or will be giving it a shot soon. Coincidentally, we just started using another form of monitoring in the past few weeks, so I will discuss some of the trials and tribulations of using the SQL Server Utility.

Utility Explorer provides a simplified dashboard for monitoring and baselining SQL Server CPU and storage resources. This feature is not available for Standard Edition, like almost anything else new and exciting. To utilize this feature, you will need to create a Utility Control Point (UCP). I'd suggest using your Central Management Server to host it. Don't have a CMS? If you have more than two servers, make one. It's not hard. Unlike a CMS though, the UCP can monitor itself. This is one of the more obscure features added in SQL Server 2008 R2, and you can access it in Management Studio by selecting View -> Utility Explorer.


After selecting your instance, you need to specify the account to run the utility collection set. Using the Agent account is the easiest way to set up, but you have to use a Windows domain account as the collection account, regardless of which path you choose.


Select the instance where data will be collected. Again, I suggest your CMS. A utility database will be created to store data, and the default retention period is one year, so make sure you have sufficient drive space or be ready to reduce your retention period. No worries though, this new dashboard will remind you if your disk space starts getting low! The utility database typically grows about 2GB per monitored instance per year, while the msdb database should grow around 20MB per instance per year. Warning: if you also use Data Collector, your data growth can be dramatically higher.


Now the selected instance will be validated. The instance cannot be enrolled if it is already a member of another UCP. I'm using Collection Sets as well, so validation failed on that. You can bypass this issue by simply disabling your collection sets during the enrollment process, which will allow validation to succeed. Once enrolled, you can turn the collection sets back on. The two features are related (they use the same database), yet separate enough that they don't always play well together.


Once you have your UCP created, you can begin enrolling instances, which is a very simple process. Just right-click on Managed Instances, or within the white space of Utility Explorer Content, and select Enroll Instance. Specify the instance name and the agent account, as you did above when creating the UCP, and then validate the enrollment. The instance may throw a warning about WMI. Since it's just a warning, the enrollment will still succeed. My instances have not had any issues reporting data after throwing this warning. Researching the subject has thus been low priority, but it appears to be a permissions issue. Troubleshooting with that article has shown nothing of concern though…


Once you have a few instances enrolled, open the Utility Dashboard, which will look similar to this.


Click Managed Instances and use the tabs to view CPU usage, storage space, policy configuration for utilization pass/fails, and computer properties. Each enrolled instance will show a pass/fail mark based on the configured policies.

The Storage Utilization tab provides a quick look at your volume free space. You can drill down to see data file and log file sizes as well.
The Property Details tab summarizes each instance's software and hardware.

Finally, in Utility Administration, there are a few settings you need to be aware of. This is where you can set global policies for storage warnings, grant read privileges on the Utility dashboards to user accounts, and specify data retention. I suggest you start small and monitor how fast your sysutility database grows. Like I said earlier, combining this with Data Collection gave us massive data growth; more on that in the future.

Overall, Utility Explorer provides an easy-to-read dashboard for quick looks at your SQL Servers. If you are a graphical person and can spare the disk space, it's a great feature. Sure, you can get the current settings via queries, but the really valuable part is the automatic retention, which allows quick baselining of your CPU and storage. Capacity planning made easy.


A Snippet on Snippets

I’ve never used Snippets before, mostly because I have always had a personal script repository stored in the cloud along with Intellisense or a SQL Formatting add-on. Lately though, I’ve found it a bit easier to store commonly run scripts within Management Studio using Solution Explorer. I mentioned this in a post that Aaron Bertrand reviewed for me, and he suggested Snippets as an alternative. Snippets are something I’ve been meaning to look into for a while. These are only available in SQL Server 2012 and later, but I’ve had access to that at work for about two years, so I’m long overdue to explore this feature.

If you right-click in Management Studio, you have the option to Insert Snippet, which presents a list of subfolders containing snippet files.



Choose a snippet, and you will get a default script similar to this:


Now, the really neat thing about snippets lies in the highlighted sections in the image above. If you update the parameter names in the declaration, they will automatically update in the select statement as well. That could save some time and naming issues while creating a new procedure. The best way to learn exactly how to recreate this feature is by reading the existing snippet files for examples.

I assumed adding a new snippet would be a breeze. I assumed I’d be able to set everything up with a bit of click-guessing, and I’d be snippet shortcutting my way to scripting simplicity in no time. I assumed wrong. Don’t jump into snippets blindly.

I did.

If you click on Tools, you can select Code Snippets Manager. You will then see all the existing folders with their corresponding snippets. I want to create new snippets though, so I’ll try Add, navigate to the folder where I store my scripts, and Select Folder.



This is the Snippet Manager after I have successfully Added my…empty…folder

Alright, it imported the Folder into the viewer, but it’s empty. I know there are SQL scripts in that folder – what’s wrong? Let’s try Import. Navigate to the folder again…and it’s still empty! What’s the big idea!?


After calming down, I looked carefully at the above screen. If you notice, in the bottom right corner, the only available extension is .snippet. I tried to select other types, but that drop-down menu is a lie. There are no options. I know that drop-down menu is default behavior for Windows, but all it did was lull me into a false sense of freedom for being able to choose other file types.

At this point, it’s obvious that I should not be trying to add snippets on a whim. I highly suggest you do a bit of reading on snippets or at least find the existing snippets and open a file or two to see how they are formatted. You can find the exact location by using the Code Snippets Manager. Click on one of the default folders, and it will display the file path in the Location header. The screenshot of the Manager above has the file path displayed. It will save a lot of time and confusion if you choose to read a bit before just trying to make it work. Learn from my mistakes.

Still, I tried to make it work. I made a new snippet file with only a SQL script inside. By doing this, I was finally able to add a file, but immediately received the following error.


Only then did I give up and read the documentation. I like to poke around at a new feature for a bit before reading all about it, as it helps me understand everything while later researching because I’ve at least seen the options before. This was one of those times where it was a much worse idea than normal to work that way.

Turns out, once I created the .snippet file, I still needed to add XML notation to it so that SSMS could read the file. Any folder I Add will auto populate with new snippets saved to it. That’s pretty handy. I’m no XML wiz, but the below code can be copied and pasted into each snippet, with only minimal, obvious changes.

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>Snippet Title</Title>
      <Description>Code Snippet for X construct.</Description>
      <Author>Author's Name</Author>
      <SnippetTypes><SnippetType>Expansion</SnippetType></SnippetTypes>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>condition</ID>
          <ToolTip>Condition to evaluate</ToolTip>
        </Literal>
      </Declarations>
      <Code Language="SQL"><![CDATA[
--SQL Script Here
$condition$ --This will have a Tool Tip as defined above.
--Code following will appear after selected text.
]]></Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>

You should edit the Description tag for each snippet and then just paste your SQL inside the Code tag. The Author tag will be useful for identifying snippets, but once you have a default value saved, that should not require upkeep. The ability to make shortcuts is awesome, but I doubt I'll remember the keyboard combination or use any one snippet enough to warrant a shortcut.

The SnippetType should be Expansion (for a regular snippet) or SurroundsWith. You should look at the existing SurroundsWith snippets before crafting your own if you opt for that choice. There are variables, like $selected$, that you should include in a SurroundsWith snippet. That variable will allow you to select a script, then apply the SurroundsWith snippet to place the inserted code around the selected text. Omitting the variable will cause the SurroundsWith snippet to just replace the selected script, which is not very helpful. To use a SurroundsWith snippet, right-click in the Query Pane and choose Surround With, which is directly underneath the Insert Snippet command. You won't have to drill through as many submenus this way, because there are fewer SurroundsWith snippets.

The Declarations tag allows you to identify tokens in the code that will have a tooltip. The ID tag should match a variable name used in the Code tag, with the ToolTip tag being the text you want displayed when hovering over it. For example, the Create Table snippet declares a $TableName$ variable, and after inserting the snippet, the ToolTip identifies its purpose.


Finally armed with a valid snippet, I went back to Code Snippets Manager. Now I can see my sample script along with the tagged values.


Adding a snippet for BEGIN TRAN would be a great way to test out the different SnippetTypes. Create the snippet as an Expansion first with no special variables. Next, edit the file, change the SnippetType to SurroundsWith, and add the $selected$ variable between your BEGIN TRAN and ROLLBACK/COMMIT lines. Highlight a SQL statement and insert the snippet through the Surround With option. Notice that this time, the snippet indeed surrounds the statement. You can include a variable in an Expansion snippet, but it's best to categorize correctly, so that you can find a snippet next time with minimal effort.
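To make that concrete, here is a sketch of what the SurroundsWith version of that BEGIN TRAN snippet could look like; the title, description, and ROLLBACK-by-default choice are my own assumptions, and the XML layout follows the snippet format shown earlier:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>Begin Transaction</Title>
      <Description>Wraps the selected statement in a transaction.</Description>
      <Author>Author's Name</Author>
      <SnippetTypes><SnippetType>SurroundsWith</SnippetType></SnippetTypes>
    </Header>
    <Snippet>
      <Code Language="SQL"><![CDATA[
BEGIN TRAN
$selected$
ROLLBACK --Swap for COMMIT once the results look right
]]></Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
```

Save it with a .snippet extension into a folder registered in Code Snippets Manager, and it should then appear under the Surround With menu.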

Summary

Snippets definitely have some merits, but it takes a bit of work and understanding to create them. Make sure to source control them, or at least back them up, once created. For anyone with a SQL formatting add-on, snippets probably aren't overly exciting (but neither was Intellisense). For anyone using the native environment though, it's your only way to create code shortcuts.

The interface is infinitely more complicated than it should be, but so are many other aspects of Management Studio. You should be able to click Add in the Code Snippets Manager, paste in a script, select Expansion or SurroundsWith as the Snippet Type, and then save it to the menu level of your choice. Alternatively, you should be able to Import saved SQL files to be snippets without any special formatting. Then add in the special variables as desired.

I originally thought snippets could be a replacement for how I stored most of my scripts, but instead, it’s more appropriate as shortcuts for only very popular code. Except for a few constantly used scripts, I doubt the easy access will outweigh the setup costs to use snippets for me. I know it’s not overly difficult to copy and paste the XML into a new file, but snippets are all about being lazy in the first place!