SQL Server Changing Passwords and an SSPI Context Error

The other day I encountered a login error when connecting to a SQL Server. The circumstances seemed strange compared to similar errors described online, most of which involved rather complicated troubleshooting to find the real cause. Since this server had been around for a while, it was unlikely that some major Active Directory change would be necessary to resolve the issue.

This SQL Server was part of an Availability Group, and the connection worked fine when connecting using the Server/Instance name. However, when attempting to connect via the Listener, the following error occurred:

Cannot connect to Server/Instance.
The target principal name is incorrect.
Cannot generate SSPI context (Microsoft SQL Server)

Articles online indicated this was an SPN, Kerberos, and/or Active Directory issue, and something needed to be reset, but the only way to know for sure was to continue down a long troubleshooting list. Luckily, the problem was simpler than that, but still very strange.

I had reset the service account passwords the afternoon before this error became apparent. Each service was restarted afterwards to verify the change, and I connected to SQL successfully. Everything seemed fine from my perspective.

The next day, some users attempted to connect using the Listener, and that’s when the errors started. I don’t normally connect via the Listener, so I hadn’t thought to check it; it didn’t seem necessary at the time.

Troubleshooting the easy solutions first seemed like a good idea, so I tried restarting the SQL service, which immediately failed everything over to another server in the cluster. The services came online, and now both the instance and the Listener could be connected to. OK, so the problem was probably sort of solved.

I failed it to a third node in the cluster, everything still worked great. Cool. This was looking even better.

Next I failed it back to the original node. This time, the SQL Service came online, but the Listener did not. Strange; how did it work in the first place? Everything was running on that server before I restarted the service, even if it wasn’t running correctly. I re-entered the service account passwords in SQL Server Configuration Manager and restarted the services. Everything worked perfectly now.

In summary, somehow all the services restarted on the server after the password change, but the Listener still had a bad password and was not allowing connections. When I attempted to restart the Listener again, it failed until the password was corrected. I still don’t know how this happened, but it’s a good reminder to be especially careful when changing service passwords. Changing passwords on a cluster can be even more dangerous, since you have extra services to update that may not even be running on the server at the time, so verifying everything went smoothly can take a few extra steps.
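If you want a quick sanity check after a password change, a query like the sketch below shows which account each SQL service is running under and when it last started. It assumes SQL Server 2008 R2 SP1 or later, where sys.dm_server_services exists, and it won’t cover the Listener or other cluster resources, so those still need to be checked separately.

SELECT servicename, service_account, status_desc, last_startup_time
FROM sys.dm_server_services;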

Tales of when a Log Fails to Shrink in an Availability Group

I received a report that one of my servers had 7% free space on its log drive. Sounded like something fun to resolve. I checked on what was going on and found a log file that was 99% free and a hundred GB in size. Shrinking a log file is not a good practice, and I’m not advocating it by any means; it’s just going to grow again, and your storage is there specifically to hold logs. Still, this situation was out of the ordinary and we needed space.

The problem was, this log would not shrink. It was being extremely uncooperative. I took a full backup, ran log backups, and made multiple shrink attempts, but it wouldn’t budge. The message returned was a big clue though.

The log for database ‘dbname’ cannot be shrunk until all secondaries have moved past the point where the log was added.

As you might have guessed, this server was a SQL Server 2012 instance and in an Always On Availability Group. The database in question could not shrink because it was participating in the AG.

It wasn’t an ideal fix, but by removing the database from the Availability Group, I was able to shrink the log to a more manageable size. No, I did not truncate it to the minimum size; I adjusted it to a reasonable amount based on its normal workload, since I didn’t want the log to immediately have to grow again. The shrink worked flawlessly, and with adequate drive space, I attempted to add the database back to the AG via the wizard.
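For reference, the removal and shrink boil down to something like the sketch below. The AG, database, and logical log file names are placeholders, and the target size is whatever fits the database’s normal workload.

-- Run on the primary replica; names and target size are placeholders
ALTER AVAILABILITY GROUP AgName REMOVE DATABASE DbName;
GO
USE DbName;
DBCC SHRINKFILE (N'DbName_log', 20000); -- target size in MB
GO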

The AG wizard refused to help. The database was encrypted, and the wizard will not let you add an encrypted database; no explanation why, it just doesn’t like that. You can add an encrypted database to an AG via script, though. You can even generate the script from the wizard by selecting a non-encrypted database and then changing the database name in the scripted result. The resulting script is exactly what the AG wizard would run; it just cannot execute it automatically.


ALTER AVAILABILITY GROUP AgName
ADD DATABASE DbName;
GO

With free space and an encrypted database safely back in my AG, I was off to new adventures!

Recovery Model Comparison

This simple blog will cover the bulk of information you’ll need to make a fully informed decision on which Recovery Model to choose for your SQL Server databases. I know, that was horrible, but I couldn’t resist. Recovery Models are normally considered one of the more basic things to know about SQL Server, but refreshers are never a bad idea. Furthermore, if you are like me, you haven’t actually dealt with the Bulk Logged model much.
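Before digging into the details, a quick way to see which model each of your databases currently uses is a simple query against sys.databases:

SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;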

Simple Recovery Model

The simple model constantly reclaims log space in order to keep storage requirements minimal. Transactions are still written to the log; however, once a transaction completes and the data is written to the data file, the transaction log is truncated. This keeps the log small and allows the space to be reused.

While this model takes the least amount of planning and work to maintain, it also provides the least amount of protection. You cannot perform transaction log backups, so any changes since your last backup will be lost in the event of a disaster. There is no way to recover to a specific point in time either, and log-based high availability solutions cannot be implemented. This model is best used for unimportant or test data. Databases that rarely change, or where data loss between backups would have no real impact, are also good candidates for Simple recovery.

Supported Backups:

  • Full
  • Differential

Unavailable:

  • Log shipping
  • Database Mirroring
  • Always On Availability Groups
  • Point in time Restore

Bulk Logged Recovery Model

This model performs minimal logging of bulk operations in order to reduce the size of the transaction log, and it still supports transaction log backups. In fact, it’s essentially identical to the Full Recovery model except that bulk operations are only minimally logged. Even so, you shouldn’t use this model long term; switch back to Full once your bulk work is done.

Bulk operation examples:

  • Bulk Imports (BCP, Bulk Insert)
  • CREATE/ALTER INDEX
  • SELECT INTO
  • WRITETEXT & UPDATETEXT

Point-in-time recovery is not supported for a log backup that contains bulk-logged operations, and if the transaction log is damaged, any bulk-logged changes since the last log backup are lost (since they were only minimally logged). This recovery model is best used during index maintenance or when inserting massive amounts of data. Your transaction log won’t blow up in size, and you’ll be able to flip back to Full Recovery easily after your maintenance is complete (more on that later).

Supported Backups:

  • Full
  • Differential
  • Transaction Log

Full Recovery Model

With the Full Recovery model, you can prevent data loss due to damage, and you finally have the ability to restore to a specific point in time. All transactions are stored in the log until a log backup occurs or the log is truncated. If you fail to schedule regular log backups, your transaction log will grow until storage becomes a major problem. After a transaction completes, the data from the log is then written into the data file.

If the tail of the log is damaged, you can still lose data back to the last transaction log backup. This is the most appropriate recovery model to use for production data, as it’s the only one that provides point-in-time recovery.
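As a rough sketch of what “regular log backups” means in practice (the database name and path are placeholders; most people schedule this in an Agent job):

BACKUP LOG TestDB
TO DISK = N'X:\SQLBackups\TestDB.trn';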

Supported Backups:

  • Full
  • Differential
  • Transaction Log

Changing Recovery Models

The command to flip recovery models is very easy; just issue an ALTER DATABASE command on the desired database and set it to the desired model.


USE [Master];
ALTER DATABASE TestDB SET RECOVERY FULL;

Flipping between recovery models requires a few considerations, which I’ve summarized below; a scripted example of the Full/Bulk Logged flip follows the list.

  • Full to Bulk Logged: Back up the log for a final recovery point, then switch to Bulk Logged Recovery; back up the log again after the switch.
    (Backup Log -> Switch to Bulk -> Backup Log)

  • Bulk Logged to Full: Switch back to Full Recovery after the bulk-logged operations complete; back up the log.
    (Switch to Full -> Backup Log)

  • Simple to Full/Bulk Logged: Switch to Full or Bulk Logged Recovery; perform a Full or Differential backup to start the log chain, then schedule or re-enable log backups. The change takes effect after the first data backup.
    (Switch to Full/Bulk -> Data Backup -> Backup Log)

  • Bulk/Full to Simple: Disable log backups, breaking the log chain; switch to Simple Recovery and perform a Full or Differential backup.
    (Disable Log Backups -> Switch to Simple -> Data Backup)
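Putting the Full-to-Bulk-and-back workflow into a script, it looks roughly like this sketch (TestDB and the backup paths are placeholders):

-- Last recovery point before the switch
BACKUP LOG TestDB TO DISK = N'X:\SQLBackups\TestDB_PreBulk.trn';
ALTER DATABASE TestDB SET RECOVERY BULK_LOGGED;

-- Run your index maintenance or bulk load here

ALTER DATABASE TestDB SET RECOVERY FULL;
-- Back up the log again so point-in-time recovery resumes cleanly
BACKUP LOG TestDB TO DISK = N'X:\SQLBackups\TestDB_PostBulk.trn';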

 

SQL Server Troubleshooting: Token-based Login Failure

In continuation of the forced mirroring failover procedure I posted last week, this post describes another level of pain you may encounter.

After forcibly failing a mirroring session to the secondary server, users were unable to connect to the SQL Server. The SQL Error Log was full of the same error message.

Login failed for user. Reason: Token-based server access validation failed with an infrastructure error.

 

Our SQL Server uses Windows-based authentication, so that was a major hint. The solution was actually incredibly easy. Originally I assumed that an account was locked out or perhaps missing from the mirror server; who knew how long ago everything had been correctly synced.

There are two likely solutions to this issue.

UAC is blocking the connection attempts

I find this to be a less likely scenario, and since changing UAC settings requires a server restart, I highly suggest testing the next method first. It will probably solve your problem without a restart. If the issue really is UAC, the server is probably a recent setup; otherwise I feel you would have noticed this before.

Recreate the User Account

More likely, the account in question needs to be dropped and recreated in SQL Server. If the user connects to SQL using a group or service account, you’ll need to track down the appropriate account that they are a member of in Active Directory. Ask your AD administrator to look up the groups the user belongs to if you don’t have access yourself or don’t already intuitively know the answer.

Once you have the correct account, take note of its existing permissions, whether via screenshot, by scripting the login as a CREATE TO statement, or just by remembering everything. Delete the existing account in the server-wide Security as well as in Database-level Security; deleting an account at the server level does not cascade to the database level. In fact, SQL Server will warn you of this when you attempt the delete.

Now you’ll just need to recreate the account providing the correct permissions and any relevant mappings. The token-based server access validation errors should be solved now.
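In T-SQL, the drop and recreate looks roughly like the sketch below. The Windows group name, database, and role are hypothetical placeholders, and you would reapply whatever permissions you noted beforehand.

USE [master];
DROP LOGIN [DOMAIN\AppUsers];                  -- server-level login
CREATE LOGIN [DOMAIN\AppUsers] FROM WINDOWS;

USE [YourDatabase];                            -- repeat for each mapped database
DROP USER [DOMAIN\AppUsers];
CREATE USER [DOMAIN\AppUsers] FOR LOGIN [DOMAIN\AppUsers];
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\AppUsers];  -- reapply noted permissions (SQL 2012+ syntax)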

SQL Server Forced Mirror Failover

Failing a SQL Server Mirror over is normally an incredibly simple process. If you use the GUI, you can accomplish the failover in moments. However, it is not always a planned event. Yesterday we had just such an issue.

If your principal server goes down and you do not have a witness set up for automatic failover, you will need to perform a forced manual failover on the mirror server. There is no way to do this via the GUI; it’s scripts only. Opening the database properties on a mirror results in a greyed-out pane of information and zero options.

(Screenshot: the mirror database’s greyed-out Properties window)

Not very helpful when this is all you have to work with.

Forced Failover Method 1 (PREFERRED)

In a perfect world, you will be able to connect to the mirror database and quickly run the following script, substituting in your database name. This will force the mirror to become the principal, and voila, you are up and running again. If you use an alias in Active Directory, just make sure the alias is changed to point to the new server.

ALTER DATABASE dbname
SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS

However, in the less perfect world we live in, you may receive the following error:

Cannot alter database because database is not in the correct state to become the principal

This error results for one of two reasons:

a. Full Safety Mode

If the database is not in Full Safety Mode, otherwise known as Synchronous commit, the failover will error. If your mirror database is in a remote location, it’s probably not in Full Safety. Asynchronous mirrors are High Performance for data transfers, but Low Performance when it comes to forced failovers. Try running the following to force the database into Full Safety mode, which will then allow the failover to occur.

ALTER DATABASE dbname
SET PARTNER SAFETY FULL

However, this script will fail too if the principal database is still available to SQL, and it might fail regardless. It did for me, so I had to resort to Method 2 below.

b. Principal Database can still be reached by Mirror

If your principal database is still accessible to the mirror, but has too many connectivity issues for you or your users to actually reach it, you may have to shut down the principal in order to run commands from the mirror. This is a bit of a strange scenario, but if your principal cannot be connected to, you’ll need to fail over somehow. In this situation, someone will have to make the call of what to do.

Note: If you fail over using the forced manual failover script, you will be able to manually resume mirroring after the original principal server becomes available again. All your mirroring settings are preserved for the database, making failback relatively painless.
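Resuming is a single command once the old principal is reachable again; run it against the database on either partner (dbname is a placeholder, as in the scripts above):

ALTER DATABASE dbname
SET PARTNER RESUME;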

Break Mirror Method 2 (LAST RESORT)

If Method 1 of forcing the failover does not work, even after trying to set the database to Full Safety Mode, you can run the following script to break the mirroring session and restore the mirror with recovery. This is the quick and dirty method.

ALTER DATABASE dbname
SET PARTNER OFF
GO
RESTORE DATABASE dbname
WITH RECOVERY
GO

If you destroy the mirroring session in this manner, you will not be able to resume mirroring. All the settings will be erased and you will have to recreate mirroring from scratch. You’ll have a usable database though, and if the principal died suddenly, you shouldn’t actually have much, if any, data loss.
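If you do end up rebuilding mirroring from scratch, the outline looks something like the sketch below. Server names, ports, and paths are placeholders, and it assumes the mirroring endpoints still exist from the original setup.

-- On the new principal: take fresh backups
BACKUP DATABASE dbname TO DISK = N'X:\SQLBackups\dbname.bak';
BACKUP LOG dbname TO DISK = N'X:\SQLBackups\dbname.trn';

-- On the future mirror (the old principal): restore both WITH NORECOVERY
RESTORE DATABASE dbname FROM DISK = N'X:\SQLBackups\dbname.bak' WITH NORECOVERY;
RESTORE LOG dbname FROM DISK = N'X:\SQLBackups\dbname.trn' WITH NORECOVERY;

-- On the mirror first, then the principal, point each at the other's endpoint
ALTER DATABASE dbname SET PARTNER = 'TCP://PrincipalServer.domain.com:5022';
-- then on the principal:
ALTER DATABASE dbname SET PARTNER = 'TCP://MirrorServer.domain.com:5022';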

RAID Storage and SQL Server

If you are a SQL Server Administrator, eventually you are going to need to request specific storage for your servers. Depending on the setup at your company, that may all be handled in the background by your Storage Administrators, but if you have the power, or are asked for your opinion, knowing about RAID (Redundant Array of Independent Disks) technology is important. You can find full technical explanations on the web, but I’ll cover the basics from a SQL Server perspective.

RAID Overview

RAID uses multiple hard drives to improve availability and/or performance. It can overcome I/O bottlenecks that would result from using a single disk, provide resiliency against data loss through mirroring, and remove the single point of failure that a lone drive represents.

To understand RAID, there are three terms we need to define first.

Mirroring is an idea you should understand intuitively, but perhaps not the exact details in relation to RAID. Disk Mirroring replicates data from one disk to another, providing a second copy for disaster recovery, and thus requiring at least two disks. RAID mirroring is performed synchronously and mirrored data can be read from either disk. More on Mirroring.

Striping means that the data is separated onto multiple drives in a consecutive fashion. By spreading the data across different drives, the I/O load can be balanced across the disks and read times are faster. The more disks the data is striped across, the faster the data throughput will be; however, if one device fails, since the data is spread evenly across all the disks involved in the stripe, all of the data becomes unreadable. More on Striping.

Parity is probably the hardest term to understand or explain. In the most basic, mathematical sense, parity refers to whether an integer is even or odd. For computing, the definition is specifically whether the total count of 1s occurring in a given binary value is even or odd. For RAID, parity bits are computed from the existing data bits (typically by XORing them together) to provide redundancy. For example, if two data disks hold the bits 1 and 0, the parity disk stores 1 XOR 0 = 1; if either data disk fails, XORing the surviving bit with the parity bit reconstructs the missing bit. In the case of any single drive failure, the remaining data can be combined with the parity data to rebuild what was lost. More on parity.

RAID Levels

Although there are more levels of RAID, for instance, 2, 3, 4…etc., they are rarely used, especially for SQL Server. I’ll just be explaining the four main types here.

 

RAID 0 (Striping) This basic form of RAID stripes data across multiple disks. Reads and writes occur simultaneously across all disks involved, providing faster reads and writes compared to a single disk. The more disks involved, the faster the performance. This creates multiple points of failure though, and it is not really recommended for database use due to the increased vulnerability.

Pros: Improved Performance of Read and Writes
Cons: No Redundancy and any drive failure corrupts all data.

RAID 1 (Mirroring) This RAID level mirrors, or duplicates, data between a minimum of two disks. Mirroring uses half of your raw storage for the duplicate copy, since the mirror is an exact copy of the original data. Read speeds are faster since any disk can respond to a read request, while write speeds are reduced because the data must be written to multiple locations. Read times can be as fast as the fastest drive, while write times are often as slow as the slowest drive. If you need a relatively cheap method to protect your data, this is a good option. If one drive fails, you still have a perfect copy of the data on the other.

Pros: Redundancy, Faster Reads
Cons: Half of Raw Storage Used for Redundancy, Slower Writes

RAID 5 (Striping & Parity) This is likely the most common type of RAID used, but it requires at least three disks. Data and parity calculations are striped across all the disks. Since data is not mirrored, less storage is ‘wasted’: only 1/(total # of drives) of the space goes to redundancy, at most 1/3rd with the minimum three disks. In the event of a drive failure, the data can be reconstructed using the parity data, but at a cost; there is a significant performance impact while a disk is down due to the parity reconstruction overhead. Losing a second drive in a three-disk RAID 5 configuration will take the entire array offline and the data will be lost, so always replace a failed drive as soon as possible! Since write speeds are slower with RAID 5, it is not the best choice for Transaction Logs or Data Files. Backup drives are a prime candidate for this RAID level though, since write speed is not as important there.

RAID 6 is growing in popularity; it’s identical to RAID 5, except it adds an extra disk with another set of parity data. This RAID level requires a minimum of four disks and can handle up to two drive failures.

Pros: Fault Tolerance, Fast Reads (until a drive fails)
Cons: Slower Write Speeds, More Expensive than RAID 1 or 0

RAID 0+1 (Mirroring & Striping) Data is striped and the entire stripe set is then mirrored in this RAID level, which incurs the 50% storage redundancy loss. There is fault tolerance for a single drive failure, but that failure reduces the array to essentially RAID 0, with no remaining redundancy.

RAID 10 (Striping & Mirroring) Data is mirrored first and then striped with this method. As long as one drive from each mirrored pair remains, no outage will occur. With more fault tolerance, this is the preferred high-end method over RAID 0+1.

Pros: Fault Tolerance, Best Read/Write Speeds
Cons: Lots of Disks and Lots of Money

Both RAID 0+1 and 10 are hybrid RAID levels and provide the best read and write performance, but double the required storage. This is what everyone wants but not everyone can afford. It combines the best of the previous configurations but requires many disks and therefore a big budget. These RAID levels are best suited for high traffic, like your Data and Log files. If you cannot afford RAID 10, RAID 5 is a decent alternative though.

String Manipulation in T-SQL or PowerShell

I don’t often have to do string manipulation in T-SQL, but when I do, it’s rare enough that I don’t remember the syntax. I’ve done it in the past, so I keep a script handy with a working example though. I recently added to that example after wanting to know not only the file names, but also the folder location.  This article will cover getting a string after a specific character. This can be used to get a folder path or just a file name.

The scripts are cumbersome, to say the least, in SQL. Some people use this problem as an interview question, but I feel like the only way you’d be able to answer this off the top of your head is if you did it on a daily basis. While I remember the general idea of switching the order and checking the text after a specific character, I couldn’t write this from scratch if my life depended on it. Even with intellisense helping me out, it’s hard for me to remember how CHARINDEX can be manipulated to get the correct answer here, since I use it so rarely. I can pull it from my script library quickly though, and now I’ll be able to refer to my own blog as well. (I do that every time I set up Transparent Data Encryption).

I had commented a quick explanation of each command into the code, but then found out WordPress refused to color-code the comments correctly, so I removed them. Now I’ll just explain it here instead.

SQL can do it

The below script will find the File Name of each master file by reversing the direction of the text and using CHARINDEX to get the length of the string until it reaches a specific character, in this case, the \. The RIGHT command then returns the number of characters at the end of the original string based on the inner index command. The -1 removes the \ from the returned results.

To get just the folder path, perform the same command again, but this time issue a REPLACE command for the FileName, replacing it with an empty string.

In order to get a single folder, you combine the steps above to get just the FolderPath, and then once again get the last value after the \. This returns the final folder in the list. This may not be the optimal way to find the results, but it uses the same methods in order to keep it relatively simple.


SELECT
mf.name
,mf.physical_name
,[FileName] = RIGHT(physical_name,CHARINDEX( '\', REVERSE(physical_name)) -1)
,[FolderPath] = REPLACE(physical_name, '\' +RIGHT(physical_name,CHARINDEX( '\' ,REVERSE(physical_name)) -1), '')
,[FileFolder] = RIGHT(REPLACE(physical_name, '\' +RIGHT(physical_name,CHARINDEX( '\' ,REVERSE(physical_name)) -1),''),CHARINDEX( '\' ,REVERSE(REPLACE(physical_name, '\' +RIGHT(physical_name,CHARINDEX( '\' ,REVERSE(physical_name)) -1), ''))) -1)
FROM sys.master_files mf

PowerShell just does it better

On the other hand, I feel the syntax to perform the same string manipulation is much more intuitive in PowerShell. It’s definitely less code, although I will concede that I assigned variables here, while with the SQL approach I chose to do it all in a single statement. Having no variable assignments makes the SQL much harder to read.

Leveraging the Split method and the array Length is about all you need in PowerShell. Split creates an array from the string, with a new element for each occurrence of the character supplied in the parentheses, that \ again. Indexing the array based on the length of the newly split variable returns any folder level or the file name alone; it just depends on the number you subtract from the length.


$FullPath = "C:\Users\Test\Desktop\Text.txt"

$Split = $FullPath.Split('\')
$File = $Split[$Split.Length-1]
$LastFolder = $Split[$Split.Length-2]
$FolderPath = $FullPath.Replace($File,'')

$FullPath
$FolderPath
$LastFolder
$File

 

Getting Jobs by Run Time

New month, so I have to make sure I write something about SQL again, instead of babbling about PowerShell all the time! I manage a lot of servers, so scripting out something that I can run on all the servers rather than pulling up the GUI on one server at a time is always important.

Checking to see when jobs ran on a single server is a pretty quick task. You just have to pull up the Job Monitor, and you’ll have times and even more information easily accessible. Doing that on a dozen servers is tedious though. Doing it on a hundred would be a nightmare. Beyond that, checking specific times through the Job Monitor can be a bit of a hassle, as it displays a summary of the last execution times. What if you want to know what happened 5 hours ago, regardless of whether jobs have run since then?

You can query SQL for job history, and with a lot of time conversions, you can get a list of jobs that ran within a specific time period. I use this to narrow down what was happening on a box while troubleshooting. If you need to review long-running jobs over a period of time, compare run times of jobs across servers, or just see what jobs might have been slowing performance at 3:03 am, this script can help you out.

The script takes a start and end time and checks for any jobs that started within that window, or that started earlier but were still running at your supplied start time.

Calculating the start and end times from job history is ugly; you’ll see that the datetime conversions below are disgustingly complicated. The short and sweet explanation is that date, time, and duration are all stored as integers, because Microsoft thinks everything should be a number. For example, a run_time of 30300 means 3:03:00 AM, and a run_duration of 13542 means 1 hour, 35 minutes, 42 seconds. To properly calculate date and time, then display them in a readable format, you have to stuff in colons and use modulus division to get the hours, minutes, and seconds displaying correctly. If you really like math (I do!), that’s a technical explanation of modulo.

The run_status will tell you whether the job succeeded. According to the SQL Server 2000 documentation on TechNet, there is a run_status of 4, meaning the job is still in progress. Newer versions don’t mention a run_status of 4, and I’ve never seen it in a result set, even when I had jobs running at the time. Will it work for you? I really don’t know; I don’t trust it. For newer versions of SQL, I’d suggest running a system stored procedure instead to get currently executing jobs; the code for that is provided after my script.


DECLARE  @StartTime INT = '195500' --HHMMSS
,@EndTime INT = '195600' --HHMMSS
,@Date  INT = CONVERT(INT,CONVERT(VARCHAR,GETDATE(),112))
,@MinTime INT = 10  --Job Run Time Minimum in Seconds

SELECT
[JobName] = j.name
,[StartDate] = CONVERT(VARCHAR,CONVERT(DATETIME,RTRIM(jh.run_date)),101)
,[StartTime] = CONVERT(TIME(0),STUFF(STUFF(REPLACE(STR(RTRIM(jh.run_time),6,0),' ','0'),3,0,':'),6,0,':'))
,[EndTime] = DATEADD(second,run_duration%100,DATEADD(minute,run_duration / 100%100,DATEADD(hour,run_duration / 10000,CONVERT(TIME(0),STUFF(STUFF(REPLACE(STR(RTRIM(jh.run_time),6,0),' ','0'),3,0,':'),6,0,':')))))
,[Duration] = STUFF(STUFF(REPLACE(STR(jh.run_duration,7,0),' ','0'),4,0,':'),7,0,':')
,[RunStatus] =
CASE jh.run_status
WHEN 0 THEN 'Failed'
WHEN 1 THEN 'Succeeded'
WHEN 2 THEN 'Retry'
WHEN 3 THEN 'Cancelled'
WHEN 4 THEN 'InProgess'
ELSE 'Unknown'
END
FROM msdb.dbo.sysjobs j
INNER JOIN msdb.dbo.sysjobhistory jh ON jh.job_id = j.job_id
WHERE run_date = @Date
AND step_id = 0 -- Entire Job
AND run_duration > @MinTime
AND
(
( run_time >= @StartTime AND run_time <= @EndTime )
OR
( Run_Time < @StartTime
AND
DATEADD(second,run_duration%100,DATEADD(minute,run_duration / 100%100,DATEADD(hour,run_duration / 10000,CONVERT(TIME,STUFF(STUFF(REPLACE(STR(RTRIM(jh.run_time),6,0),' ','0'),3,0,':'),6,0,':'))))) > CONVERT(TIME,STUFF(STUFF(REPLACE(STR(RTRIM(@StartTime),6,0),' ','0'),3,0,':'),6,0,':'))
)
)
ORDER BY Run_Date,Run_Time

Note that jobs that are still executing do not have an entry in job history, so combining this script with the executing-jobs system stored procedure is a good idea, especially if you are dealing with a recent time window. You can add the parameter @Category_Name = ‘jobs from msx’ to return only the jobs pushed down from your master (MSX) server.


EXEC msdb.dbo.sp_help_job @Execution_Status = 1

PowerShell: Custom Objects and Tables

Returning data in PowerShell is normally an automatic feature, but oftentimes you will need to specially format or splice some data into a single table. There are a lot of ways to go about it, but none of them are exactly intuitive. I’ll list my four preferred methods, all using a moderately complex statement that I’ve previously mentioned.

Method 1 – String

The first method is probably the easiest one to remember syntax-wise. You create an empty variable, pipe column names to it, add results to the table, and finally return the result. This will leave the data as a string, which has certain pros and cons. It supports multiple rows of data just fine.


## Simple Example - String ##
$Results = '' | SELECT ComputerName,UserName,Processes,Services
$Results.ComputerName = $Env:ComputerName
$Results.Username = $Env:UserName
$Results.Processes = (Get-Process *).Count
$Results.Services = (Get-Service *).Count
$Results | Format-Table -Autosize


A sample result of the custom table

A much more complex, but realistic, example is below. I’m querying network adapter data to get the name of the adapter and match it to the IP address, then returning the result set.


## Advanced Example - String ##
$Adapters = Get-WmiObject win32_networkadapter | WHERE {$_.NetConnectionId -NOTLIKE '' -AND $_.NetConnectionId -NOTLIKE 'Local Area Connection*' }
#  $ErrorActionPreference = 'SilentlyContinue'
$Info = ForEach($Adapter in $Adapters)
{
$Results = '' | SELECT ComputerName,AdapterName,IpAddress
$Results.ComputerName = $Env:ComputerName
$Results.AdapterName = $Adapter.NetConnectionId
$Results.IpAddress = (Get-WmiObject win32_networkadapterconfiguration -Filter "Index = $($Adapter.DeviceId)").IpAddress
$Results
}
$Info | FT -auto

I’ll continue to show the simple and complex example for each custom object method.

Method 2 – Hashtable to PsCustomObject

Method 2 is very similar to Method 1. It’s actually the first one I learned, so I use it the most. This creates a hashtable instead of a string, then converts it to a custom object so it will display nicely as a table. Older versions of PowerShell might not format it as a table, instead listing the results in two columns. One of my machines displays it as a list, while the other, with a newer version of PowerShell, displays it as a table.

## Simple Example - HashTable to PsCustomObject ##
$Results = @{}
$Results.ComputerName = $Env:ComputerName
$Results.Username = $Env:UserName
$Results.Processes = (Get-Process *).Count
$Results.Services = (Get-Service *).Count
[PsCustomObject]$Results | Format-Table -Autosize

The advanced hashtable example isn’t overly complicated; the setup is only a bit different.

## Advanced Example - HashTable ##
$Results = @()
$Adapters = Get-WmiObject win32_networkadapter | WHERE {$_.NetConnectionId -NOTLIKE '' -AND $_.NetConnectionId -NOTLIKE 'Local Area Connection*' }
ForEach($Adapter in $Adapters)
{
 $Results += [PSCustomObject] @{
 ComputerName = $Env:ComputerName
 AdapterName = $Adapter.NetConnectionId
 IpAddress = (Get-WmiObject win32_networkadapterconfiguration -Filter "Index = $($Adapter.DeviceId)").IpAddress
}#EndCustomObject
}#EndForEach
$Results | FT -auto

Method 3 – Custom Table

This method is very syntax heavy with lots of punctuation. It’s a good method for creating custom column names in a table, as you can use the @{l='ColumnName';e={$Value}} format for any column. Just remember that l is for label and e is for expression, and you can mix custom column names with default ones.

## Simple Example - Custom Table ##
Get-Process * | SELECT -First 1 |
Format-Table @{l='ComputerName';e={$Env:ComputerName}},@{l='UserName';e={$Env:UserName}},
@{l='Processes';e={(Get-Process *).Count}},@{l='Services';e={(Get-Service *).Count}} -AutoSize

The advanced example is very similar to the simple example for this method. Just remember to keep track of those brackets. They can be overwhelming.

## Advanced Example - Custom Table ##
Get-WmiObject win32_networkadapter |
WHERE {$_.NetConnectionId -NOTLIKE '' -AND $_.NetConnectionId -NOTLIKE 'Local Area Connection*' } |
Format-Table @{l='ComputerName';e={$Env:ComputerName}},@{l='Adaptername';e={$_.NetConnectionId}},
@{l='IpAddress';e={Get-WmiObject win32_networkadapterconfiguration -Filter "Index = $($_.DeviceId)" |
Select-Object -Expand ipaddress}} -auto

Method 4 – Add Member

Coders may tell you that this is the preferred method, but I feel the syntax and execution are too troublesome for just displaying some query results. This method returns a PSCustomObject. There is a lot of syntax here, but it’s mostly repetitive and easy to read. You also have the benefit of explicitly defining the MemberType, although you’d normally just leave it as NoteProperty. My major complaint is that it does not easily support multiple rows of data; I’ll explain in the advanced example.

$Results = New-Object PSObject
Add-Member -InputObject $Results -MemberType NoteProperty -Name ComputerName -Value $Env:ComputerName
Add-Member -InputObject $Results -MemberType NoteProperty -Name UserName -Value $Env:UserName
Add-Member -InputObject $Results -MemberType NoteProperty -Name Processes -Value (Get-Process *).Count
Add-Member -InputObject $Results -MemberType NoteProperty -Name Services -Value (Get-Service *).Count
$Results | Format-Table -Autosize

The advanced example requires some consideration. You’ll need to make sure that the New-Object command is inside the ForEach loop, creating a new object each time; otherwise, you’ll just repeat the same first row of data over and over. The Add-Member command creates the columns, so if they already exist, you just get errors. With the looping, we are generating one small object per adapter and splicing them into a single collection. At the end of each ForEach iteration, we append the generated object to another variable that populates the custom table and displays the data.

$Adapters = Get-WmiObject win32_networkadapter | WHERE {$_.NetConnectionId -NOTLIKE '' -AND $_.NetConnectionId -NOTLIKE 'Local Area Connection*' }
$Results = @()  # start with an empty array so += appends each adapter's object
ForEach($Adapter in $Adapters)
{
$obj = New-Object PSObject
Add-Member -InputObject $obj -MemberType NoteProperty -Name ComputerName -Value $Env:ComputerName
Add-Member -InputObject $obj -MemberType NoteProperty -Name Adaptername -Value $Adapter.NetConnectionId
Add-Member -InputObject $obj -MemberType NoteProperty -Name MacAddress -Value $Adapter.MacAddress
Add-Member -InputObject $obj -MemberType NoteProperty -Name IpAddress -Value (Get-WmiObject win32_networkadapterconfiguration -Filter "Index = $($Adapter.DeviceId)").IpAddress
$Results += $obj
}#EndForEach
$Results | Format-Table -Autosize

In my opinion, use whichever method fits your needs and is the easiest to implement and remember. Personally, I use method 1 and 2 the most.

PowerShell: Get Network Adapters with WMI

I thought I had already blogged about this topic, but evidently not. I was in the middle of finishing another post with the intention of linking back to this one, when I realized it did not actually exist.

The script I’m about to explain has native cmdlet equivalents in PowerShell now, but only if you are using Windows 8 or Server 2012. I don’t have that option most of the time professionally, so I can’t use Get-NetAdapter or Get-NetIPAddress. Too bad; it would solve some of my problems, though I’d still have to splice those two commands together to get all the results I want. The work-around on older systems requires combining two WMI classes instead.

The snippet below will give you all the properties each WMI class can return. I need to match IP addresses with the name of the adapter. NetworkAdapterConfiguration can get me the IP, but NetworkAdapter has the personalized connection name. After reviewing the properties of each class, you might say, “but NetworkAdapter does have a Name property!” Sure it does, but what it actually returns is more like the product model information. I want the personalized name of the adapter, which is NetConnectionId.


Get-WmiObject win32_NetworkAdapter | GM -Membertype Property

Get-WmiObject win32_NetworkAdapterConfiguration | GM -Membertype Property

The script I have to run is moderately complex. First I get the adapter information and assign it to a variable, excluding blank connection IDs and default Local Area Connections. I don’t care about default stuff; if you do, just drop the WHERE clause. I then perform a ForEach loop to process each adapter. The IP address is matched using the Filter parameter, passing in each adapter’s DeviceId, the unique adapter id that is exposed as Index on the configuration class. Without that embedded second WMI query for the IP, filtered on the id, any data returned couldn’t be trusted to match the right adapter.

The results are saved to a string variable which creates a custom table. This is necessary to get the data from two different queries in a single, readable result set.


$Adapters = Get-WmiObject win32_networkadapter | WHERE {$_.NetConnectionId -NOTLIKE '' -AND $_.NetConnectionId -NOTLIKE 'Local Area Connection*' }

$Info = ForEach ( $Adapter in $Adapters )
{
 $Results = '' | SELECT ComputerName,AdapterName,MacAddress,IpAddress
 $Results.ComputerName = ($Adapter.SystemName)
 $Results.AdapterName = $Adapter.NetConnectionId
 $Results.IpAddress = (Get-WmiObject win32_networkadapterconfiguration -Filter "Index = $($Adapter.DeviceId)").IpAddress
 $Results.MacAddress = $Adapter.MacAddress
 $Results
}
$Info | FT -auto

There are a lot of ways to display and format data with custom objects and tables, and I’ll discuss a few of those in my next post.