SSMS 2016 Policy Management Quote Parsing Error

I discovered a bug today in SQL Server Management Studio (SSMS) 2016 when creating and updating policies. It cost me a lot of time and drove me crazy until I realized what was going on. Hopefully this will get fixed fast; I'm reporting it immediately because I couldn't find any existing references to it. Special thanks to Kenneth Fisher for helping confirm that it wasn't just affecting me.

The Problem

In the latest releases of SSMS 2016 (16.5.1 and newer), saving a policy condition strips the doubled single quotes out of its code, causing parse errors.

Vote up the Connect item. A fix for this should be released in the next few weeks, but it doesn’t hurt to show your support for Policy Based Management.


I’ll walk through a full, simplified policy creation showing how I discovered the problem, but it can be recreated by just editing a condition.

I created a new policy named Test and a new condition, also named Test. I set the condition facet to Server and entered the following code into the field to create an ExecuteSql statement. Every quote inside the string has to be escaped by doubling it.

Executesql('string',' Select ''One'' ')


Once the code was entered, you can see below that it parsed correctly. SSMS was happy with it, so I hit OK to continue.


I finished creating the policy, and everything still looked fine.


I then went to Evaluate the policy. The policy failed, as I expected; that’s not the point. If you look closely, you’ll notice that the Select One statement is no longer surrounded by doubled single quotes. That shouldn’t have happened.


I opened the Condition itself and received a parse error. Without the required doubled quotes, the Condition was broken.


I tested this further by creating and editing a condition without any policy or evaluation involved, and got the same results using SSMS 2016 on two separate computers, on versions 16.5.1 and 17.0 RC1. With SSMS 2012 or 2014, the code was not altered and everything worked as it should. Finally, Kenneth happened to have an older version of SSMS 2016 and could not reproduce the error until he updated to the latest version, indicating that the bug was recently introduced.

And again, if you haven’t already, vote up the Connect item.


Failover Cluster Manager Connection Error Fix

A few days ago I encountered a new error with Failover Cluster Manager.  A couple of servers had been rebuilt to upgrade them from Windows Server 2008 to 2012. They were added back to the cluster successfully. However, one of the servers would not open Failover Cluster Manager properly, and tracking down the solution took a long time.

The problem server successfully joined the cluster, but now it would not connect to the cluster using Failover Cluster Manager. If you opened up the application, it didn’t try to automatically connect, and manually connecting with the fully qualified name failed too. Below is the generated error.


I love how this error has absolutely no useful information to it. Luckily I was able to track Error 0x80010002 down online.

Research indicated that there was some sort of WMI error on the computer. Rebooting didn’t help, and numerous attempts to correct and rebuild the WMI repository accomplished little. Eventually the server could connect to the cluster, but only about 30% of the time, and it nearly timed out even when it did succeed. The cluster still never connected automatically.

After further poking around on the internet, I found a few suggested solutions, with my ultimate fix closely following this post. I still had to combine everything together and run scripts all over the cluster before things returned to normal.

First of all, this is a condensed version of the Cluster Query from the TechNet post linked above.

1) Cluster Query

$Nodes = Get-ClusterNode
ForEach ($Node in $Nodes)
{
    If ($Node.State -eq "Down")
    { Write-Host "$Node : Node down skipping"; Continue }
    Try {
        # Query the cluster WMI class on each node; a failure points to a broken WMI repository
        $Result = (Get-WmiObject -Class "MSCluster_CLUSTER" -NameSpace "root\MSCluster" -Authentication PacketPrivacy -ComputerName $Node -ErrorAction Stop).__SERVER
        Write-Host -ForegroundColor Green "$Node : WMI query succeeded"
    }
    Catch {
        Write-Host -ForegroundColor Red "$Node : WMI Query failed" -NoNewline
        Write-Host  " //"$_.Exception.Message
    }
}

Any server that throws an error with the above query needs to have the following scripts run on it:

2) MOF Parser
This will parse the cluster WMI file (cluswmi.mof) and recompile it into the WMI repository.

cd c:\windows\system32\wbem
mofcomp.exe cluswmi.mof

FCM was still not working correctly, so I reset WMI with the following command.

3) Reset WMI Repository

Winmgmt /resetrepository

That will restart the WMI service, so you’ll probably have to run it multiple times until all the dependent services have stopped. Either way, the command shouldn’t take more than a few seconds to process.

After that, the server that failed the Cluster Query (1) was reporting good connections, but FCM still wouldn’t open properly!

I decided to try the two WMI commands (2 & 3) again on the original server that couldn’t connect to FCM. I had already run those commands there during the initial troubleshooting, so I was starting to think this was a dead end. Still, it couldn’t hurt, so I gave it a shot.

I reopened FCM and voila! Now the cluster was automatically connecting and looking normal.

As a further note: after everything appeared to be working correctly, SQL Server had trouble validating connections to each node in the cluster during install, and I had to run commands 2 & 3 on yet another node before things worked 100%, even though that node had never failed the Cluster Query (1).

SQL Server Storage: Reading Block Size

This is the final post in the SQL Server Storage line of blog posts I’ve made. First we discussed Pages & Extents, then how to Read A Page using T-SQL, and finally this post will be about Disk Partition Offsets and Allocation.

Knowing that Pages are 8 KB and Extents are 64 KB, it’s understandable that SQL Server performs best when the disks are aligned in the same manner. However, in earlier versions of Windows, this was never the case. Instead, Windows aligned partitions at 63 sectors of 512 bytes each, for a 31.5 KB offset. Since 31.5 KB is not a multiple of the 64 KB extent size, the way SQL Server reads and writes was not aligned with Windows, and havoc ensued in the form of increased I/O operations. As of Windows Server 2008, partitions are aligned at 1024 KB by default, providing fewer I/O operations since 1024 is a multiple of 64.

I tried to keep the above explanation simple, but it still got wordy. The takeaway should be: you want SQL Server disks to have a Starting Offset that is a multiple of 64 KB, with the desirable default being 1024 KB. Higher values are fine for special circumstances, just so long as your Starting Offset is evenly divisible by 64.
As for your block size, that should also be 64 KB, or the size of a single Extent, to improve performance.

There is a single script to find both Block Size and Starting Offset, and it almost even works!

Get-WmiObject win32_DiskPartition |
	SELECT SystemName, Name, BlockSize, StartingOffSet |
	FT -Auto

That script returns the correct Starting Offset, but that Block Size is a screwy number that should not be trusted. To make matters worse, you get a Partition Number, but no Drive Letter. That doesn’t help me understand if my data disk is formatted correctly. In comes another query.

Get-WmiObject -Class win32_Volume |
    FT DriveLetter, Label, BlockSize -Auto

Great, now we have an accurate BlockSize and a Drive Letter! The only problem is, now there is no Partition Number to match up with the OffSet from the first query. Getting ALL the information at one time is a pain. Luckily there are Custom Objects to help us smash results together and Win32_LogicalDisk to help us relate the drives and partitions.

$Disks = Get-WmiObject Win32_LogicalDisk | WHERE { $_.DriveType -eq '3' }
$Result = ForEach ( $Disk in $Disks )
{
    # Find the partition backing each logical disk
    $OffSet = Get-WmiObject -Query "Associators of {Win32_LogicalDisk.DeviceID='$($Disk.DeviceID)'} WHERE ResultRole=Antecedent"
    [PSCustomObject] @{
        ComputerName   = $Disk.SystemName
        DriveLetter    = $Disk.DeviceID
        Partition      = $OffSet.Name
        BlockSize      = ( Get-WmiObject Win32_Volume | WHERE { $_.DriveLetter -eq "$($Disk.DeviceID)" } | SELECT -expand BlockSize )
        StartingOffSet = $OffSet.StartingOffset
    }
}
$Result | FT -Auto

Now you can easily determine the Block Size and Starting Offset of your drives. Changing those values, well, that’s another story; normally it requires reformatting the drive, so it’s best to get it right in the first place. Make sure all your SQL Server drives are formatted correctly before you get started, and educate your friendly Storage Admin on what SQL Server needs so everything is correct by the time you get it.

SQL Server Storage: Reading Pages with T-SQL

This is a continuation of last week’s SQL Server Storage: Pages and Extents explanation. Since the description ran longer than I originally expected, I decided to split the scripts into more posts.

First, I’m going to quickly cover how you can see information about a specific page using T-SQL. Normally this won’t be of much use to you, but it’s fun to play around with just to see how things are actually stored, and you might need to read a page during some heavy troubleshooting at some point in the future. The command is undocumented though, so information is scarce and it could disappear without notice in future versions.

To read a page, you’ll need to utilize DBCC PAGE, whose basic layout I’ve listed below.

DBCC PAGE
(
  'DbName' OR DbId -- Database name or Database ID, either one!
  ,FileNumber -- File Number of the Page
  ,PageNumber -- Page Number in the File
  ,PrintOption -- Display option ranging from 0-3 with different info from each
)

Now we need to know what FileNumber and PageNumber to supply to DBCC PAGE. Random numbers might work, but if you are actually trying to do anything halfway useful, they won’t get you far. To solve this problem, we have to utilize another command: DBCC IND.

DBCC IND
(
  'DbName' OR DbId -- Database name or Database ID, either one!
  ,TableName -- Table Name...I don't think this really needs a comment
  ,IndexId -- index_id from sys.indexes; -1 = indexes and IAMs, -2 = IAMs
)

Using DBCC IND we can get some relevant data to pass into DBCC PAGE. The problem is, you still need a relevant index_id for the selected table. The following query can help with that; just supply the desired TableName in the WHERE clause.

--Get the index_id relating to your desired TableName to pass into DBCC IND
SELECT * FROM sys.indexes
WHERE OBJECT_NAME(object_id) = 'TableName'


A sample result of sys.indexes. Grab the index_id and plug that into DBCC IND


With an index_id and a TableName in mind, we can get some results from DBCC IND.
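As an example, with a hypothetical database and table (swap in your own names), the call looks like this; the index_id comes from the sys.indexes query above.

DBCC IND ('AdventureWorks', 'Person.Person', 1)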


If you are just testing, index_id = 1 is not a bad idea to check.

Awesome, now we have meaningful ids to use with DBCC PAGE. You’ll need to do one more thing before you run it though. Trace flag 3604 has to be set for SQL to provide output. Without it, you won’t get any results at all.


No Trace Flag means no results


With the Trace Flag on, you’ll get more data than you probably know what to do with.
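A complete run might look like the sketch below; the database name and page number are placeholders, with the FileNumber and PageNumber taken from your DBCC IND output.

DBCC TRACEON (3604);  -- sends DBCC output to the client so results are displayed
DBCC PAGE ('AdventureWorks', 1, 288, 3);  -- file 1, page 288, print option 3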


DBCC Page provides LOTS of information, I only captured a tiny snippet for your visual delight

Now you know how to get page information about your tables, as long as these undocumented commands remain available. Since this post again got longer than I originally expected, I’ll cover reading and setting block sizes on your disks next week. That will involve some fun PowerShell too!

SQL Server Storage: Pages and Extents

It’s time for another SQL Server refresher today! This time we will discuss some storage basics, specifically Pages and Extents and how they relate to each other. There are a lot of resources out there discussing these storage units, but I’ve tried to put my own spin on things and aggregate as much data as I could find about them.


Naturally we are going to discuss pages first, since they are the most fundamental unit of storage for SQL Server. Pages store everything in the database and are only 8 KB in size. All disk I/O is performed at the page level.


An example data page layout is easier to visualize than explain

Pages have three major components: a page header, records, and the row offset array. Pages start with a 96 byte header which contains meta-data, like the page number, owner’s object id, and page type. Pages end with the row offset array, which holds two-byte pointers to each row stored in the page. These pointers are stored last to first, but that’s more easily explained in the picture. The offset array is essentially the index for the page. The middle of the page holds the records themselves, where a single row can use up to 8060 bytes of stored data.

There are different types of pages, such as data, index, image, and a number of informational pages. Probably the most interesting type is the overflow page. If a row is greater than 8060 bytes, the data can be stored on overflow pages, which are linked together. Overflow pages can store as much as 2 GB in a single column, but this is obviously less than ideal: each extra page increases read times and hurts performance. The most obvious example of this situation is the VARCHAR(MAX) or VARBINARY(MAX) datatypes. Datatype limitations normally relate directly to the size of a page; VARCHAR(8000) and NVARCHAR(4000) are the normal limits and are based on the size of a single page, while (MAX) datatypes effectively bypass the limit and cause overflow pages. Since (MAX) values that span multiple pages increase reads and cause less than stellar performance, queries like SELECT * that can accidentally grab these poorly performing datatypes should be avoided as much as possible.
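If you're curious whether any of your tables have row-overflow allocations, the standard catalog views can tell you; this is a minimal sketch:

-- Lists objects with row-overflow allocations and how many pages they use
SELECT OBJECT_NAME(p.object_id) AS TableName,
       au.type_desc,
       au.total_pages
FROM sys.allocation_units AS au
JOIN sys.partitions AS p ON au.container_id = p.hobt_id
WHERE au.type_desc = 'ROW_OVERFLOW_DATA';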



Extents can be Uniform and have all the same type of pages or Mixed with a combination of page types

Simply put, extents are groups of pages. Extents consist of exactly eight contiguous pages, with every page being part of an extent. In other words, pages always come in groups of eight, so data grows in a minimum of 64 KB increments. Unlike the many types of pages, there are only two types of extents.

Mixed Extents: In these extents, pages are allocated to multiple objects, or different types of pages. New tables and indexes are placed into mixed extents for storage efficiency. When a small table is created that would use fewer than eight pages, it gets stored in a mixed extent with other similarly small objects. Once an object grows large enough to fill an entire extent, it can utilize the next type of extent.

Uniform Extents: These extents have pages that are all allocated to the same object. Larger databases often have extents with identical page types, such as data pages or index pages. More data can be read in a single read operation with uniform extents, so performance can see an improvement.

Originally I had planned to provide some example scripts to discover information about your pages, and storage setup, but in an effort to keep the information in byte-sized chunks, I’ll continue with that next week.

Remotely Enable Always On

Always On Availability Groups is the new feature for High Availability in SQL Server 2012. It’s been out for a while now, but unless you have Enterprise Edition SQL, you might not have been able to use it much.

Of course you need a cluster to utilize Always On, but once that is complete, you also have to enable Always On in Configuration Manager on every server that will participate in the AG.

Continuing on with my lazy, automated DBA goals of logging into computers as rarely as possible, I developed the below PowerShell script to connect to SQL Servers, enable Always On, and then restart the SQL Service in order for the changes to take effect.

The only thing you need to change below is the computer names; the script should automatically detect your SQL instance names. If that doesn’t work (I haven’t been able to test every possible name parsing possibility), you can supply the instance names yourself.

## List Servers in AG Here ##
$Computers = 'Computer1','Computer2'
## Everything Else is Automated ##

# Finds the servers running the services, and the services' names
# Note: Enable-SqlAlwaysOn requires the SQLPS module on each remote server
Invoke-Command -ComputerName $Computers -Scriptblock {
    $Services = ( Get-Service MsSql* | Where { $_.Status -eq 'Running' } )
    $Nodes = @()

    # Parses the instance names out of the service display names, e.g. "SQL Server (MSSQLSERVER)"
    ForEach( $Service in $Services )
    { $Nodes += $Env:ComputerName + '\' + $Service.DisplayName.Split('(')[1].Replace(')','') }

    # Loops through each instance and enables AlwaysOn, restarting the service with -Force
    ForEach ( $Node in $Nodes )
    { Enable-SqlAlwaysOn -ServerInstance $Node -Force }

    # Starts the SQL service on the affected server(s) if it's still stopped
    If( $Services -ne $NULL )
    { Start-Service -DisplayName ( $Services.DisplayName ) }
}

The key code here for enabling Always On is this snippet below.

Enable-SQLAlwaysOn -ServerInstance $Node -Force

If the longer script cannot automatically detect your ServerInstance, you can provide it manually and run the command. Restart your SQL service for the change to take effect.

SQL Server Changing Passwords and an SSPI Context Error

The other day I encountered a login error when connecting to a SQL Server. The circumstances seemed strange compared to similar errors described online, and many of those solutions seemed rather complicated. Since this server had been around for a while, it was unlikely that some major Active Directory change would be necessary to resolve the issue.

This SQL Server was part of an Availability Group, and connections using the Server\Instance name worked fine; however, attempting to connect via the Listener produced the following error.

Cannot connect to Server/Instance.
The target principal name is incorrect.
Cannot generate SSPI context (Microsoft SQL Server)

Articles online indicated this was an SPN, Kerberos, and/or Active Directory issue, and something needed to be reset, but the only way to know for sure was to continue down a long troubleshooting list. Luckily, the problem was simpler than that, but still very strange.

I had reset the service account passwords the afternoon before this error became apparent. Each service was restarted afterwards to verify the change worked properly, and I successfully connected to SQL. Everything seemed fine from my perspective.

The next day, some users attempted to connect using the Listener, and that’s when the errors started. I don’t normally connect via the Listener, so I hadn’t thought to check it; I didn’t think it would be necessary.

Troubleshooting the easy solutions first seemed like a good idea, so I tried restarting the SQL service, which immediately failed everything over to another server in the cluster. The services came online, and now both the instance and the Listener could be connected to. OK, so it was probably sort of solved.

I failed it over to a third node in the cluster, and everything still worked great. Cool. This was looking even better.

Next I failed it back to the original node. This time, the SQL service came online, but not the Listener. Strange; how did it work in the first place? Everything was running on that server before I restarted the service, even if it wasn’t running correctly. I re-entered the passwords in SQL Server Configuration Manager and restarted the services, and everything worked perfectly.

In summary, somehow all the services restarted on the server after the password change, but the Listener had a bad password and was not allowing connections. When I attempted to restart the Listener again, it failed until the password was corrected. I still don’t know how this happened, but it’s a good reminder to be especially careful when changing service passwords.  Changing passwords on a cluster can be even more dangerous since you have extra services to update that may not even be running on the server at the time, so verifying everything went smoothly can take a few extra steps.

Tales of when a Log Fails to Shrink in an Availability Group

I received a report that one of my servers had 7% free space on its log drive. Sounded like something fun to resolve. I checked what was going on and found a log file that was 99% free and 100 GB in size. Shrinking a log file is not a good practice, and I’m not advocating it by any means: it’s just going to grow again, and your storage is there specifically to hold logs. But this situation was out of the ordinary, and we needed the space.

The problem was, this log would not shrink. It was being extremely uncooperative. I took a full backup and log backups and made multiple shrink attempts, but it wouldn’t budge. The message returned was a big clue though.

The log for database 'dbname' cannot be shrunk until all secondaries have moved past the point where the log was added.

As you might have guessed, this server was a SQL Server 2012 instance and in an Always On Availability Group. The database in question could not shrink because it was participating in the AG.

It wasn’t an ideal fix, but by removing the database from the Availability Group, I was able to shrink the log to a more manageable size. No, I did not truncate it to the minimum size; I adjusted it to a reasonable amount based on its normal workload, because I didn’t want the log to just have to grow again. The shrink worked flawlessly, and with adequate drive space, I attempted to add the database back to the AG via the wizard.

The AG wizard refused to help. The database was encrypted, and the AG wizard will not let you add an encrypted database. No explanation why; it just doesn’t like that. You can add an encrypted database to an AG via script though. You can even generate the script from the wizard by using a non-encrypted database and then changing the database name in the scripted result. The resulting script is exactly what the AG wizard would run; it just cannot execute it automatically.
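For reference, here’s a minimal sketch of the whole workaround, with placeholder AG and database names. Depending on how long the database is out of the AG, the secondaries may also need log backups restored before they can rejoin.

-- Run the REMOVE/ADD steps on the primary replica
ALTER AVAILABILITY GROUP [MyAG] REMOVE DATABASE [MyDatabase];

-- Shrink the log to a reasonable working size (8 GB here), not the bare minimum
USE [MyDatabase];
DBCC SHRINKFILE (N'MyDatabase_log', 8192);

-- Add the database back; this is the same statement the wizard scripts out
ALTER AVAILABILITY GROUP [MyAG] ADD DATABASE [MyDatabase];

-- Then join the database on each secondary replica:
-- ALTER DATABASE [MyDatabase] SET HADR AVAILABILITY GROUP = [MyAG];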


With free space and an encrypted database safely back in my AG, I was off to new adventures!

Recovery Model Comparison

This simple blog will cover the bulk of information you’ll need to make a fully informed decision on which Recovery Model to choose for your SQL Server databases. I know, that was horrible, but I couldn’t resist. Recovery Models are normally considered one of the more basic things to know about SQL Server, but refreshers are never a bad idea. Furthermore, if you are like me, you haven’t actually dealt with the Bulk Logged model much.
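Before we start, you can check which model each of your databases currently uses with a quick query against the sys.databases catalog view:

-- Shows the current recovery model for every database on the instance
SELECT name, recovery_model_desc
FROM sys.databases;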

Simple Recovery Model

The simple model constantly reclaims log space in order to keep storage requirements minimal. Transactions are still written to the log; however, once a transaction completes and the data is written to the data file, a checkpoint truncates the transaction log. This keeps the log small and allows the space to be reused.

While this model takes the least amount of planning and work to maintain, it also provides the least amount of protection. You cannot perform transaction log backups, so any changes since your last backup will be lost in the event of a disaster. There is no possibility of recovering to a specific point in time, and high availability solutions cannot be implemented. This model is best used for unimportant or test data. Databases that rarely change, or where losing the changes made between backups would have no impact, are also good candidates for Simple recovery.

Supported Backups:

  • Full
  • Differential

Not Supported:

  • Log Shipping
  • Database Mirroring
  • Always On Availability Groups
  • Point in Time Restore

Bulk Logged Recovery Model

This model performs minimal logging of bulk operations in order to reduce the size of the transaction log, while still supporting transaction log backups. In fact, it’s essentially identical to the Full Recovery model except that bulk operations are only minimally logged. Even so, you shouldn’t use this model long term; just go Full.

Bulk operation examples:

  • Bulk Imports (BCP, Bulk Insert)
  • SELECT INTO
  • Index creation and rebuilds (CREATE INDEX, ALTER INDEX REBUILD)

Point in Time recovery is not supported for log backups containing bulk logged operations (since those are only minimally logged), so data can be lost if the transaction log is damaged or if you need to restore into the middle of such a backup. This recovery model is best used during index maintenance or when inserting massive amounts of data: your transaction log won’t blow up in size, and you’ll be able to flip back to Full Recovery easily after your maintenance is complete (more on that later).

Supported Backups:

  • Full
  • Differential
  • Transaction Log

Full Recovery Model

With the Full Recovery model, you can prevent data being lost due to damage, and you finally have the ability to restore to a specific point in time. All transactions are stored in the log until a log backup occurs or the log is truncated, so if you fail to schedule regular log backups, your transaction log will grow until storage becomes a major problem. After a transaction completes, the data from the log is then written into the data file.

If the tail of the log is damaged, you can still lose the data since the last transaction log backup. This is the most appropriate recovery model for production data, as it’s the only one that provides point in time recovery.
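A log backup itself is a one-liner; the database name and path below are placeholders:

-- Schedule this to run regularly, e.g. every 15 minutes
BACKUP LOG [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_log.trn';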

Supported Backups:

  • Full
  • Differential
  • Transaction Log

Changing Recovery Models

The command to flip recovery models is very easy: just issue an ALTER DATABASE command against the desired database and set the desired model.

USE [master];

-- Substitute your database name; the model can be FULL, BULK_LOGGED, or SIMPLE
ALTER DATABASE [DatabaseName] SET RECOVERY FULL;

Flipping between recovery models requires a few considerations, which I’ve summarized below.

Full to Bulk: Backup the log for a last recovery point, then switch to Bulk Logged Recovery; backup the log again after the switch.
(Backup Log -> Switch to Bulk -> Backup Log)

Bulk to Full: Switch back to Full Recovery after the Bulk Logged operations complete, then backup the log.
(Switch to Full -> Backup Log)

Simple to Full/Bulk: Switch to Full or Bulk Logged Recovery, then perform a Full or Differential backup to start the log chain, and schedule or re-enable log backups. The change takes effect after the first backup.
(Switch to Full/Bulk -> Data Backup -> Backup Log)

Bulk/Full to Simple: Disable log backups, breaking the log chain, then switch to Simple Recovery and perform a Full or Differential backup.
(Disable Log Backups -> Switch to Simple -> Data Backup)
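Here’s what the Full to Bulk Logged and back cycle looks like in T-SQL. This is just a sketch; the database name and backup paths are placeholders.

-- Last recovery point before the bulk operation
BACKUP LOG [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_pre_bulk.trn';

ALTER DATABASE [MyDatabase] SET RECOVERY BULK_LOGGED;

-- ...run the bulk operations (index rebuilds, BULK INSERT, etc.) here...

ALTER DATABASE [MyDatabase] SET RECOVERY FULL;

-- Backup the log again so point in time recovery resumes from here
BACKUP LOG [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_post_bulk.trn';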


SQL Server Troubleshooting: Token-based Login Failure

In continuation of the forced mirroring failover procedure I posted last week, this post describes another level of pain you may encounter.

After forcibly failing a mirroring session to the secondary server, users were unable to connect to the SQL Server. The SQL Error Log was full of the same error message.


Login failed for user


Our SQL Server uses Windows Authentication, so that was a major hint. The solution was actually incredibly easy. Originally I assumed that an account was locked out or perhaps missing from the mirror server; who knew how long ago everything had been correctly synced.

There are two likely solutions to this issue.

UAC is blocking the connection attempts

I find this to be the less likely scenario, and since changing UAC settings requires a server restart, I highly suggest testing the next method first; it will probably solve your problem without a restart. If the issue is UAC, the server is probably a recent setup; otherwise I feel you would have noticed this before.

Recreate the User Account

More likely, the account in question needs to be dropped and recreated in SQL Server. If the user connects to SQL using a group or service account, you’ll need to track down the appropriate account they are a member of in Active Directory. Ask your AD administrator to look up the user’s group memberships if you don’t have access yourself, or don’t intuitively already know the answer.

Once you have the correct account, take note of its existing permissions, whether via screenshot, by scripting the login with a CREATE To statement, or by just remembering everything. Delete the existing account from the server-wide Security as well as each database’s Security; deleting an account at the server level does not cascade to the database level. In fact, SQL Server will warn you of this when you attempt the delete.

Now you’ll just need to recreate the account, providing the correct permissions and any relevant mappings. The token-based server access validation errors should be solved now.
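For reference, a minimal sketch of the drop and recreate, assuming a hypothetical Windows group; script out and reapply the real permissions and mappings for your account.

-- Hypothetical login name; capture existing permissions before dropping
DROP LOGIN [DOMAIN\AppUsers];
CREATE LOGIN [DOMAIN\AppUsers] FROM WINDOWS;

-- Recreate the database user(s) and role memberships the login had, for example:
USE [MyDatabase];
CREATE USER [DOMAIN\AppUsers] FOR LOGIN [DOMAIN\AppUsers];
ALTER ROLE [db_datareader] ADD MEMBER [DOMAIN\AppUsers];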