Get and Set Folder Permissions with PowerShell

Managing permissions for numerous servers is the theme today. Drilling down into a folder, right-clicking Properties, then reviewing security on the same folder for potentially dozens of computers is time-consuming and, with the capabilities of scripting, unnecessary.

PowerShell lets us do this very easily. The first script allows you to view each account and their corresponding read/write permissions on any number of computers. By default the script will only search the local computer. You can filter to only display a specific right. A full list and explanation of each right is available here.

Function Get-Permission {
  [CmdletBinding()]
  Param (
    [string[]]$ComputerName = $Env:COMPUTERNAME,
    [Parameter(Mandatory=$true)][string]$Folder,
    [string]$Rights
  )
  Process {
    $ComputerName |
    ForEach-Object {
      $Server = "$_"
      Write-Verbose "Getting Permissions for \\$Server\$Folder"
      (Get-Acl "\\$Server\$Folder").Access |
      Where-Object { $_.FileSystemRights -LIKE "*$Rights*" } |
      Select-Object IdentityReference, FileSystemRights, AccessControlType
    }
  }
}#EndFunction Get-Permission

Now for a simple example. Remember to supply a $ instead of a : after the drive letter, as this is designed to run remotely.

#Example of Get-Permission
Get-Permission -ComputerName "COMP1","COMP2" -Folder "C$\logs\SQL"
Now that you have verified the permissions list, you might need to make some adjustments. This set command will allow you to change $Access and $Rights for a specific $Account with minimal effort across your domain.
Function Set-Permission {
  [CmdletBinding()]
  Param (
    [string[]]$ComputerName = $Env:COMPUTERNAME,
    [Parameter(Mandatory=$true)][string]$Folder,
    [Parameter(Mandatory=$true)][string]$Account,
    [string]$Access = "Allow",
    [string]$Right = "FullControl"
  )
  Process {
    $ComputerName |
    ForEach-Object {
      $Server = "$_"
      $Acl = Get-Acl "\\$Server\$Folder"
      $Rule = New-Object System.Security.AccessControl.FileSystemAccessRule("$Account","$Right","ContainerInherit,ObjectInherit","None","$Access")
      $Acl.SetAccessRule($Rule)
      Set-Acl "\\$Server\$Folder" $Acl
      Write-Verbose "Permission Set for \\$Server\$Folder"
    }
  }
}#EndFunction Set-Permission

And here is a quick example of how to execute the function. This can be used to allow or deny rights to the folder.
#Example Set-Permission
Set-Permission -ComputerName "Comp1","Comp2" -Folder "C$\logs\sql" -Account "Domain\ServiceUser" -Access "Allow" -Right "FullControl"

Recovery Model Comparison

This simple blog will cover the bulk of information you’ll need to make a fully informed decision on which Recovery Model to choose for your SQL Server databases. I know, that was horrible, but I couldn’t resist. Recovery Models are normally considered one of the more basic things to know about SQL Server, but refreshers are never a bad idea. Furthermore, if you are like me, you haven’t actually dealt with the Bulk Logged model much.

Simple Recovery Model

The simple model constantly reclaims log space in order to keep storage requirements minimal. Transactions are still written to the log; however, once a transaction completes and the data is written to the data file, the transaction log is truncated. This keeps the log small and allows the space to be reused.

While this model takes the least amount of planning and work to maintain, it also provides the least protection. You cannot perform transaction log backups, so any changes since your last backup will be lost in the event of a disaster. There is no way to recover to a specific point in time, and high availability solutions cannot be implemented. This model is best used for unimportant or test data. Data that rarely changes, or that would have no impact if changes were lost between backups, is also a good candidate for Simple recovery.

Supported Backups:

  • Full
  • Differential

Unsupported Features:

  • Log shipping
  • Database Mirroring
  • Always On Availability Groups
  • Point-in-time Restore

Bulk Logged Recovery Model

This model performs minimal logging of bulk operations in order to reduce the size of the transaction log, and it does support the use of transaction log backups. In fact, it's essentially identical to the Full Recovery model except that bulk operations are only minimally logged. Even so, you shouldn't use this model long term; switch back to Full once your bulk operations are done.

Bulk operation examples:

  • Bulk Imports (BCP, BULK INSERT)
  • SELECT INTO
  • Index creation and rebuilds (CREATE INDEX, ALTER INDEX REBUILD)

Point in Time recovery is not supported for log backups that contain bulk-logged operations, and data can be lost if the transaction log is damaged or if bulk-logged operations have occurred since the last log backup (since those are only minimally logged). This recovery model is best used during index maintenance or when inserting massive amounts of data. Your transaction log won't blow up in size, and you'll be able to flip back to Full Recovery easily after your maintenance is complete (more on that later).

Supported Backups:

  • Full
  • Differential
  • Transaction Log

Full Recovery Model

With the Full Recovery model, you can prevent data being lost due to damage, and you finally have the opportunity to restore to a specific point in time. All transactions are stored in the log until a log backup occurs and truncates it. If you fail to schedule regular log backups, your transaction log will grow until storage becomes a major problem. After a transaction completes, the data from the log is then written into the data file.

If the tail of the log is damaged, you can still lose the changes made since the last transaction log backup. This is the most appropriate recovery model to use for production data, as it's the only one that provides point in time recovery.

Supported Backups:

  • Full
  • Differential
  • Transaction Log

Changing Recovery Models

The command to flip recovery models is very easy; just issue an ALTER DATABASE command on the desired database and set it to the desired model.

USE [Master];
ALTER DATABASE [YourDatabase] SET RECOVERY FULL; --Or SIMPLE or BULK_LOGGED

Flipping between recovery models requires a few considerations, which I've summed up below.

Full to Bulk Logged: Backup the log for a last recovery point, then switch to Bulk Logged Recovery; backup the log again after the switch.
  Backup Log -> Switch to Bulk -> Backup Log

Bulk Logged to Full: Switch back to Full Recovery after Bulk Logged operations complete; then backup the log.
  Switch to Full -> Backup Log

Simple to Full/Bulk Logged: Switch to Full or Bulk Logged Recovery, then perform a Full or Differential backup to start the log chain. Schedule or reenable log backups. The change takes effect after the first backup.
  Switch to Full/Bulk -> Data Backup -> Backup Log

Full/Bulk Logged to Simple: Disable log backups, breaking the log chain. Switch to Simple Recovery, then perform a Full or Differential backup.
  Disable Log Backups -> Switch to Simple -> Data Backup
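
As a sketch of the Full to Bulk Logged round trip described above (the database name and backup paths are placeholders, not from any real environment):

```sql
USE [master];

-- Last recovery point before the switch
BACKUP LOG [YourDatabase] TO DISK = N'D:\Backup\YourDatabase_PreBulk.trn';

ALTER DATABASE [YourDatabase] SET RECOVERY BULK_LOGGED;

-- ...perform index maintenance or bulk imports here...

ALTER DATABASE [YourDatabase] SET RECOVERY FULL;

-- Re-establish point in time recovery after switching back
BACKUP LOG [YourDatabase] TO DISK = N'D:\Backup\YourDatabase_PostBulk.trn';
```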


SQL Server Troubleshooting: Token-based Login Failure

In continuation of the forced mirroring failover procedure I posted last week, this post describes another level of pain you may encounter.

After forcibly failing a mirroring session to the secondary server, users were unable to connect to the SQL Server. The SQL Error Log was full of the same error message.


Login failed for user. Reason: Token-based server access validation failed with an infrastructure error.


Our SQL Server uses Windows Based Authentication so that was a major hint. The solution was actually incredibly easy. Originally I assumed that an account was locked out or perhaps missing from the mirror server – who knew how long ago everything had been correctly synched.

There are two likely solutions to this issue.

UAC is blocking the connection attempts

I find this to be a less likely scenario, and since changing UAC settings requires a server restart, I highly suggest testing the next method first. It will probably solve your problem without a restart. If the issue is UAC, the server is probably a recent setup; otherwise you would have noticed this before now.

Recreate the User Account

More likely, the account in question needs to be dropped and recreated in SQL Server. If the user connects through SQL using a group or service account, you'll need to track down the appropriate account that they are a member of in Active Directory. Ask your AD administrator to look up the groups which the user is a member of if you don't have access yourself, or intuitively already know the answer.

Once you have the correct account, you'll need to take note of the existing permissions via a screenshot, by scripting the login with a CREATE To statement, or by just remembering everything. Delete the existing account in the server-wide Security as well as the Database-level Security. Deleting an account in server Security does not cascade to the Database level; in fact, SQL Server will warn you of this when you attempt the delete.
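
A minimal sketch of the drop and recreate, assuming a hypothetical Windows group and database name (script your real permissions first and reapply them afterwards):

```sql
USE [master];

-- Drop the stale login at the server level
DROP LOGIN [Domain\ServiceGroup];

-- Drop the orphaned database user too; this does not happen automatically
USE [YourDatabase];
DROP USER [Domain\ServiceGroup];

-- Recreate the login and user, then reapply any scripted permissions
USE [master];
CREATE LOGIN [Domain\ServiceGroup] FROM WINDOWS;
USE [YourDatabase];
CREATE USER [Domain\ServiceGroup] FOR LOGIN [Domain\ServiceGroup];
```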

Now you’ll just need to recreate the account providing the correct permissions and any relevant mappings. The token-based server access validation errors should be solved now.

SQL Server Forced Mirror Failover

Failing a SQL Server Mirror over is normally an incredibly simple process. If you use the GUI, you can accomplish the failover in moments. However, it is not always a planned event. Yesterday we had just such an issue.

If your principal server goes down and you do not have a witness setup for automatic failovers, you will need to perform a forced manual failover on the mirror server. There is no way to do this via the GUI, scripts only. Opening the database properties on a mirror results in a greyed out pane of information and zero options.


Not very helpful when this is all you have to work with.

Forced Failover Method 1 (PREFERRED)

In a perfect world, you will be able to connect to the mirror database and quickly run the following script, substituting in your database name. This will force the mirror to become the principal, and voila, you are up and running again. If you use an alias in Active Directory, just make sure the alias is changed to point to the new server.
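
For reference, the standard forced failover command, with a placeholder database name:

```sql
-- Run on the mirror server; forces the mirror to become the principal
ALTER DATABASE [YourDatabase]
SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;
```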


However, in the less perfect world we live in, you may receive the following error:

Cannot alter database because database is not in the correct state to become the principal

This error results for one of two reasons:

a. Full Safety Mode

If the database is not in Full Safety Mode, otherwise known as Synchronous commit, the failover will error. If your mirror database is in a remote location, it’s probably not in Full Safety. Asynchronous mirrors are High Performance for data transfers, but Low Performance when it comes to forced failovers. Try running the following to force the database into Full Safety mode, which will then allow the failover to occur.
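
For reference, the standard command to force Full Safety (synchronous commit) mode, again with a placeholder database name:

```sql
-- Switch the mirroring session to synchronous commit
ALTER DATABASE [YourDatabase]
SET PARTNER SAFETY FULL;
```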


However, this script will also fail if the principal database is still available to SQL, and it might fail regardless. It did for me, so I had to resort to Method 2 below.

b. Principal Database can still be reached by Mirror

If your principal database is still accessible by the SQL mirror, but having too many connectivity issues for you or users to actually reach it, you may have to shut down the principal in order to run commands from the mirror database. This is a bit of a strange scenario, but if your principal cannot be connected to, you’ll need to failover somehow. In this situation, someone will have to make the call of what to do.

Note: If you failover using the forced manual failover script, you will be able to manually resume mirroring after the principal server becomes available again. All your mirroring settings will be saved for the database, making failback relatively painless.

Break Mirror Method 2 (LAST RESORT)

If Method 1 of forcing the failover does not work, even after trying to set the database to Full Safety Mode, you can run the following script to break the mirroring session and restore the mirror with recovery. This is the quick and dirty method.
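
For reference, the standard break-and-recover commands, with a placeholder database name:

```sql
-- Destroy the mirroring session, then bring the former mirror online
ALTER DATABASE [YourDatabase] SET PARTNER OFF;
RESTORE DATABASE [YourDatabase] WITH RECOVERY;
```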


If you destroy the mirroring session in this manner, you will not be able to resume mirroring. All the settings will be erased and you will have to recreate mirroring from scratch. You’ll have a usable database though, and if the principal died suddenly, you shouldn’t actually have much, if any, data loss.

PowerShell: Get Network Adapters with WMI

I thought I had already blogged about this topic, but evidently not. I was in the middle of finishing another post with the intention of linking back to this one, when I realized it did not actually exist.

The script I’m about to explain has built-in equivalents in PowerShell now, but only if you are using Windows 8 or Server 2012. I don’t have that option most of the time professionally, so I can’t use Get-NetAdapter or Get-NetIpAddress. Too bad; they would solve some of my problems, though I’d still have to splice those two commands together to get all the results I want. The work-around on older systems requires combining two WMI classes instead.

The snippet below will give you all the properties each WMI query can return. I need to match IP addresses with the name of the adapter. NetworkAdapterConfiguration can get me the IP, but NetworkAdapter has the name. After reviewing the properties of each class, you might say, “but NetworkAdapter does have the Name property!” Sure it does, but what it actually returns is more like the product model information. I want the personalized name of the adapter.

Get-WmiObject Win32_NetworkAdapter | Get-Member -MemberType Property

Get-WmiObject Win32_NetworkAdapterConfiguration | Get-Member -MemberType Property

The script I have to run is moderately complex. First I’ll get the adapter information and assign it to a variable. I’m excluding blank Connection IDs and Local Area Connections; I don’t care about default stuff. If you do, just drop the Where clause. I then perform a ForEach loop to get the name of each adapter. The IP address is matched using the -Filter parameter and passing in each adapter’s DeviceId, the unique adapter id that is available in both classes. Without that embedded second WMI query for the IP with the id filter, any data returned couldn’t be trusted.

The results are saved to a string variable which creates a custom table. This is necessary to get the data from two different queries in a single, readable result set.

$Adapters = Get-WmiObject Win32_NetworkAdapter | Where-Object { $_.NetConnectionId -NOTLIKE '' -AND $_.NetConnectionId -NOTLIKE 'Local Area Connection*' }

$Info = ForEach ( $Adapter in $Adapters ) {
  $Results = '' | Select-Object ComputerName, AdapterName, MacAddress, IpAddress
  $Results.ComputerName = $Adapter.SystemName
  $Results.AdapterName = $Adapter.NetConnectionId
  $Results.IpAddress = (Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "Index = $($Adapter.DeviceId)").IpAddress
  $Results.MacAddress = $Adapter.MacAddress
  $Results
}
$Info | Format-Table -AutoSize

There are a lot of ways to display and format data with custom objects and tables, and I’ll discuss a few of those in my next post.

Troubleshooting Policies and xp_dirtree

Today we had a policy failure on a newly built SQL server. The policy verifies that every SQL server has a backup file stored on a remote directory. Simple enough, but in the past, this policy has been prone to permissions errors.

In case you would like to implement something like this on your system, the dumbed-down condition of the policy is below:

ExecuteSql('NUMERIC', '
DECLARE @DBCount INT
DECLARE @File VARCHAR(250) = ''\\BackupServer\'' + @@SERVERNAME --Placeholder path; point at your backup share
--Get DBs on server (excluding TempDB)
SELECT  @DBCount = COUNT(*) FROM sys.databases WHERE database_id <> 2
--Insert DirTree results into a table variable to return a numeric result
DECLARE @Check TABLE (SubDirectory VARCHAR(250), Depth BIT, [File] BIT)
INSERT  @Check
EXEC master.dbo.xp_dirtree @File,1,1
--Numeric result for easy policy evaluation 1=Pass if( # files = # dbs )
IF((SELECT COUNT(*) FROM @Check) = @DBCount)
 SELECT 1
ELSE
 SELECT 0
')

Policy Explanation

The dumbed-down policy checks a backup server, which has a folder named for each SQL server, to see if it has the correct count of backups.  Each folder should have a backup file for each database, except for TempDB. This isn’t exactly how we do it, but it gets my point across and it provides a good example of how a moderately complex policy works using ExecuteSQL. There aren’t a lot of examples on the web, so I try to provide as many as I can.

The Troubleshooting

The new server wasn’t processing the policy correctly. My first idea was, “Well, there are no files!” Easy! Problem solved!  Or not…the files were in the folder, and the number was even correct. So it wasn’t the backups or the copy failing. That meant something was wrong with the permissions; the problem was, what?

I verified the logins were all present in SQL by reviewing Security -> Logins through Management Studio. This was a new server, so I assumed it would be something simple like that. The account in question was the SQL Server service account.  Policy Based Management and xp_dirtree use this account. The account was in SQL and it had all the proper permissions. OK, next thing.

The folder where all the files are stored is a shared folder, so I went into the Properties of the folder and verified the account that was shared. Everything looked right there. Every time I’ve had an issue with the permissions of xp_dirtree reading servers, it’s been because the service account in the shared folder was wrong, so my next stop was checking out Active Directory.


Sharing a folder makes the file path easier. Just make sure your service account has permission here.

Drilling down through Active Directory, I found the account used for sharing the folder. This account had every SQL service account as a member. It looked fine in AD, it looked fine on the Shared folder. Everything worked perfectly except on this new server. What was wrong?!?

At this point, I just decided to throw a hammer at it and throw in extra service accounts until hopefully one worked. Eventually I added the SQL Agent account. Suddenly, the policy passed. The new server could read the backup directory! So I took the normal account out of the permissions. Now everything failed except this new server. Wait, what…?

I logged into the new server and loaded up SQL Configuration Manager…


When the server was set up, someone switched the accounts. SQL Server was running using the SQL Agent service account. SQL Agent was running using the SQL Server service account. I restarted each with the correctly corresponding service and account. I then fixed the permissions for the shared backup directory back to the original setup.

Voila! Everything worked perfectly!

The Moral of the Story

Make sure new servers and settings are configured properly before trying to fix things that weren’t broken before the new stuff existed. I’m sad to say it took me so long to understand the problem, even with other DBAs chiming in. None of us thought about checking the Configuration Manager…that, or they assumed I had already checked it.

Sometimes, it’s the little things that get you…OK, normally it’s the little things that get you.

PowerShell: Detect and Uninstall Java FAST

I’ve been building out a remote, silent Java updater for a while now. Making sure it works perfectly in my environment is not an easy task. I’m 90% there; I just have to get over the last few hurdles.
One of the major problems was uninstalling Java efficiently. You can Google scripted Java uninstalls, and you’ll probably find the same recycled code over and over again. (I’ve provided it below, but don’t touch it.) The problem is, this code is terrible.

Let me explain. The code works as intended. That’s not why it’s terrible. Why am I complaining then? It uses a very, very bad WMI class. What’s WMI you ask? Not the problem. The class is. It’s a huge waste of time. How bad is it?


When even Bing thinks it’s bad. It’s bad.


Get-WmiObject -Class win32_product would have been a very useful command; however, it’s by far the slowest and most inconsistent thing I have ever used. Worse, you can’t fix it. Use a WHERE clause, use a -Filter, use wildcards, use explicit values: it doesn’t matter, it’s still slower than molasses. If you plan to use this command, you might as well just go to lunch after you hit execute.

If you didn’t check out those links above, let me summarize:

Invoking win32_product causes a computer to enumerate through the entire list of installed programs and validates each install, causing potentially extremely long wait times before returning results. Regardless of where clauses or filters, every program is always checked in this way.

In testing, the command takes 20-40 minutes to return results on a production server. Even on my computer it takes at least 5 minutes, normally much longer. I have run this command and gotten results in as fast as 11 seconds…after running it three times in quick succession. Running it again five minutes later took three minutes again, and that was my best case scenario. That’s just unacceptable. Here’s that terrible code with a timer included. Take my word for it and just skip past this.

Win32_Product (Very Slow – Just Skip It!)

$Start = Get-Date
$Java = Get-WmiObject -Class win32_product | WHERE { $_.Name -LIKE "Java [0-9]*" }
$Current = (Get-Date) - $Start
"Execution Time: $($Current.Hours):$($Current.Minutes):$($Current.Seconds)"

If you ran that, I’m sorry you wasted away half your life waiting for it to complete, but I did warn you.

My Method (Fast but Wordy)

I came up with a method to uninstall through registry guids. Detecting the java installation guid goes from 30 minutes to <1 second in this manner. The entire installation process was cut by at least 98%, over half an hour down to roughly 25 seconds! I can work with that! The code is a lot longer though.

$Start = Get-Date
Function Get-Uninstaller {
  $ErrorActionPreference = "SilentlyContinue"
  $Guids = Get-ChildItem -Path 'Registry::HKLM\Software\Microsoft\Windows\CurrentVersion\Uninstall' | ForEach-Object { $_.Name }
  $Products = ForEach ($Guid in $Guids) {
    $Results = "" | Select-Object DisplayName, IdentifyingNumber
    $Results.DisplayName = Get-ItemProperty -Path $($Guid.Replace("HKEY_LOCAL_MACHINE","Registry::HKLM")) | ForEach-Object { $_.DisplayName }
    $Results.IdentifyingNumber = Get-ItemProperty -Path $($Guid.Replace("HKEY_LOCAL_MACHINE","Registry::HKLM")) | ForEach-Object { $_.UninstallString.Split("X")[1] }
    $Results
  }
  $Products
}#EndFunction Get-Uninstaller

$Java = Get-Uninstaller | Where-Object { $_.DisplayName -LIKE "Java [0-9]*" }
$Current = (Get-Date) - $Start
"Execution Time: $($Current.Hours):$($Current.Minutes):$($Current.Seconds)"

Win32Reg_AddRemovePrograms (Fast, but last minute discovery)

But wait, there’s more! The documentation on how broken win32_product is provided me with an even easier method. I didn’t find it until researching some links for this blog though. I was very skeptical about this code until I ran it, because I never saw anyone suggesting its use while searching for a PowerShell uninstaller. It’s actually just as instantaneous as my registry read method!

########## GET INSTALLED JAVA VERSION - Alternative
$Start = Get-Date
$Java = Get-WmiObject -Class win32reg_addremoveprograms | WHERE { $_.DisplayName -LIKE "Java [0-9]*" }
$Current = (Get-Date) - $Start
"Execution Time: $($Current.Hours):$($Current.Minutes):$($Current.Seconds)"

Uninstall Script

Whichever method you choose, you’ll still need to run the uninstaller afterwards. Just change the GUID identifier appropriately for whichever script you chose; that is, $App.IdentifyingNumber or $App.ProdID. I’m logging to a Temp\Java folder, so make sure that folder exists, or just remove the /l and everything after it if you don’t care about logging. The uninstall runs silently with no user interface and suppresses reboots.

ForEach ($App in $Java) {
  $ArgumentList = "/x $($App.IdentifyingNumber) /qn /norestart /l*vx C:\Temp\Java\UninstallLog.log"
  Start-Process -FilePath "msiexec.exe" -ArgumentList $ArgumentList -Wait -PassThru
}


PowerShell: Get SQL Servers From AD

Today I’d like to share a quick script that is useful when working with a new domain of SQL servers. It’s also good for anyone who doesn’t know their environment very well or suspects there might be some hidden or rarely used servers. There are probably many ways to do this scattered around the internet, but this method is easy and as long as you have a properly designed Active Directory, you’ll be good to go.

The script isn’t anything special, but it will require the Active Directory PowerShell module. You’ll need to run the script on an existing terminal server or at least be able to successfully import the module. Importing a module is very straightforward and this one should already be available on a terminal server.

Import-Module ActiveDirectory

Once you have the module loaded, you can run the following script.

Get-ADComputer -Filter { Description -LIKE "*sql*" } -Properties Description | SELECT Name, Description

Let’s break this down since this is a short post. First of all, we are running the Get-ADComputer cmdlet. That’s going to search through your Active Directory for a list of computer names. The -Filter parameter is looking for any Description that contains the word SQL. If you want to instead search for a name prefix, you can change the filter to { Name -LIKE "Prefix*" }. The -Properties Description parameter will populate the description of the computers from AD. Without that piece, your descriptions will be blank even though we are specifically selecting them. Finally, we pipe the results to a SELECT since I’m only interested in the name of the computers and their descriptions.

This script relies on your Active Directory having useful descriptions. If your coworkers have been lazy and left those blank or put in less than useful information…it won’t be very effective.

PowerShell: Get Adapter Binding Order

In my environment, I have to set binding orders on all new sql servers, and then ensure that those binding orders don’t magically change (trust me, it happens, I just don’t know why). If you aren’t sure why adapter binding orders matter, Microsoft provides a short explanation and walkthrough on manually setting them, “Windows attempts to communicate with services by using network protocols in the order specified in Network Connections. To make network communications faster, you can reorder the protocols in this list from most used to least used.”

Set Bind Order Manually
Manually, you can get to your adapters via the Control Panel -> Network & Sharing Center -> Change Adapter Settings. At the adapter list, you then hit the super secret key Alt. This will display the otherwise hidden toolbar at the top of the explorer window. Select the Advanced Menu -> Advanced Settings. The top of that dialogue box will display the adapter binding order in descending order of priority.


Alt displays the toolbar like magic. I was so excited when I learned that.

Script It!
My script is a function that lists the binding order by each adapter’s friendly name. Displaying the name rather than the model information requires a few combined commands that reference each adapter by GUID. If you have ever tried to grab adapter names, you’ll understand why I saved a function. It’s a pain.

I save all the adapters in bind order into an array and then return that value. The idea is simple, it just takes some variable manipulation to get there.

Function Get-BindOrder {
  $Binding = (Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Linkage").Bind
  $BindingOrder = @()
  ForEach ($Bind in $Binding) {
    $DeviceId = $Bind.Split("\")[2]
    $Adapter = (Get-WmiObject Win32_NetworkAdapter | Where-Object { $_.GUID -eq $DeviceId }).NetConnectionId
    $BindingOrder += $Adapter
  }
  Return $BindingOrder
}#EndFunction Get-BindOrder

Remote & Multi-Server Calls
If you need information for your entire network, you can quickly wrap the function in our familiar friend, Invoke-Command.

$Computers = "YourListHere"
Invoke-Command -ComputerName $Computers -ScriptBlock {
    ## Paste Function Script Here ##
    Get-BindOrder
}

One method of setting the Binding Order via PowerShell requires hitting a few places in the registry and can be accomplished using Set-ItemProperty. It’s a bit of a pain though, so that’s a story for another day.
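
Still, a rough sketch of just the Tcpip\Linkage piece (hypothetical: it assumes reordering the multi-string Bind value is sufficient for TCP/IP binding priority, and the index chosen is arbitrary):

```powershell
# Read the current bind order
$Linkage = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Linkage"
$Bind = (Get-ItemProperty $Linkage).Bind

# Move a chosen entry (index is hypothetical) to the top of the list
$Preferred = $Bind[1]
$NewBind = @($Preferred) + ($Bind | Where-Object { $_ -ne $Preferred })

# Write the reordered multi-string value back
Set-ItemProperty -Path $Linkage -Name Bind -Value $NewBind
```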

Collection Set Failure Troubleshooting

My shop uses a lot of monitoring. In fact, we have tried, are using, or have plans to use just about every monitoring feature built into SQL Server. Some of the monitoring overlaps in purpose, and some of it steps on each other’s toes. We get useful data, but we’ve also discovered a lot of unexpected errors. One such issue was uncovered this month when we were updating servers to a new cumulative update.

A little background first
We use Collection Sets, a more obscure feature of SQL Server that you may have never even heard of. I admit, I did not remember that it existed before one of my colleagues suggested trying it. Collection sets essentially provide data audits and have helped us track down some anomalies in the past. They run nearly constant queries across all of our servers, then report back to the CMS to catalog the data. We’ve altered the jobs a bit, most importantly offsetting the collection sets’ upload schedules to avoid latency and blocking issues.

Well, after installing a Cumulative Update this month, all of the collection sets broke. Failures started flowing in stating that the data uploads were not processing and our inboxes exploded. We had to figure out what the problem was and then fix over a hundred servers.

Error Message
…Component name: DFT – Upload collection snapshot, Code: -1073450901, Subcomponent: SSIS.Pipeline, Description: “RFS – Read Current Upload Data” failed validation and returned validation status…

The first step was to determine how to get the collection set jobs working correctly again. We had to stop each collection set, then stop any collection upload jobs that were still running.


That alone wasn’t enough though, we had to then clear the cache files as well. These are stored in a folder on each server running the collection sets. You can find your cache location in the Object Explorer under Management -> Data Collection -> Right-click Properties.


Deleting the cache seemed unnecessary to me at first, but when I tried skipping it, the collection still failed after restart.
We had to fix a lot of servers, and it would have taken hours to do this manually. So I scripted all the changes. I needed it done quickly because I was planning to get Korean food with a friend on the day of the break.

You can do this all through PowerShell using Invoke-SqlCmd, but I find that process is very slow to make an initial connection, limits feedback, and I just needed this done fast.

-- SQL
-- Stop Collection Sets
EXEC msdb.dbo.sp_syscollector_stop_collection_set @name = 'Query Statistics'
EXEC msdb.dbo.sp_syscollector_stop_collection_set @name = 'Server Activity'

-- Stop Collection Upload jobs - they keep running after collection stops
-- Change job names as necessary
EXEC msdb.dbo.sp_stop_job @job_name = 'collection_set_3_upload'
EXEC msdb.dbo.sp_stop_job @job_name = 'collection_set_2_upload'
# PowerShell
# Create a file with computer names to connect to, or supply the names with commas
$Computers = (Get-Content C:\Users\$Env:UserName\Desktop\Computers.txt)
# Provide the path where you save your cache files
$Cache = "D$\Cache\CollectionSet"

$StartTime = Get-Date
ForEach ($Computer in $Computers) {
  $BeginTime = Get-Date
  (Get-ChildItem "\\$Computer\$Cache" -Recurse) | Remove-Item -Force
  "$Computer complete in $(((Get-Date) - $BeginTime).TotalSeconds) seconds."
}
"Total Runtime: $(((Get-Date) - $StartTime).TotalSeconds) seconds."
-- SQL
-- Start Collection Sets
EXEC msdb.dbo.sp_syscollector_start_collection_set @name = 'Query Statistics'
EXEC msdb.dbo.sp_syscollector_start_collection_set @name = 'Server Activity'

The PowerShell code should run quickly, clearing the cache on each server in turn without any manual intervention. I included the computer name and run time as feedback so you don’t sit wondering how far along the script is. I had no feedback in my original script, so I sat fretting over how long it would take and how long I’d have to wait to go to lunch.