Feb 12, 2021

Update Feb 11, 2021

In my first blog post, Part 1, I showed you how to set up Windows Deduplication on Server 2016. This second part shows my production results and what I gained in space savings.

I estimated 18.5% in savings in the first post. However, I got 23% back, which I think is a good result.

I enabled Windows Deduplication on my production file server in Part 1.

Here is the drive space before Windows Deduplication was run.

Every so often I would run Get-DedupStatus to get an idea of what I was getting back.
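Get-DedupStatus comes with the Deduplication PowerShell module on Windows Server; a quick check might look like this (the E: drive letter is just an example, not my actual volume):

```powershell
# Quick savings check; substitute your own deduplicated volume.
Get-DedupStatus -Volume "E:" |
    Format-List Volume, Capacity, FreeSpace, SavedSpace, OptimizedFilesCount, LastOptimizationTime
```

SavedSpace and FreeSpace are the headline numbers to watch over time.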

After deduplication was run on the 4 TB drive. As you can see, I got back some good space.

Now I wanted to take it one step further and shrink the VHDXs (running it manually this time around).
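For bulk shrinking I point people at the Invoke-FslShrinkDisk script linked further down; if you want to compact a single container by hand, a rough sketch with the Hyper-V module looks like this (the path is an example, and the disk must not be in use):

```powershell
# Manually compact one FSLogix container (sketch; path is an example).
# The VHDX must not be mounted by a logged-on user while you do this.
$vhd = "\\fileserver\Profiles\jdoe_SID\Profile_jdoe.vhdx"
Mount-VHD -Path $vhd -ReadOnly
Optimize-VHD -Path $vhd -Mode Full
Dismount-VHD -Path $vhd
```

Optimize-VHD only reclaims space when the VHDX is detached or attached read-only, hence the mount/dismount wrapper.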


I got back just a tad over 1 TB overall. I take that as a win for now. Soon, very soon, I will be moving a lot of this to Nutanix Files, and then we can see the difference, if any 😊. Those who use Nutanix AHV and Nutanix Files understand the awesomeness it offers.

Next-day checks on February 12, 2021

Let’s check on the environment and confirm logins are not slower, and that the schedule is set correctly so nothing is running during the day.

My schedule is at 10pm = Check

I can confirm it’s not running, and can see when it last ran = Check
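Both checks are one-liners from the Deduplication module, roughly:

```powershell
Get-DedupSchedule                       # shows the optimization window (mine is 10pm)
Get-DedupJob                            # no output means no job is running right now
(Get-DedupStatus).LastOptimizationTime  # when the last optimization completed
```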

Monitoring the event log

“Monitoring the event log can also be helpful to understand deduplication events and status. To view deduplication events, in File Explorer, navigate to Applications and Services Logs, click Microsoft, click Windows, and then click Deduplication.”

“If the value LastOptimizationResult = 0x00000000 appears in the Get-DedupStatus |fl Windows PowerShell results, the entire dataset was processed by the previous optimization job”
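The event-log check can be scripted too; a sketch with Get-WinEvent against the Deduplication operational channel described above:

```powershell
# Pull the most recent deduplication events from the channel mentioned above.
Get-WinEvent -LogName "Microsoft-Windows-Deduplication/Operational" -MaxEvents 20 |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize
```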

Start time

Finished time

Citrix VAD (FSLogix) login times: 1.6 seconds to attach the FSLogix profile.

Checking in ControlUp, things seem to be as they were before this was enabled. If you don’t have ControlUp, you can use Director. You can also use a script to get some login measurements with just PowerShell:


Analyze logon duration script just got more powerful | Citrix Blogs

As you can see, these are just some built-in tools you can use. There is really no reason why you shouldn’t have this set up. Always test things and make sure you understand what it’s doing, so you don’t break anything.



Monitor and Report for Data Deduplication | Microsoft Docs

Cleanup Jobs

Cleanup unused FSLogix Office 365 Containers – dready’s Blog

Delete Inactive FSLogix Profiles using PowerShell – hallspalmer_Blog (wordpress.com)

Delete old Profiles script

GitHub – FSLogix/Invoke-FslShrinkDisk: This script will shrink a FSLogix Disk to its minimum possible size

Dec 22, 2020

I was recently tasked with moving some PVS targets/vDisks to a new PVS environment. There are many ways to achieve this, but this is how I migrate devices from one PVS farm to another. I am sure there are better ways to automate it; my blog just shows a simple way to get it done. Another way I have done it in the past was to add new PVS servers into an existing farm, go through the actions to make them the main servers, and of course decommission the old ones. This is a very simple task and most people know it already, but if not, these steps will help.

Open your PVS Console, and select the PVS vDisk, then click export.

Select the lowest version to grab the vDisk Chain

You will see a Filename.xml get created. This is a manifest file that instructs the destination PVS server how to import the versions.

For me, I like to create stores for all my different vDisks.

Give it a name

Check all the PVS servers that will contain the vDisk.

Create the folder in the given location where you are storing the vDisk, and validate the paths.

I leave the default write cache location where it is, as we still use the old legacy "cache on server" write cache.

Now I just copy the contents from the original server to the destination.
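The copy itself can be a simple robocopy of the vDisk chain plus the export manifest (the server and store paths here are examples, not my real ones):

```powershell
# Copy the vDisk chain and manifest to the new store (paths are examples).
robocopy "\\oldpvs\d$\Store" "\\newpvs\d$\Store" *.vhd *.vhdx *.avhd *.avhdx *.pvp *.xml /R:1 /W:1
```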

At this point I open up the PVS console and go to the vDisk Pool.

Right click and Import

Select the store you created earlier, select a PVS server that you copied it to, and click Search.

Or you can import it from the store, so you don’t have to select anything and take the chance of getting something wrong.

Click Add and close

Click OK after the vDisk is added.

Now you can see the vDisk.

For me, I use a dedicated VMware device for updating my master images and such. I am going to move that VMware device over to the new PVS server and update the subnet and MAC as well, to reflect a new subnet we just added.


Change the Port group and note the MAC

Now at this point I manually create the Device in the new PVS Server.

Add the new MAC, and update the vDisks area to the vDisk you imported.

Change the Type to Maintenance

Now, because I use the ISO BDM method, I need to update the BDM to point to the new PVS boot path.

If you have more than one new PVS server, make sure you copy the vDisk files to the second server as well; otherwise, when the target boots and the default load balancing sends it to the second server, it will tell you no vDisk was found.

Check and make sure the replication status is green on both servers

On the device, you will need to reset the machine account from PVS to recreate the domain trust password.

Now boot the maintenance machine up and it should stream from the new location. Found the vDisk!

Typically, for moving all the targets I would do something like this, then edit the MAC addresses so they align with the new subnet I changed to.

How to Migrate PVS Target Devices | | Apps, Desktops, and Virtualization (kylewise.net)

For this situation, I can now create all new devices that will use the same vDisk. I will create a new Machine Catalog/Delivery Group. You can also reuse the same one if needed. I use FSLogix, so the profile data will stay intact; the user will not even notice that the device name changed or that it moved to a different PVS server.

Oct 18, 2019

Update 10/29/19: Added search (username/userfullname) to the Sessions tab.

Studio always seems extremely sluggish to me when trying to navigate between different areas. I’m always waiting a few seconds for the refresh circle to go away every single time I navigate somewhere new.

Most of the time I just need to put a machine in/out of maintenance, perform a power action, or log off/disconnect a session. I found that PowerShell is MUCH faster, so I wrote a new little tool.

Requirements:
.NET 4.5
The Broker Admin PowerShell snap-in (available on the install media; alternatively, you can run this on the Delivery Controller itself and use localhost as the connection string)

You can select each server/desktop/session individually, or select multiple. Then simply right click and select the operation to perform. I have tested this on versions 7.15 through 1906. Note: There is no warning! When you select the action to perform it will just do it.
Here is the download link

Operations allowed (depending on state):
Machines – turn on/off maintenance, shutdown, restart, reset, poweroff
Sessions – Disconnect, Log Off
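Under the hood, these operations are one-liners with the Broker snap-in; for example (the controller address and machine/user names are placeholders):

```powershell
Add-PSSnapin Citrix.Broker.Admin.V2
$adminAddress = "ddc01.domain.local"   # or "localhost" on the controller itself

# Put a machine into maintenance mode
Get-BrokerMachine -AdminAddress $adminAddress -MachineName "DOMAIN\XA01" |
    Set-BrokerMachine -InMaintenanceMode $true

# Restart it
New-BrokerHostingPowerAction -AdminAddress $adminAddress -MachineName "DOMAIN\XA01" -Action Restart

# Log off a user's session
Get-BrokerSession -AdminAddress $adminAddress -UserName "DOMAIN\jdoe" | Stop-BrokerSession
```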

Mar 13, 2018

——–UPDATE 3/28/2018 ———-

Starting with FSLogix 2.8.12 you no longer need to do this.  Great work FSLogix!

From their release notes:

“• An issue with first time OneDrive installation has been resolved. The issue occurred when OneDrive was installed for the first time, and OneDrive syncing began. The FSLogix agent would not redirect the OneDrive files to the VHD/X container until the user logged off and then back on again. The FSLogix agent has been enhanced to detect that OneDrive is being installed, and now immediately begins redirecting the OneDrive files to the VHD/X.”

I have tested this multiple times and it works perfectly.

However, I DO still recommend you make the symlink to C:\OneDriveTemp as stated below.

Old post below


We have started to use FSLogix Apps to deliver OneDrive into our Hosted XenApp Desktops that we offer to our customers. This is a great tool that allows users to use the native OneDrive utility in their XenApp sessions. Why would you need this, you may ask? The reason is that OneDrive forces your OneDrive data to be saved in %LOCALAPPDATA%. If you’re using Citrix Profile Manager (UPM) or really any other profile tool, you would be stuck trying to sync that data during logon/logout, which isn’t really feasible. You could also just use local profiles, but then you would be saving all of the data across all of your XenApp servers. I have not figured out a way to trick OneDrive to save to a network location. It was even too smart for a Windows symlink.

This is where FSLogix Apps comes in. FSLogix Apps can mount a VHD for each user, and with its file system filter driver it’s able to determine what data is destined for OneDrive and move that data into the VHD. At that point there is only one copy of the OneDrive data, and you can roam it across all XenApp servers in your environment. It works very well and we like it a lot; however, there are a couple of important shortcomings I need to discuss. I will also show you how I worked around these issues with my own testing and scripting.


First: FSLogix Apps cannot pre-mount junction points. If you look at a user who has logged in before configuring OneDrive, you will see the following information:

One, the VHD is mounted fine.

However, look at the redirects:

You can see here that only the Outlook and Lync redirects are in place. I don’t see anything for OneDrive. Let’s see what happens when we configure OneDrive right now. Let’s look at the space on C:

So, look at OneDrive and how much space it thinks is free. The VHD configured for all users is 100GB.

I’m going to select 5.5 GB of “crap” and start a sync. Let’s see what happens.

So, it has taken up space on the C:. Well this isn’t good! With a couple users and a small amount of data you can fill up the C: on your XenApp server. Bad! Also, in this type of configuration where you are only using FSLogix Apps for OneDrive and NOT for profiles, you would be syncing this information back with your profile management solution.

So… What can we do? A couple of things to help this out. I wrote a script to pre-mount the junction points. Let’s take a look.

function Register-Redirect {
    [string] $TenantName = "VC3, Inc" # Enter your Tenant ID here
    try {
        # Ensure frx.exe is present in the PATH
        if (!([System.Environment]::GetEnvironmentVariable('PATH').Split(';') -contains "$($env:ProgramFiles)\fslogix\apps")) {
            Write-Verbose "frx.exe is not found in 'PATH'...Adding frx.exe to 'PATH'";
            $env:PATH = "$($env:PATH);$($env:ProgramFiles)\fslogix\apps";
        }
        # Pull only this user's redirects from 'frx list-redirects'
        [string] $frxOutput = & 'frx' 'list-redirects' | Where-Object { $_ -match "\\$($env:USERNAME)\\" } | Out-String;
        # If the OneDrive junction points are not already mapped, add them
        if ($frxOutput -notmatch '\\Microsoft\\OneDrive') {
            $Matches = $null; # Reset the automatic $Matches hashtable.
            # Parse the VHD's volume number out of an existing redirect
            $null = ($frxOutput -match '\\Device\\HarddiskVolume([1-9]{1,3})\\Skype4B\\User');
            $volume = $Matches[1];
            # Map the junction points into the same volume as the VHD.
            Write-Verbose -Message "Adding redirect => Source: \Device\HarddiskVolume2\Users\$($env:USERNAME)\OneDrive - $($TenantName) Target: \Device\HarddiskVolume$($volume)\OneDrive\User";
            frx add-redirect -src "\Device\HarddiskVolume2\Users\$($env:USERNAME)\OneDrive - $($TenantName)" -dest "\Device\HarddiskVolume$($volume)\OneDrive\User";
            Write-Verbose -Message "Adding redirect => Source: \Device\HarddiskVolume2\Users\$($env:USERNAME)\AppData\Local\Microsoft\OneDrive Target: \Device\HarddiskVolume$($volume)\OneDrive\UserMeta";
            frx add-redirect -src "\Device\HarddiskVolume2\Users\$($env:USERNAME)\AppData\Local\Microsoft\OneDrive" -dest "\Device\HarddiskVolume$($volume)\OneDrive\UserMeta";
        }
    } catch {
        throw "Unable to add redirect";
    }
}

Register-Redirect

sleep 5

& "C:\Program Files (x86)\Microsoft OneDrive\OneDriveSetup.exe"

All of this assumes the C: of your XenApp server is HardDiskVolume2!!!

What does this do? First, you put your tenant ID at the top. You need this because your folder is normally in the “OneDrive – TenantID” format. Next, it adds the path to frx.exe. Then it checks to see if the junction points are already mounted with `frx list-redirects` (frx will have no problem duplicating them, so I had to write this in). It uses logic to only pull your own username, so if you are logged into a XenApp server with a bunch of users, you only return your own information; by default it lists all users.

If the script sees that you don’t have them mounted, it will mount them for you. This is where it gets tricky. Look at the existing `frx list-redirects`

The section on the right is where it mounts the information into the VHD. Notice it is HardDiskVolume4. What I had to do was write a regex (with a bunch of help from BobFrankly/Atum – CitrixIRC) to extract the “4” from this output, so we can use it to mount the OneDrive locations in the same volume. THIS IS CRITICAL. Otherwise you wouldn’t be writing to the VHD, and that would be bad. So the script extracts the volume number, then runs `frx add-redirect` to add the junction points.

This is a function, so the last thing the script does is run the function, then launch OneDriveSetup.exe.

Let’s run this and see what it does. I added a pause so we could see it. You can see the two “Operation completed successfully!” messages, and the OneDrive Setup has launched.

On the server you see the junction points. Notice the addition of two OneDrive folders.

So, let’s setup OneDrive and see what happens. First, let’s look at the hard drive space again.

But wait, why does it still show the wrong amount of hard drive space? Bear with me.

Let’s sync 5GB of “crap” and see what happens. It shows the proper VHD size after the sync is complete. I’m not sure WHY it shows up incorrectly first, it just does.

Let’s check out the hard drive.


YOU SHOULD ONLY NEED TO RUN THIS SCRIPT ONCE! Once this process is done, FSLogix will handle the rest and you won’t have to do this ever again! I have this set up to launch through a shortcut in the user’s Start Menu as part of the “OneDrive Onboarding” process.

Second: If you have worked with OneDrive before, it’s great, but not perfect. Also, FSLogix doesn’t always perfectly clean up all of the junction points at logoff. You want to make sure they are gone at logoff, especially if something breaks during initial configuration of OneDrive. I have added a logoff script to kill the junction points. You will have to edit this for your own tenant ID!

frx del-redirect -src "\Device\HarddiskVolume2\Users\%USERNAME%\OneDrive - VC3, Inc"
frx del-redirect -src "\Device\HarddiskVolume2\Users\%USERNAME%\AppData\Local\Microsoft\OneDrive"

Third: if you know how OneDrive behaves, you can still have a problem. By default, when you sync with OneDrive, the FIRST place it writes files to is C:\OneDriveTemp\&lt;USERSID&gt;. After it’s done processing, it moves them into the OneDrive file location. It does this on a file-by-file basis, but again, if you had a bunch of users all syncing at the same time, you could still fill up the C: on the XenApp server!

Lastly: The final uber-hack I did for this was to create a symlink on the server to point C:\OneDriveTemp to a network location. This one actually works with OneDrive. In my case I pointed it to a share I created on the same volume I was pointing the FSLogix VHDs to.

mklink /d C:\OneDriveTemp <SomePathHere>

That’s all I have. These are the steps I had to go through to use FSLogix Apps with OneDrive in our production environment. Have fun!

Stay tuned. In a future post I will show you how to setup QOS for OneDrive so you don’t kill your datacenter’s bandwidth when you have a bunch of people uploading files at the same time.

Feb 21, 2018

I plagiarized David Ott’s script for migrating Citrix Profile Manager (UPM) profiles to FSLogix and adapted it for local profiles.

NOTE: This only works between like profile versions.  eg. You can’t migrate your 2008R2 profiles to Server 2016 and expect it to work.  See this chart.

This requires using frx.exe, which means that FSLogix needs to be installed on the server that contains the profiles. The script will create the folders in the USERNAME_SID format, and set all proper permissions.

Use this script. Edit it. Run it (as administrator) from the Citrix server. It will pop up this screen to select what profiles to migrate.

#### EDIT ME
$newprofilepath = "\\domain.com\share\path"

#### Don't edit me
$ENV:PATH="$ENV:PATH;C:\Program Files\fslogix\apps\"
$oldprofiles = gci c:\users | ?{$_.psiscontainer -eq $true} | select -Expand fullname | sort | out-gridview -OutputMode Multiple -title "Select profile(s) to convert"

# foreach old profile
foreach ($old in $oldprofiles) {

    $sam = ($old | split-path -leaf)
    $sid = (New-Object System.Security.Principal.NTAccount($sam)).translate([System.Security.Principal.SecurityIdentifier]).Value

    # set the nfolder path to \\newprofilepath\username_sid
    $nfolder = join-path $newprofilepath ($sam+"_"+$sid)
    # if $nfolder doesn't exist - create it with permissions
    if (!(test-path $nfolder)) {New-Item -Path $nfolder -ItemType directory | Out-Null}
    & icacls $nfolder /setowner "$env:userdomain\$sam" /T /C
    & icacls $nfolder /grant $env:userdomain\$sam`:`(OI`)`(CI`)F /T

    # sets vhd to \\nfolderpath\profile_username.vhdx (you can make vhd or vhdx here)
    $vhd = Join-Path $nfolder ("Profile_"+$sam+".vhdx")

    frx.exe copy-profile -filename $vhd -sid $sid
}
Dec 20, 2017

In my environment I have nearly 100 persistent MCS VDAs. Our developers, contractors, and even us IT people need the ability to change things on our VDI desktops (install/uninstall/set registry/create local IIS sites/etc.) and have them persist reboots. Luckily, the only software I have to maintain on these desktops is the VDA itself. That still means when Citrix releases a new version that I want to move to, I have to upgrade almost 100 individual desktops. The last time I had to do this I wanted to rip my hair out, and decided to write a script to automate the task. It has saved me TONS of time, so I want to share it in hopes that it can save someone else some time (and hair).

I wrote the script specifically for my MCS environment, which runs on XenServer, but with a little tweaking it can be modified for any environment.

How it works (the short version):

  1. Gets the computer(s) you set – can be manual input or a delivery controller query
  2. On each computer it will create 2 scripts, and 2 scheduled tasks
    1. The first script loads auto logon info into the registry
    2. The second script handles the vda removal and install
  3. One scheduled task runs once to run the first script and reboot
  4. The second scheduled task runs the second script at logon (first script sets up a user – in my case the local administrator – to logon automatically)
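The auto-logon piece (step 2.1) boils down to a few Winlogon registry values; a minimal sketch of what that first script does (the account and password are placeholders, and storing a plaintext password this way is the known trade-off of autologon):

```powershell
# Sketch of the auto-logon setup performed by the first script (values are examples).
$winlogon = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"
Set-ItemProperty -Path $winlogon -Name AutoAdminLogon    -Value "1"
Set-ItemProperty -Path $winlogon -Name DefaultUserName   -Value "Administrator"
Set-ItemProperty -Path $winlogon -Name DefaultDomainName -Value $env:COMPUTERNAME
Set-ItemProperty -Path $winlogon -Name DefaultPassword   -Value "P@ssw0rd!"   # placeholder
Restart-Computer -Force
```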

Most things are explained in the script, and this time I’ve also created a youtube video to go through it all/show it in action.



Oct 24, 2017

NOTE: This only works between like profile versions.  eg. You can’t migrate your 2008R2 profiles to Server 2016 and expect it to work.  See this chart.

I moved from UPM to FSLogix earlier this year and decided to write my own PowerShell script to convert the UPM profiles to .vhd. FSLogix has its own conversion process (which I didn’t find a whole lot of info on), but I decided to create my own.

What this script does:

  1. Gets a list of all UPM profile directories in the root path (that you supply) and displays them to you to select which one(s) you would like to convert via out-gridview
  2. For each profile you select:
    1. Gets the username from the profile path – you will have to edit this part for your environment… explanation in the script
    2. Use the username to get the SID (FSLogix profiles use the username and sid to name the profile folder)
    3. Creates the FSLogix profile folder (if it doesn’t exist)
    4. Sets the user as the owner of that folder, and gives them full control
    5. Creates the .vhd (my default is 30GB dynamic – edit on line 70 if you wish to change it) – if it doesn’t exist (if it does skip 7, 9-10)
    6. Attaches the .vhd
    7. Creates a partition and formats it ntfs
    8. Assigns the letter T to that drive (edit on line 73 if you wish to change that)
    9. Creates the T:\Profile directory
    10. Grants System/Administrators and the user full control of the profile directory
    11. Copies the profile from the UPM path to the T:\Profile directory with /E /Purge – if you are re-running this script on a particular profile it will overwrite everything fresh
    12. Creates T:\Profile\AppData\Local\FSLogix if it doesn’t exist
    13. Creates T:\Profile\AppData\Local\FSLogix\ProfileData.reg if it doesn’t exist (this feeds the profilelist key at logon)

Here is the script!
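For orientation, steps 5 through 8 above are roughly the following with the Hyper-V module (a sketch only, not the actual script; path and sizes are the defaults described above):

```powershell
# Rough sketch of steps 5-8 (Hyper-V module; path is an example).
$vhd = "\\server\share\jdoe_SID\Profile_jdoe.vhd"
if (!(Test-Path $vhd)) { New-VHD -Path $vhd -SizeBytes 30GB -Dynamic | Out-Null }
Mount-VHD -Path $vhd
$disk = Get-VHD -Path $vhd                       # DiskNumber is populated once attached
Initialize-Disk -Number $disk.DiskNumber -ErrorAction SilentlyContinue
New-Partition -DiskNumber $disk.DiskNumber -DriveLetter T -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false
New-Item -Path "T:\Profile" -ItemType Directory | Out-Null
```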

Jul 11, 2017

I have been using full desktops inside of XenApp forever. Lately I have been working on a project where I will be using only published apps. We are a CSP and a managed service provider that uses LabTech (now ConnectWise Automate) to control all of our systems. LabTech uses a great remote access product called ScreenConnect to connect to the systems. All of this works flawlessly inside of a full desktop. However, when I published LabTech as a seamless app (LTClient.exe), everything seemed to work fine except for ScreenConnect. I got a great Citrix engineer on the line who actually used all of the collected data I uploaded and troubleshot the issue. ConnectWise is actually a “ClickOnce” application which leverages dfsvc.exe to install and launch ScreenConnect. You can read this super exciting article on ClickOnce applications here.

Technically, neither Citrix nor Microsoft supports any of these ClickOnce applications. Kudos to the Citrix engineer for continuing to work the issue with me even though this is true. Luckily I had already built a 2012R2 RemoteApp environment and was able to get this working, to show Citrix this was not an application issue but a Citrix seamless app issue. During troubleshooting he pointed me to this interesting article on ClickOnce and XenApp 6.5 here. I’m on 7.6 LTSR CU3, but it’s still a good article on how this stuff works.

Anyway, after looking at the procmon information in the ticket, he quickly found that in the working scenario dfsvc.exe was calling ScreenConnect.WindowsClient.exe, where the seamless app was not. His “solution” was to simply run dfsvc.exe before calling LTClient.exe. Not really a “fix” but hell, it worked! So, I created a simple powershell script.

start-process "C:\Windows\Microsoft.NET\Framework\v4.0.30319\dfsvc.exe"
start-process "C:\Program Files (x86)\LabTech Client\LTClient.exe"

Lastly, I added dfsvc.exe to LogoffCheckSysModules per this CTX article.
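If memory serves, that value lives under Citrix’s wfshell key; a hedged sketch of the change (verify the exact key path against the CTX article for your version; on 64-bit OSes it sits under Wow6432Node):

```powershell
# Append dfsvc.exe to LogoffCheckSysModules (key path per the CTX article; verify for your version).
$key = "HKLM:\SOFTWARE\Wow6432Node\Citrix\wfshell\TWI"
$current = (Get-ItemProperty -Path $key -Name LogoffCheckSysModules -ErrorAction SilentlyContinue).LogoffCheckSysModules
$new = if ($current) { "$current,dfsvc.exe" } else { "dfsvc.exe" }
Set-ItemProperty -Path $key -Name LogoffCheckSysModules -Value $new
```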






Sep 30, 2016


XenApp 7.6

700+ Delivered (published) Applications

60+ Windows servers (2008 R2 and 2012 R2)



Recently I had a request to replicate 100+ applications from PROD to QA, using a QA server configured with identical applications and identical application locations/paths. Obviously, all paths to EXE files need to be the same for this to work 100% (unless I missed a memo and XA can now publish identical applications from various paths; as far as I know, this was not yet available in 7.6). If the QA server has some of the applications in different paths, not all is lost. You can still use this process and script to migrate a large number of applications between delivery groups and then modify the paths later in Studio.

While I could add a few more lines to my PoSH script to actually replicate each application one at a time, given the time it took to create this script and the ability to duplicate applications in Studio, that seemed unnecessary.

My goal was to replicate (the proper Citrix term would be duplicate) all applications and then assign them to another delivery group. Seems simple enough for a Citrix and PoSH guru. But those who are just getting their feet wet can use the following process to cut their delivery time to less than 5 minutes and go look for the end of the Internet, while telling the client it took hours 😉



1 – I will be duplicating all requested applications using ol’ Citrix Studio.

2 – I will run the script below to change the Delivery Group and the application folder as visible to the user (your mileage may vary, depending on your requirements).

This script/process is no rocket science, but it might help someone quickly replicate applications and migrate them to another delivery group, instead of publishing them all over again. Modify the script below according to your environment before running it. (WARNING: It is a fairly simple script, so review it and make sure you understand exactly what it is doing before executing it.) Also, I am no expert when it comes to writing PowerShell scripts, just another Citrix admin. So pardon me if you can make it better. Please do improve and share! I am all for helping fellow Citrix admins any way I can. Even if it’s buying a pint!


Step 1

Create an alternative application folder in Studio. For our scenario I am going to create a folder named “QA” inside the already created “Europe” folder.

Right-click on all applications that you need to replicate in QA (you can select multiple applications at once).

Click Duplicate Application 

Now select all duplicates and drag them over to QA folder.  In my scenario I will be dragging these to Europe\QA.

Step 2

The script below will prompt for the admin folder name where all the duplicates reside (that’s the new folder you just created; in my example it’s called Europe\QA). I repeat: do not select your production applications folder, as the script will move all your production apps to the new delivery group. Use the newly created QA folder where you moved all the duplicate applications in step 1 above.

It is assumed that new delivery group is already created.

Another item to note: there is an optional line (the Set-BrokerApplication -ClientFolder command) to change the client-side folder location of the newly created applications. This helps users identify whether they are running PROD or QA applications, and it also looks cleaner in StoreFront or WI. You can add more commands into the Foreach loop: things like modifying which users have access, changing the actual name of the application, and so on. My goal was to keep everything the same and just deliver from the QA server.


asnp Citrix*

$adminfolder = (Get-BrokerApplication -MaxRecordCount 10000).AdminFolderName | sort | select -unique | Out-GridView -Title "Select Admin Folder Name" -OutputMode Single
$applist = Get-Brokerapplication -AdminFolderName $adminfolder
$originalDG = (Get-BrokerDesktopGroup -MaxRecordCount 10000).Name | sort | Out-GridView -Title "Select Original Delivery Group Name" -OutputMode Single
$newDG = (Get-BrokerDesktopGroup -MaxRecordCount 10000).Name | sort | Out-GridView -Title "Select New Delivery Group Name" -OutputMode Single

Write-Host "Migrating all applications in $adminfolder`nFrom $originalDG Delivery Group to $newDG Delivery Group" -ForegroundColor Green

foreach ($app in $applist.ApplicationName){
    Write-host "Migrating $app"
    Get-BrokerApplication -ApplicationName $app | Add-BrokerApplication -DesktopGroup $newDG
    Get-BrokerApplication -ApplicationName $app | Remove-BrokerApplication -DesktopGroup $originalDG
    Get-BrokerApplication -ApplicationName $app | Set-BrokerApplication -ClientFolder "Europe\QA" #optional to show all applications inside QA folder and not in the same folder with production apps
}


BTW, using a similar Add-BrokerApplication command you can publish, or rather deliver, the same application from multiple delivery groups. Just comment out the Remove-BrokerApplication command and the application will launch from servers in PROD and QA, or any other DG of your choice. This comes in really handy when you have multiple DGs that host different applications, but some of the applications are identical: you can spread the load across multiple DGs. Think of it as the worker groups concept from XA 6.x with server groups. I had such a requirement that was easily achievable in XA 6.x, but not so much in XA 7.x. I paid for someone’s case of beer when they told me I could use the above-mentioned command to deliver the same application from multiple DGs, as it’s not clearly documented by Citrix. There’s a surprise…

That’s all folks. My first ever citrixirc blog.  Whoo-hoo!

Over and out.

Apr 12, 2016

Thank you, Microsoft, for changing fundamental things about your operating system with little or no regard for those of us running in an RDS/XenApp-type environment. Check out this TechNet article, which describes the changes that have been made.

“In  Pre-Win 8, apps could set the default handler for a file type/protocol by manipulating the registry, this means you could easily have a script or a group policy manipulating the registry. For example  for Mailto protocol you just needed to change the “default” value under HKEY_CLASSES_ROOT\mailto\shell\open\command”

More importantly, you were able to use Group Policy Preferences (GPP) to set these values inside a GPO. You could also use Item-Level Targeting (ILT) within the GPP. This means you could easily have users in SecurityGroupA run Acrobat Pro for .pdfs and users in SecurityGroupB run Adobe Reader for .pdfs. However, the TechNet article goes on to say that post-Windows 8,

“the registry changes are verified by a hash (unique per user and app) “

A little more digging tells us that the new hashing mechanism is also per-machine. This means that a hash would be different for each user, per app, per XenApp server. Very inconvenient and annoying. This also means that we cannot use the built-in GPP functions in Active Directory to set these file type associations. Also very inconvenient and annoying.

James Rankin did a great blog post on this subject as well. You can read it here. He did a great job of overviewing this issue and provided a solution using AppSense. This blog will show you how to do it with good old batch scripting and Group Policy. To be honest, I’m quite annoyed that I had to put together this “hack” to get around something that worked PERFECTLY FINE in 2008R2 with GPPs. If anyone has a more elegant solution, I’d love to see it. I’m not the best scripter in the world, but I’m very pragmatic. “It works.”

The first thing we want to do is create a logoff script to delete HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.pdf. However, because of that pesky Deny in the UserChoice key, we are unable to simply do this. So, I have made a simple “regini” command to overwrite the permissions on that key so that we can delete. In my environment, I have created FTA_PDF.txt in the NETLOGON directory. Inside this file is simply a registry value and some codes, which allow SYSTEM, Administrators, Interactive User, etc, FULL CONTROL of the key.

Next, I create a FTA_Delete.bat file in NETLOGON. This runs the “regini” command to change the permissions, then a “reg delete” command to delete the key.
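Reconstructed as a sketch (the original post shows the real file contents in screenshots; the NETLOGON path here is an example, and the regini input file must contain the ACE codes described above):

```powershell
# Sketch of what FTA_Delete.bat does (paths are examples).
# 1) regini rewrites the ACL on the UserChoice key using the codes in FTA_PDF.txt
regini.exe "\\domain.local\NETLOGON\FTA_PDF.txt"
# 2) now the key can be deleted
reg.exe delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.pdf" /f
```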

Then we need to create the script for the logon process. I’ve busted out good old “ifmember” for this one. It’s a simple executable that will check AD group membership. My script is pretty simple. It checks to see if a user is a member of the Acrobat group. If so, run the “reg add” to add the association to Acrobat. If not, it falls back to the default .pdf reader in this environment. In this case, it’s Adobe Reader. Keep in mind that you can add multiple programs and associations using this method. You can add Foxit here if you would like.

So, the sad fact of the matter is when I tried to set this as an actual “Logon Script” the functionality didn’t work. I had to set this in a User GPO: Administrative Templates\System\Logon\Run these programs at user logon. I’m also the type of person that hates to see a CMD window flash up on the screen right after I login. So, I wrote ANOTHER script called FTA_Starter.bat to call this script to run in a minimized window.

This is the script I added to the GPO.

So, I fought with this for a long time and it wasn’t working. I had to re-read James’s blog and found this little blurb at the bottom.

“Update – I built a third XenApp server, just to be sure, and this time the solution wouldn’t work until I removed the Registry key HKLM\Software\Classes\.pdf.”

This DID NOT WORK until the HKLM key was deleted from the servers. Do not forget this step.

I hope this helps you work through this issue in less time than it took me.