David Ott

Mar 16, 2015

UPDATE:  I found that running this script through the Netscaler Gateway failed due to differences in the web pages.  I re-wrote the script so it works internally, directly against StoreFront, and externally through the Netscaler Gateway.  The main caveat is that wficalib.dll doesn’t allow you to log off the session when going through the gateway.  Instead, I set it to look for any wfica32 processes prior to launching the application/desktop, compare that to the processes after launch, and kill the new process (disconnecting the session).  The script identifies the web page as a gateway if */vpn/* is in the path.  I also set it to log off of the StoreFront/Gateway page when the script ends.  If you were to run it back to back without logging off of the page, it wouldn’t find what it is looking for initially and would fail (because you’d probably already be logged on).  I may re-write this in the future with more functions to make it a bit shorter/cleaner, but as is it should work.
NOTE: I tested this with Netscaler 10.5… it may not work with previous versions as is, but if you read the script you should be able to figure out what needs to be changed.
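The process-diff workaround described above can be sketched roughly like this (the function name is mine, not from the actual script; wfica32 is the Receiver engine process):

```powershell
<# Rough sketch of the wfica32 process-diff trick - not the full script #>
function Get-NewProcessId {
# return the process IDs present in $After but not in $Before
param([int[]]$Before, [int[]]$After)
$After | Where-Object { $Before -notcontains $_ }
}

# snapshot Receiver's engine processes, launch the resource, snapshot again
$before = @(Get-Process wfica32 -ErrorAction SilentlyContinue | Select-Object -ExpandProperty Id)
# ... launch the application/desktop through the gateway here ...
$after = @(Get-Process wfica32 -ErrorAction SilentlyContinue | Select-Object -ExpandProperty Id)
# killing the new wfica32 process disconnects (not logs off) the session
Get-NewProcessId -Before $before -After $after | ForEach-Object { Stop-Process -Id $_ -Force }
```

Killing wfica32 only disconnects the session, which is why the full script also logs off of the StoreFront/Gateway web page before it ends.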

I need to give credit to this Citrix blog post for getting me started.  This script will launch an application or desktop as a user you specify from the StoreFront web page, then send you an email to let you know if it was successful or not.

Variables you should edit:

In the send-results function
$smtpserver (line 19)
$msg.From (line 28)
$msg.to.Add (line 29)

In the main script (be sure to read the comments)
$username (line 84)
$passstring (line 86)
$resource (line 92)
$mask (line 94)
$wait (line 96)
$internetexplorer.visible (line 98)
$internetexplorer.navigate2 (line 99)

Run this in PowerShell (x86)! Otherwise you cannot tie into the x86 DLL for Receiver.
(If you are going through the Netscaler Gateway it doesn’t matter which version, as we won’t tie into the DLL in that case.)

Add the following to the registry if you are pointing to the internal StoreFront url – if you are pointing to the Netscaler Gateway it won’t matter
Windows Registry Editor Version 5.00


If you are running on an x86 machine change the registry path to exclude Wow6432Node, and change the path of the .dll on line 156 of the script to the correct path of the wficalib.dll.

Here is the script!

Contact me in the channel: David62277

Dec 19, 2014

Like the title says this is for a very specific use case which I have run into, so definitely not for everyone.


The company I work for plans on switching the user home directories to a DFS path in order to accommodate our cloud DR solution. We currently redirect folders via Citrix policies in XenDesktop to the user home directories (ie: \\server\path\username). This path is to change to the DFS path as well, so when a user logs onto the cloud Desktop as a Service (DaaS) desktop their folders are redirected like normal. The new path will be \\domain\dfspath\userpath\username, but it is really the same place as before (2 different paths pointing to the same place). If you change the Citrix policy to redirect the folders to \\domain\dfspath\userpath from \\server\path it will copy the contents of the old path to the new (which really does nothing because it is already there), BUT then it will delete it from the old path (since the “old” and “new” path are really the same the redirected folders are deleted)!!! Using good ol’ Microsoft redirection policies you can set the policy to not move data, but with Citrix policies I don’t see this option.

Basically, if I switch the policy to point the redirected folders to the new DFS path, a user logs on, and all their redirected folders go *POOF*. In our environment that’s pretty much everything but AppData.

I screwed around with the registry a bit and found if you delete the values under HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders and User Shell Folders that point to the share along with the History key under HKCU:\Software\Citrix\UserProfileManager\FolderRedirection, the folders are redirected fine after the DFS path is in place with no problems.


The Script

My choices after finding the work around were to:

  1. Go the easy route – Delete all the current profiles
    1. Obviously that won’t fly
  2. Load each NTUSER.DAT file into the registry one at a time and edit them
    1. With so many profiles that would take forever
    2. Would need a very long window as I couldn’t have users logging on/off hosing my work and their redirected folders
  3. Get a window to disable logons to our XD environment, get everyone logged off, and use a script to load every ntuser.dat file/edit/unload
    1. Only need a short window

I chose option 3 and got to work on the script below. Read the comments between the <# #> marks!  As always: test, test, test…

Your best bet is to copy the script below into your favorite PowerShell editor, as it doesn’t format very nicely here… I just use powershell_ise.exe



$start = Get-Date <# Get the start time to calculate how long the script took at the end #>
<# We get the folders under the root of the profile store (full names), for each append the path to where the NTUSER.DAT would be found, and save that as the $profiles variable. THIS WILL ALMOST CERTAINLY BE DIFFERENT FOR YOU, SO FIGURE OUT YOUR PATH #>
$profiles = gci \\server\profiles | ?{$_.psiscontainer -eq $true} | select -expand fullname | sort | %{Join-Path $_ "v2x64\UPM_Profile\NTUSER.DAT"}
<# This is where the fun starts. If you want to test this against a single test user profile, comment out the $profiles variable above with a pound sign before it, and make a new line like this: $profiles = "\\server\profiles\testuser\upm_profile\ntuser.dat" #>
foreach ($profile in $profiles) {
<# Check that the NTUSER.DAT file exists... if not, continue on to the next profile in the list #>
if ((Test-Path $profile) -eq $false) {continue}
<# Check for completed jobs and output their messages to the screen (the jobs start in a sec), then remove the completed jobs #>
if ((Get-Job -State Completed).count -gt 0) {
$jobs = Get-Job -State Completed
$jobs | Receive-Job
$jobs | Remove-Job
}
<# This caps the number of running jobs at 10 - adjust at your own risk - ie: if you change the 10 to 100 it will load 100 NTUSER.DAT files into the registry at a time... that could be bad resource wise #>
while ((Get-Job -State Running).count -ge 10) {
Start-Sleep -s 1
}
<# The hive variable below is important... this is the name the NTUSER.DAT hive will get in HKLM. YOU WILL HAVE TO PLAY WITH THIS SO YOU GET IT RIGHT IN YOUR ENVIRONMENT. In my case it names the hive username_temp - ie: user1_temp if the username is user1 #>
$hive = (($profile -split ".domain") -split "\\")[2] + "_temp"
<# Start the job - jobs run in a separate instance of powershell, so multiple hives can load at once. In this case there will be up to 10 hives loaded into the registry at a time (DO NOT OPEN REGEDIT WHILE THIS SCRIPT IS RUNNING... IT WILL STOP THE HIVES FROM UNLOADING) #>
Start-Job -Name $hive {param($profile,$hive)
<# Start reg.exe with arguments to load the user hive, and save the process info as $load. Then check the exit code so it can tell you whether the hive loaded or not #>
$load = start-process -passthru -filepath reg.exe -argumentlist "load HKLM\$hive $profile" -Wait -WindowStyle Hidden
if ($load.ExitCode -ne "0") {
Write-Host "$profile could not be loaded" -f Red
} else {
Write-Host "$profile loaded"
<# The $change variable comes into play in a sec... default is 0 #>
$change = "0"
<# Set the location to the registry path where the Shell Folders and User Shell Folders keys live #>
Set-Location HKLM:\$hive\Software\Microsoft\Windows\CurrentVersion\Explorer
<# Search for any key with *shell* in the name (Shell Folders and User Shell Folders). Get the key path of those keys and all the properties under them, and look for any values pointing to the server where the current share is hosted. Remove each matching property and set $change to 1... indicating that the shell folders were modified #>
gci .\ | ?{($_.psiscontainer -eq $true) -and ($_.name -like "*shell*")} | %{
$regpath = $_.pspath
$properties = $_.property
foreach ($property in $properties) {
$val = (Get-ItemProperty $regpath).$property
<# edit your server name here #>
if ($val -like "*server*") {
$change = "1"
Remove-ItemProperty -Path $regpath -Name $property
}
}
}
<# If $change is 1 then remove the folder redirection "History" key under the Citrix path #>
if ($change -eq "1") {
Set-Location HKLM:\$hive\Software\Citrix\UserProfileManager\FolderRedirection
if ((Test-Path .\History) -eq $true) {
ri .\History -Recurse -Force
}
}
<# Leave the registry paths and do some cleanup so the hive can unload cleanly, then attempt to unload it. If it fails it will try up to 5 times to unload the hive, and will let you know after the job runs #>
Set-Location C:\
gci env: | Out-Null
gci variable: | Out-Null
[gc]::Collect()
$att = 0
while ((Test-Path HKLM:\$hive) -and ($att -le 5)) {
$unload = start-process -PassThru -FilePath reg.exe -ArgumentList "unload HKLM\$hive" -Wait -WindowStyle Hidden
$att += 1
}
if ($unload.ExitCode -ne "0") {
Write-Host "Unable to unload $profile" -f Red
} else {
Write-Host "$profile successfully unloaded"
<# Clean up the NTUSER.LOG files that get created when the hive is loaded #>
gci (Split-Path $profile -Parent) -Force | ?{($_.psiscontainer -eq $false) -and ($_.Name -like "ntuser*") -and ($_.name -ne "ntuser.dat") -and ($_.name -ne "ntuser.ini") -and ($_.Name -ne "ntuser.pol")} | ri -Force
}
}
} -ArgumentList $profile,$hive | Out-Null
}
<# Wait for the remaining jobs, show their output, and clean up #>
Get-Job | Wait-Job | Receive-Job
Get-Job | Remove-Job
$end = Get-Date
$minutes = ($end - $start).Minutes
$seconds = ($end - $start).Seconds
Write-Host "Total run time $minutes minutes $seconds seconds."

Oct 31, 2014

I have seen PVS targets (mainly Desktop OS) fail to activate via KMS after booting, and/or not get the proper group policy settings.  I think this is because PVS hasn’t released the network when Windows is trying to activate/update GPO (or something along those lines).  On top of this, in my environment I have PvD and Random desktops booting off of the same vdisk image.  To fix this I created the script below, and set up a scheduled task to run at startup (using the SYSTEM account).

Note: Using this script you can do a lot more than just slmgr /ato and gpupdate /force commands.  For instance if you have an antivirus service that you just want to start if the vdisk is in standard mode… you could just add a “start-service” command (of course you’d want that service to be set to manual).  Feel free to edit however it suits your environment.

Steps to implement:

  1. Start a maintenance version of your vdisk
  2. Logon to that desktop/server
  3. Open powershell_ise, or notepad
    1. Copy the script below and paste it
    2. Edit line 2 to be the FQDN of your domain
      Example: yourdomain.com
    3. Save it (remember where you saved it) – I just save mine to the root of C:\ to keep it simple
  4. Open Task Scheduler
    1. Right click on “Task Scheduler Library” and select “Create a Basic Task”
    2. Name your task and optionally add a description and click Next
    3. On the Task Trigger screen select “When the computer starts” and click Next
    4. On the Action screen click Next (Start a program should be selected by default)
    5. On the Start a Program Screen
      1. Type the path, or browse to powershell.exe (C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe)
      2. in the Add arguments section:
        1. -executionpolicy unrestricted -file <path to the .ps1 file you just saved>
          Example: -executionpolicy unrestricted -file c:\startup.ps1
    6. Click Next
    7. Check “Open the Properties dialog for this task when I click Finish” and click Finish
    8. On the properties page click Change User or Group
      1. In the Select User or Group box type in “system” (no quotes) in the box and hit OK
      2. You should now see “NT AUTHORITY\SYSTEM” as the user account to run as
    9. Check the Run with highest privileges box, and click OK
  5. Perform any cleanup operations you typically do, run PvD inventory (if you use PvD), and shutdown the machine
  6. Place your vdisk into Test mode, and test away
  7. When satisfied set the vdisk to production

function startup {
while ((Test-Connection "fqdn of your domain ie: contoso.com" -Count 1 -ErrorAction SilentlyContinue) -eq $null) {
Start-Sleep -Milliseconds 500 <# no reply yet - wait half a second and try again #>
}
& cscript.exe c:\windows\system32\slmgr.vbs /ato <# activate via KMS #>
& gpupdate.exe /force <# refresh group policy #>
}
$p = gc c:\personality.ini
$r = (Get-ItemProperty 'HKLM:\SOFTWARE\Citrix\personal vDisk\config' -ErrorAction SilentlyContinue).vhdmountpoint
if (($r -eq $null) -and (($p -like "*diskmode=p*") -or ($p -like "*writecachetype=0*"))) {exit} <# vdisk in private/maintenance mode - do nothing #>
startup

Explanation of the script:

When executed it will get the content of c:\personality.ini and the value of REG_SZ vhdmountpoint.  If personality.ini contains diskmode=p or writecachetype=0 and vhdmountpoint value is blank/non-existent it will stop the script (this indicates the vdisk is in private or maintenance mode).

PvD – value of vhdmountpoint will not be blank, so even if for whatever reason the .ini file shows the disk in private/maintenance it will go on and run the function
Shared Random – value of vhdmountpoint will be blank, but the .ini should show diskmode=s and writecachetype=something other than 0 (depends on the mode), so it will also run the function.

If the break condition is not met (indicating the disk is in shared mode) then it will run the startup function.  This function tries to ping the fqdn of your domain 1 time.  If it gets a reply it will run the activation command, and gpupdate.  If it does not, it will wait half a second and try again… over and over until it gets a reply from the fqdn of your domain.
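That private/maintenance decision can be isolated into a small predicate you can test on its own. A sketch (the function name is mine, not from the script above):

```powershell
<# Returns $true when the vdisk looks like it is in private/maintenance mode:
   no PvD vhdmountpoint AND personality.ini shows diskmode=p or writecachetype=0 #>
function Test-PrivateMode {
param([string[]]$PersonalityIni, [string]$VhdMountPoint)
$noPvd = [string]::IsNullOrEmpty($VhdMountPoint)
$iniPrivate = ($PersonalityIni -like "*diskmode=p*") -or ($PersonalityIni -like "*writecachetype=0*")
return ($noPvd -and $iniPrivate)
}
```

In the startup script this would replace the inline `if` check: exit when `Test-PrivateMode` returns `$true`, otherwise run the `startup` function.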


Sep 25, 2014

A while back I wrote a script to quickly update a XenServer host or pool with all the hotfixes placed in a directory. I got tired of doing it through the GUI, and having to do each update one at a time… waiting for reboots in between.
This script will look for .xsupdate files in the directory you specify in the script. Each one will be uploaded to the pool master (or single host), and applied. After that it will reboot each XenServer one at a time.
The reboot process will disable HA and WLB (if you have it) – otherwise it will just throw an error and continue. Then one at a time starting with the pool master they will switch to maintenance mode, migrate vms off, and reboot. Once the host that rebooted is back up and enabled it will move to the next.  At the end it will re-enable HA and WLB.

The only requirement is that you have XenCenter installed on your workstation (and of course powershell with an execution policy that allows scripts to run).

The only “issue” is the last host to reboot will remain without VMs until you move them back manually, or a load balancing function moves them.

As always… test before trying this in a production environment.

function checkconnect {
Write-Host "Waiting for $item to reboot and exit maintenance mode."
$check = &$xe -s $master -u root -pw $pass host-list name-label=$item params=enabled --minimal
if ($check -eq "true") {
write-host "$item is online"
} else {
Start-Sleep -s 10
checkconnect
}
}
$xe = "c:\program files (x86)\citrix\xencenter\xe.exe" # on an x86 machine it will be c:\program files
$pass = "ROOT PASSWORD" # password to connect to the pool master
$master = "IP or Hostname" # IP address or hostname of the pool master
$patchpath = "C:\XSUPDATES" # path to the .xsupdate file(s)
$patches = gci $patchpath | where {$_.name -like "*.xsupdate"} | select -expand fullname
$pool = &$xe host-list -s $master -u root -pw $pass params=name-label --minimal
$puuid = &$xe -s $master -u root -pw $pass pool-list --minimal
$hosts = $pool -split ","
foreach ($patch in $patches) {
$uuid = &$xe patch-upload -s $master -u root -pw $pass file-name=$patch
&$xe patch-pool-apply -s $master -u root -pw $pass uuid=$uuid
}
write-host "Disabling HA and WLB."
&$xe -s $master -u root -pw $pass pool-param-set wlb-enabled=false uuid=$puuid
&$xe -s $master -u root -pw $pass pool-ha-disable
foreach ($item in $hosts) {
$hostuuid = &$xe -s $master -u root -pw $pass host-list name-label=$item --minimal
write-host "Placing $item in maintenance mode."
&$xe -s $master -u root -pw $pass host-disable uuid=$hostuuid
write-host "Migrating VMs off of $item"
&$xe -s $master -u root -pw $pass host-evacuate uuid=$hostuuid
write-host "Rebooting $item"
&$xe -s $master -u root -pw $pass host-reboot host=$item --force
checkconnect
}
write-host "Enabling HA and WLB."
&$xe -s $master -u root -pw $pass pool-ha-enable
&$xe -s $master -u root -pw $pass pool-param-set wlb-enabled=true uuid=$puuid
write-host "Pool update complete."


Sep 18, 2014

I recently ran into an issue where Citrix Profile Manager was not catching all the files from a user installed Office Add-in. I found the path to the files and they were in %localappdata%\Apps\2.0. Even if I specifically added a policy to sync that folder it still did not get every file needed for the Add-in to work. After banging my head against the wall trying to get profile manager to handle it I decided to write a logoff script to “backup” that folder to the user’s home share at logoff, and a logon script to “restore” it at logon. That worked, but it delayed logon/logoff as those files were copied to/from the user profile.

On top of that issue, I also had the need to redirect Chrome and Firefox cache directories to user home shares as they were soon to be installed on our VDI image. Of course, you can do both without the script you are about to see (GPO in the case of Chrome, and an .ini file with Firefox). I just figured kill three birds with one stone.

The answer ended up being very simple: junction points! For those of you who may not know, a junction point (a type of reparse point, closely related to a symbolic link) is basically a shortcut that Windows treats as a folder. A normal shortcut to “\\server\share\path” on your desktop would show that path in the address bar if you clicked on it. A junction would show C:\Users\<username>\Desktop\<name of the link>. This lets you effectively “redirect” a specific folder. For those of you who have a “crappy app” that points its data to %userprofile%\AppData\<path> instead of %AppData% and prevents you from redirecting AppData, this may help you as well.

Now all I need is a logon script (powershell) to create these junction points.

Requirements for this script:
1. Users must have the “Create symbolic links” right (set in GPO)

2. Users must have a Home Folder (if they don’t you could simply rewrite the script to create a folder somewhere using $env:username in the path).

Detailed explanation of what this script does:
1. Sets the location to C:\ so when it calls cmd it won’t complain about the path (assuming the script is going to run from a network share)

2. Imports a .csv file with the information
This .csv file has headers “localpath”, “homepath”, and “LorR”
localpath = the folder within local or roaming appdata. If the path is down inside somewhere make sure you include the whole path. In my case I want to get %localappdata%\Apps\2.0, so the localpath would be “Apps\2.0”
homepath = the path in the users home directory where you want the files located. Same as localpath you don’t include the entire share path… just the path you want it to create inside the share.
LorR = local or roam – so the script knows where to put the junction points.

3. For each line in the .csv file calls a function that will:
a. Decide if the path is in local or roaming appdata (LorR)
b. Create the folder structure on the user’s home folder
c. Checks if the local folder already exists. If so, checks to see if it is already a junction (which it shouldn’t be). If the path is *2.0 (path to the Office Add-ins) it moves those files to the home folder, and creates the junction. If it is anything else it deletes the folder (would be user installed chrome/firefox cache directories), and creates the junction.
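The steps above can be sketched as follows. This is a hedged outline, not the exact production script: the CSV location, the helper function name, the `$env:HOMESHARE` source for the home folder, and the `*2.0` special case are assumptions drawn from my description above.

```powershell
<# Sketch of the logon-script logic - adjust paths for your environment #>
Set-Location C:\  # so cmd.exe won't complain if this runs from a network share

function Resolve-RedirectPaths {
# pure helper: turn one csv row into the local folder and home-share target
param($Row, [string]$LocalAppData, [string]$RoamingAppData, [string]$HomeShare)
$base = if ($Row.LorR -eq "local") { $LocalAppData } else { $RoamingAppData }
[pscustomobject]@{
Local = Join-Path $base $Row.localpath
Target = Join-Path $HomeShare $Row.homepath
}
}

foreach ($row in (Import-Csv "\\server\share\junctions.csv" -ErrorAction SilentlyContinue)) { # headers: localpath,homepath,LorR
$p = Resolve-RedirectPaths -Row $row -LocalAppData $env:LOCALAPPDATA -RoamingAppData $env:APPDATA -HomeShare $env:HOMESHARE
# create the folder structure on the user's home share
if (-not (Test-Path $p.Target)) { New-Item -ItemType Directory -Path $p.Target -Force | Out-Null }
# handle a pre-existing local folder
if (Test-Path $p.Local) {
if ((Get-Item $p.Local -Force).Attributes -band [IO.FileAttributes]::ReparsePoint) { continue } # already a junction
if ($p.Local -like "*2.0") { Move-Item (Join-Path $p.Local "*") $p.Target -Force } # Office add-in files go to the home share first
Remove-Item $p.Local -Recurse -Force # anything else (browser caches) just gets deleted
}
# create the junction point (requires the "Create symbolic links" right)
& cmd.exe /c mklink /j "$($p.Local)" "$($p.Target)" | Out-Null
}
```

The actual creation goes through `cmd.exe /c mklink /j`, which is why the script sets the location to C:\ first.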

After running this at logon my user now shows this in %appdata%

If I open Mozilla it shows to be local, but it is actually pointing to my home folder


The only “issue” I have seen thus far is Chrome will warn that the cache is on a network share the first time it is launched.

Below is the powershell script. Feel free to edit it to fit your needs, and make sure you test thoroughly before implementing into any production environment.