At home my primary machine is a desktop running Windows (I’m a gamer), but the laptop I use for travel and other stuff runs Linux. I had been surviving on a basic copy-my-music-over-to-Linux setup and never bothered to recreate my playlists. It’s exceedingly annoying that Rhythmbox has no feature to export all of your playlists at once, or to import them all at once. I have over 100 playlists I’ve created over the years, far too many to do manually, so I figured there must be a way to do this with PowerShell and set out to find one.
Mind you, I’m using an old version of iTunes, 12.0.1.26. When they updated the GUI to have that huge empty space I said forget it, rolled back, and have been ignoring the periodic prompts to update. I’m blocked from buying through the store because of the version, but that’s fine; I don’t actually buy via their store and don’t plan to. Just keep in mind this may not work if they’ve changed the XML structure of the library in newer versions.
Rhythmbox version is 3.4.4
Going into this, I thought it was going to be difficult, but it turned out to be quite simple. The longest part was examining the Library.xml file to figure out its structure. It turns out it’s very well structured, and each entry is easy to access with dot notation. I built a few dummy playlists in Rhythmbox so I could see its structure as well, and to find the file where they were stored (default location: /home/USERNAME/.local/share/rhythmbox/playlists.xml).
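For reference, here is roughly the shape of the Library.xml plist the script below walks. This is an abbreviated sketch from memory, so treat the exact keys and paths as illustrative:

<plist version="1.0">
 <dict>
  <key>Tracks</key>
  <dict>
   <key>1234</key>
   <dict>
    <key>Track ID</key><integer>1234</integer>
    <key>Name</key><string>Some Song</string>
    <key>Location</key><string>file://localhost/C:/Users/me/Music/iTunes/iTunes%20Media/Music/Artist/Album/Some%20Song.m4a</string>
   </dict>
  </dict>
  <key>Playlists</key>
  <array>
   <dict>
    <key>Name</key><string>My Playlist</string>
    <key>Playlist Items</key>
    <array>
     <dict><key>Track ID</key><integer>1234</integer></dict>
    </array>
   </dict>
  </array>
 </dict>
</plist>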
Initially I exported each playlist as a separate file in the format Rhythmbox wanted, because I had installed a plugin that let you import them as .pls files. This turned out to be way too cumbersome with the number of playlists I had, so I revisited Rhythmbox’s playlists file and decided to have the script write the entire file instead. I left both methods in the script so you can do either one, in case you have an established Rhythmbox presence you don’t want to overwrite.
The first thing you need to do is export your library to a file on your machine: File > Library > Export Library.
Update its path in the first line of the script below. Also change the two occurrences of “USERNAME” in the script to your Linux username, or, if you’re not using the standard Music path in your home directory, update the entire path. Just be sure to keep the beginning file:// intact.
[XML]$file = Get-Content "C:\iTunesToRhythmbox\Library.xml"
$outpath = "C:\iTunesToRhythmbox\"
$xmlPlaylists = $file.plist.dict.array.dict

ForEach ($list in $xmlPlaylists) {
    $playlistName = $list.string[0].Replace("&","%26").Replace(" ","%20")
    If ($playlistName -eq "Library" -OR $playlistName -eq "Music" -OR $playlistName -eq "Podcasts") { continue }

    $trackIds = $list.array.dict.integer
    If ($trackIds.count -eq 0) { continue }

    $plPath = "$outpath\$playlistName.pls"
    $xmlPath = "$outpath\playlists.xml"

    Add-Content $xmlPath "  <playlist name=""$playlistName"" show-browser=""true"" browser-position=""180"" search-type=""search-match"" type=""static"">"

    $playListCount = 1
    ForEach ($id in $trackIds) {
        $trackIndex = $file.plist.dict.dict.key.indexOf($id)
        $trackData = $file.plist.dict.dict.dict[$trackIndex]
        $trackName = $trackData.string[0]

        # My iTunes music sits in a folder called iTunes (and I copy all of its contents over to Linux)
        # so I'm splitting the path based on that, as everything to the right of it will match the partial path in Linux.
        $filePath = ($trackData.string[-1] -split "iTunes/")[1]

        Add-Content $xmlPath "    <location>file:///home/USERNAME/Music/$filePath</location>"
        $playListCount++
    }
    Add-Content $xmlPath "  </playlist>"
}
If you’re using the method of replacing the entire playlists file, it goes in (default location) /home/USERNAME/.local/share/rhythmbox/playlists.xml. Otherwise, use a plugin to import the .pls files one by one.
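For reference, the finished playlists.xml should look roughly like this. Note that Rhythmbox wraps everything in a root element; the script above only emits the <playlist> blocks, so make sure the wrapper ends up in the file too. This sketch is based on my own file and may differ slightly between versions:

<?xml version="1.0"?>
<rhythmdb-playlists>
  <playlist name="My%20Playlist" show-browser="true" browser-position="180" search-type="search-match" type="static">
    <location>file:///home/USERNAME/Music/Artist/Album/Song.m4a</location>
  </playlist>
</rhythmdb-playlists>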
The biggest issue I ran into, and the one that took a while to figure out, was certain characters that Rhythmbox, or Linux, didn’t like. At first, when I was modifying the playlist file, Rhythmbox wasn’t showing any of my playlists. Then, after I closed it, it had rewritten the file entirely! It turns out that if & or spaces appear in the names or paths, Rhythmbox sees the file as corrupt and recreates it. So I added a couple of character replacements in the script and it finally worked.
Also note that I’m skipping playlists titled “Library”, “Music”, and “Podcasts”. These are three of the built-in playlists that iTunes creates, and the first two contain EVERYTHING; I definitely don’t need playlists that just duplicate my entire music library. I also don’t copy over any podcasts, so no need for that one, either.
I recently converted an archaic setup to a more modern, automated process. We had a PowerShell script, run manually on one person’s workstation, that used the Citrix cmdlets to query our environment and populate a database of all of our Citrix servers. Rather than keep a static host file that needed manual updates, Ansible used a Python inventory script that queried that database and pulled the host information the playbook was asking for.
That individual left, the process fell into a forgotten state, and we almost lost the script because it lived only on his old PC. I grabbed it, and for a time I was the one running it. We’re starting to use GitHub for our department, and we’ve been using Ansible and Jenkins to patch servers for a while, so I decided to build a modern, automated process for this.
Here are some prerequisites to make this work:
1. The account Jenkins runs as needs write permission to your Ansible role folder. (This isn’t strictly necessary, but I organized the files this way, as you’ll see.) I did this by changing the group on the folder to one containing both myself and the Jenkins account.
Ex) chgrp team_admins /etc/ansible/roles/myrole
2. The workstation(s) the PS code runs on needs WinRM enabled
Enable-PSRemoting -Force
3. You need a host file in Ansible containing the workstation(s) you’ll be using.
4. You need to create an access token in GitHub so Jenkins can pull from the repo.
This is found under your account on GitHub > Settings > Developer Settings > Personal access tokens
Generate a new one and give it whatever permissions you want, mine only has “repo” checked.
ANSIBLE CONFIGURATION
HOST FILE
Here’s an example host file. You can save it wherever you want; just note where it is so you use the correct path in the Jenkins config.
workstations.yml
---
powershell_devices:
  hosts:
    MYWORKSTATION:
    MYWORKSTATION2:
You can add as many hosts as you want, but the way I’ve set up the Jenkins job, it will only run on the machine you pass to the Jenkins parameter. Even if you only have one machine, you still need a host file.
ROLE
Create a new role folder structure. Here is an example main.yml for the tasks folder. The role below first checks whether the path I want to copy the script to exists and creates it if it doesn’t. It then copies the PS1 file from the role’s files directory to the local machine and runs the script from there. You may need to either change the execution policy on the local machine or use -ExecutionPolicy Bypass in the win_shell task, which would change the win_shell line to: powershell.exe -ExecutionPolicy Bypass -File C:\ansible\scripts\AnsibleCitrixInventoryQuery.ps1
--- # Run DB update for Ansible's Citrix workflows
- name: Check if Scripts dir exists
  win_stat:
    path: C:\ansible\scripts
    get_checksum: no
  register: scripts_exists

- name: Create Scripts dir if not exist
  win_shell: mkdir C:\ansible\scripts
  when: not scripts_exists.stat.exists

- name: Copy script to local PC
  win_copy:
    src: "{{ role_path }}/files/AnsibleCitrixInventoryQuery.ps1"
    dest: C:\ansible\scripts\AnsibleCitrixInventoryQuery.ps1

- name: Run the inventory script
  win_shell: C:\ansible\scripts\AnsibleCitrixInventoryQuery.ps1
  become: yes
  become_method: runas
Note that I’m using “become” so that the script executes with the credentials passed to the playbook. My Jenkins build includes, under “Additional parameters”, the path to a credential file stored on the Ansible Linux server. This account needs admin rights on the workstation running the PowerShell!
PLAYBOOK
You can then add the role to an existing playbook or create a new playbook with just the role:
---
- hosts:
    - "{{ computer }}"
  gather_facts: no
  roles:
    - ROLENAME
{{ computer }} is a reference to the variable that will be passed in from Jenkins (set up below). You can name it whatever you want. Since I’m just running a script, there’s no need to waste time gathering facts.
JENKINS CONFIGURATION
The first thing to do is set up a New Item in Jenkins using the “Freestyle project” template.
GENERAL
Check “This project is parameterized” and create a “computer” parameter for accepting the computer name of the remote machine the script will run on.
SOURCE CODE MANAGEMENT
Select Git
For the repository URL, use the address of the repo, but paste your access token at the beginning, followed by an “@”:
Ex) https://ACCESSTOKEN@github.com/REPOPATH
Specify whichever branch you are checking out from. If it defaults to /master you’ll probably need to change it to /main
Click Add next to “Additional Behaviors” and select “Check out to a sub-directory”. This step is optional; it just keeps things a bit organized. The path you enter is relative to the workspace directory, which for me is /var/lib/jenkins/workspaces/jobname, so in the path I have ../../../../../etc/ansible/roles/ROLENAME/files. When the job does a pull, it drops the files right into the role’s files directory, which makes it easy to copy from the server to the workstation running the script.
BUILD TRIGGERS (Optional)
Check “Build periodically” and set your schedule; I used “H 3 * * *”, which makes it run daily at some point during the 3 AM hour.
BUILD
“Add build step” and choose “Invoke Ansible Playbook”. Enter the full path to your playbook YAML file.
Select your usual Vault Credentials.
Then click the “Advanced” button and select “Extra Variables”. This is where you’ll pass the Jenkins parameter for the computer onto the Ansible Playbook. The Key is the name of the variable you want for Ansible (ex: computer) and the Value will be the name of the Jenkins variable inside ${} (ex: ${computer}).
Your Additional Parameters should include the host file with your remote machine(s), passed as the inventory file (ex: -i /etc/ansible/hostfiles/your_host_file.yml). It should also include the credential file containing the account that has admin rights on the workstation (ex: -e @/etc/ansible/admin_creds.yml). If you don’t have such a file, create one with: ansible-vault create admin_creds.yml. This is important because it’s the account Ansible will execute all of its commands as, so it needs rights on those machines. See the sketch below for what that file might contain.
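For reference, a minimal sketch of what admin_creds.yml might contain. These are Ansible’s standard Windows connection variables; the account name and password are placeholders, and the file should be encrypted with ansible-vault:

# Placeholder values; encrypt this file with ansible-vault
ansible_user: DOMAIN\svc_ansible_admin
ansible_password: ChangeMe123!
ansible_become_user: DOMAIN\svc_ansible_admin
ansible_become_password: ChangeMe123!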
And that’s it. Now we don’t have to worry about the code getting lost when a person or machine leaves, and we don’t need to worry about different versions of the script existing in multiple places (and we can track changes). Most importantly, it runs daily, keeps our database up to date, and lets us keep an eye on the job.
It took a while to figure out all the permissions because I wasn’t the one who set up Ansible or Jenkins. Hopefully this all makes sense; if not, just comment with your question and I’ll do my best to answer it!
I had been searching for an easy way to combine a video file with its separate audio file, but simply Googling that gives a bunch of results that require downloading software from questionable places. Then I had the idea to see if it was possible with FFmpeg (a lot of that software uses it anyway), and it turns out it is, with a very simple one-liner. I threw it in some PowerShell to make it easy to use.
The script below gives you a graphical file selector for both the video and audio files. Be sure to update the path to the ffmpeg executable to wherever you put it, and set whatever starting directory you want for the files. I wrote it so the output just goes to the same directory as the input files, but you can easily change that. If you’re going to work with files other than mp4 and m4a, update (or just remove) the filter lines. Also, because the video alone is an mp4 and the output will be an mp4, I chose to prepend a “2-” to the filename if I don’t type one in at the starting prompt; by default the output file is just the video’s filename with that prefix.
Thanks to this page for the ffmpeg command line: https://kwizzu.com/construct.html
Add-Type -AssemblyName System.Windows.Forms | Out-Null

$outputFilename = Read-Host "Enter Output Filename"
$ffmpeg = "C:\ffmpeg-4.0.2-win64-static\bin\ffmpeg.exe"

# Pick the video file
$OpenFileDialog = New-Object System.Windows.Forms.OpenFileDialog
$OpenFileDialog.ShowHelp = $true   # forces the older-style dialog, which tends to display more reliably from a console
$OpenFileDialog.InitialDirectory = "C:\Users\me\Downloads\"
$OpenFileDialog.Filter = "Videos (*.mp4)| *.mp4"
$OpenFileDialog.ShowDialog() | Out-Null
$vidPath = $OpenFileDialog.FileName

# Default the output name to the video's name with a "2-" prefix
If (!$outputFilename) { $outputFilename = "2-$($OpenFileDialog.SafeFileName)" }
$outputFilePath = "C:\Users\me\Downloads\$outputFilename"

# Pick the audio file
$OpenFileDialog.Filter = "Audio (*.m4a)| *.m4a"
$OpenFileDialog.ShowDialog() | Out-Null
$audioPath = $OpenFileDialog.FileName

# Mux without re-encoding: copy both the video and audio streams as-is
Start-Process -FilePath $ffmpeg -ArgumentList "-i ""$vidPath"" -i ""$audioPath"" -c:v copy -c:a copy ""$outputFilePath"""
Read-Host "Press any key to exit"
This works especially well for YouTube videos or others where you have to download the video and audio as separate files. The best part is that it’s FAST. I first tried the built-in video editor in Windows 10: it’s slow, and it took a 400MB video file and a 40MB audio file and created a 1.4GB combined file… wow. With FFmpeg the output file is just the size of the two combined, because the streams are copied rather than re-encoded.
You can easily automate this if you have a lot of these to do. You could have the script run constantly on your machine, monitor a folder every so often, automatically detect two files with the same name (sans extension), and merge them. There are lots of ways to expand this.
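Here’s a minimal sketch of that folder-watching idea, assuming mp4/m4a pairs; the paths, the “2-” prefix convention, and the polling interval are my own placeholders:

# Folder to watch and path to ffmpeg (placeholders; match your own setup)
$watchDir = "C:\Users\me\Downloads"
$ffmpeg   = "C:\ffmpeg-4.0.2-win64-static\bin\ffmpeg.exe"

while ($true) {
    Get-ChildItem "$watchDir\*.mp4" | ForEach-Object {
        $audio = Join-Path $watchDir "$($_.BaseName).m4a"
        $out   = Join-Path $watchDir "2-$($_.Name)"
        # Merge only when a matching audio file exists and we haven't merged this pair yet
        If ((Test-Path $audio) -and !(Test-Path $out)) {
            Start-Process -FilePath $ffmpeg -Wait -ArgumentList "-i ""$($_.FullName)"" -i ""$audio"" -c:v copy -c:a copy ""$out"""
        }
    }
    Start-Sleep -Seconds 60   # poll once a minute
}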
Occasionally I get installers from vendors that just don’t play nice. OK… maybe it’s more than occasionally. Often it’s in the form of not having any parameters for a nice, silent install, and other times it’s one little quirky thing. Today that one thing was the uninstall of an MSI that forced a reboot despite the /norestart parameter. My scripted install can handle reinstalling the app, but the removal step kept forcing a reboot, so the reinstall never happened.
Super annoying, and I don’t know why packages do forceful things like this; you really need to let the admin doing the software push decide these things. In my case, skipping the reboot causes no harm. After trying to strategically place shutdown -a in my script (and failing), I got the idea that if I initiate my own shutdown, then the installer (or uninstaller) can’t force one of its own. So I tried it…
And it turned out I was right. Before I kicked off the uninstaller, which was an MSI (so in the form msiexec.exe /X{GUID}), I used shutdown -r -t 240 to schedule a reboot and give the uninstaller 4 minutes to run. The length of time can be anything, as long as it’s enough to get all of the tasks done, because we’ll be cancelling it with shutdown -a once they are.
Because Windows will only allow one restart (pending or immediate) at a time, any shutdown command the package issues is ignored with a message that there is already a pending system reboot.
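Put together, the whole trick is only a few lines. A rough sketch in PowerShell; the product GUID and the 4-minute window are placeholders:

# Schedule our own reboot first; Windows only allows one pending restart
shutdown -r -t 240

# The uninstaller's forced reboot is now ignored ({PRODUCT-GUID} is a placeholder)
Start-Process msiexec.exe -ArgumentList "/X{PRODUCT-GUID} /qn /norestart" -Wait

# ...reinstall or any other tasks go here...

# Cancel our scheduled reboot once everything is done
shutdown -a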
The one drawback is that the scheduled shutdown is not silent. At my company we use AutoIt, so all I did was throw in a line to WinWait for the pending-reboot notice to show up and then send an [Enter] to it. There is still an icon in the tray, and a balloon pops up when you cancel the reboot, but that’s fine with me; I’m going to do a system restart at the end of my script anyway.
Hope this helps!
I always love when I’m asked at work to produce the maximum imaginable result with the least amount of resources. It’s like being asked to move a mountain and handed a single rope. While not impossible (you could do it in many trips with the rope tied to chunks), it does make you wonder, while you’re laboring, how on Earth you got yourself into this.
Thankfully I was ready for a challenge. Recent projects were the same ol’, same ol’, and I was bored, so when I was asked to take the reins on vetting an app virtualization product I was pretty stoked. I spent a few weeks on one product that has a great cloud capability, but we eventually settled on Microsoft’s App-V for various reasons. I was then asked to figure out how to publish apps without the need for a management server and database. 😐
Management doesn’t want the overhead of maintaining those two additional servers… OK. I took it as a challenge and set to figuring it out. One goal I set for myself was to do it in PowerShell in a generic way. We use PVS with our Citrix environment, and the thought of having to open up an image every time we add or change an App-V package wasn’t very appealing; after all, the goal is the least amount of maintenance and hassle. Mind you, integration for App-V packages is built in to newer versions of XenApp, but we aren’t there yet. The initial foray was quite easy and successful: publishing an individual app only takes 3 lines of PowerShell. However, that’s a specific application, and I wanted to go generic. That’s where the fun began.
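For the curious, those 3 lines look roughly like this. The paths and names are placeholders; the Add-AppvClientPackage and Publish-AppvClientPackage cmdlets ship with the App-V client:

Import-Module AppvClient
$pkg = Add-AppvClientPackage "\\dfs\AppV\MyApp\MyApp.appv"
Publish-AppvClientPackage -Name $pkg.Name -Global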
My initial plan was to write a script that lived in the PVS image and accepted parameters; I’d use the published app in the XenApp console to pass those parameters depending on which app was published. My two parameters were the name of the App-V package and the path to the EXE that needed to be launched. The script took the name of the package and built the DFS path to where the packages are stored (the folder names match the package names for ease of use), added the package to the local disk, and then published it globally. The path to the EXE was, well, to launch the correct app. This didn’t work out so well, because the EXEs for a package are buried deep in the App-V folder structure, and the published app’s command line can only accept 255 characters or less.
I then dug into the App-V cmdlets looking for a way to identify executables from a package. Nothing in the way of EXEs. I then tried creating shortcuts in a folder at the root of C: via the package’s XML configuration. That failed; it appears an App-V package won’t write a shortcut to most locations. It has to be the public desktop, the start menu, or somewhere inside the package’s folder structure. I figured if I couldn’t do it by manually editing the XML file, then perhaps I could do it by adding it in the package itself. Nope. It writes the code to the XML file to create the shortcut just like I had manually, but it never actually creates it. I’m assuming this isn’t a bug, since virtualized apps are supposed to be contained and writing shortcuts anywhere on the computer would be a mess. It’s probably just an undocumented feature.
After a little bit of frustration I got the sudden inspiration to return to the XML file. Why? Because the XML file lists every single EXE that was packaged! I didn’t need to pass an entire path, because I could just read it from the XML file. So that’s what I did. My second parameter in the Citrix published app was now just the name of the EXE I needed to launch. After adding and publishing the package, the publishing script in the PVS image reads in the XML configuration file and searches it for the EXE name passed to it. Once it finds it, it takes the path from the XML, does a little string parsing, replacing, and concatenating, and voila, I have the entire path to the program I need to launch.
It took me about a day and a half to finish the entire script. I added a couple more things, like popping up an error message if it encounters a problem adding and publishing the App-V package, and a loading message that waits for the app to launch and show up before disappearing, so the end user doesn’t think nothing is happening. The script is below. I also published it as my first script on poshcode.org.
The location for the Citrix published app looks something like this: powershell.exe -file C:\Scripts\AppVAddAndPublish.ps1 “AppName” “Exename.exe”
(don’t forget to set-executionpolicy in the PVS image or override it here)
###################################################################
## Description: Generic script for publishing and launching App-V
## packages in Citrix. The published app calls the script and passes
## it the name of the package and the name of the EXE to launch.
## App-V packages should be saved in a DFS structure that uses the
## name of the app as the folder.
##
## This script is provided as is and may be freely used and
## distributed so long as proper credit is maintained.
##
## Written by: thegeek@thecuriousgeek.org
## Website: http://thecuriousgeek.org
## Date Modified: 05-01-2016
###################################################################

param(
    [string]$appName,
    [string]$exeName
)

# Window message to the user so they don't think nothing is happening
Write-Host "`n`n Your application is loading..."

Import-Module AppvClient

# Try and get package to see if it's loaded
$package = Get-AppVClientPackage -Name $appName

# If the package isn't already loaded, load it
If ( !$package ) {
    Try {
        # Add and publish package. Make sure to update with your own path
        $package = Add-AppVClientPackage \\PathToApp-VPackageStorage\$appName\$appName.appv
        Publish-AppVClientPackage $appName -Global | Out-Null
        $didComplete = $true
    }
    Catch {
        # If it didn't complete, popup an error message to the user with details
        $errorMessage = "Error Item: $_.Exception.ItemName `n`nError Message: $_.Exception.Message"
        $didComplete = $false
        $wshell = New-Object -ComObject Wscript.Shell
        $wshell.Popup($errorMessage,0,"Error loading application",0x0)
    }
}
else {
    # Package is already loaded
    $didComplete = $true
}

# In order to know when the app has launched, we'll check the number of window titles
# This is the count before we try to start the app
$windowTitleCount = (Get-Process | Where {$_.MainWindowTitle}).Count

If ($didComplete) {
    # Get the local package directory as set in App-V Config, used for building path to EXE
    # Powershell can't expand regular environment variables so have to use CMD to echo it back
    $cmdExpand = "cmd /c echo $( (Get-AppVClientConfiguration | where {$_.Name -eq 'PackageInstallationRoot'}).Value )"
    $packageRoot = Invoke-Expression $cmdExpand
    $packagePath = "$packageRoot\$($package.PackageId)\$($package.VersionId)\Root\VFS"

    # Pull in deployment config because it contains the path to all executables. Make sure to update with your own path
    [XML]$packageConfig = Get-Content "\\PathToApp-VPackageStorage\$appName\$($appName)_DeploymentConfig.xml"
    $packageApps = $packageConfig.DeploymentConfiguration.UserConfiguration.Applications.Application.Id

    # Loop through each app/exe listed in the config and find the one that contains the $exeName passed to the script
    ForEach ($app in $packageApps) {
        If ($app -match $exeName) {
            $appPath = $app
            break
        }
    }

    # Get rid of the special characters App-V uses to reference variables in its application path so we can create the real path
    $appPath = $appPath -replace "(\[|\{|\}|\])",""   # previous regex: [^0-9a-zA-Z\\\s\.]
    $fullPath = "$packagePath\$appPath"
    Start $fullPath

    # Check number of window titles again and keep looping until the count is greater than the starting one we took earlier
    $currentTitleCount = (Get-Process | Where {$_.MainWindowTitle}).Count
    While ($windowTitleCount -ge $currentTitleCount) {
        $currentTitleCount = (Get-Process | Where {$_.MainWindowTitle}).Count
    }
}
Of course, now we’re upgrading to 7.8 for Citrix so all of this work is probably going to be useless! It was fun and challenging anyway 🙂
A coworker pinged me the other day needing help with a short script he was running on a C# web page. He was pulling worker group information from Citrix and wanted to sort the groups by the number of servers in each one. There isn’t a property on the object returned from Get-XAWorkerGroup that includes a server count. He said he googled it and couldn’t find anything about sorting by the count of a property on an object. I thought he was crazy (both in wanting to sort by server count and in that he couldn’t find anything), so I gave it a quick search myself. Surprisingly, there isn’t anything.
I thought for a moment about how I would go about this, and first looked into measuring an object and seeing if I could pipe that to Sort-Object. Doesn’t work. After wishing “if only there was a property for server count”, I remembered the Add-Member cmdlet and its ability to add properties to an object. I love this feature of PowerShell and how easy it is to use. It’s more complicated in a programming environment, where you’d need to create a new class or extend an existing one; here it’s easy and flexible to do on the fly when you need it.
I first started by piping the array of objects to Add-Member, but that failed. Add-Member doesn’t appear to be “smart enough” to recognize that you’re piping it an array of objects, so it acts on the array object itself. It took 10 minutes of frustration before I realized this was why my calculated server count kept coming back as 0: the array itself doesn’t have a “ServerNames” property, so it couldn’t give me a count! PowerShell can be confusing if you’re not paying attention, in that Get-Member (GM) tells you about the members of the objects in an array, not about the array variable itself. Always use the GetType() method to determine what exactly your variable is. So, after modifying it slightly, I did a ForEach over the array, and inside the ForEach I used Add-Member to compute an accurate server count and add it as a property to each object. After that, it’s just a matter of piping to sort. Here is the final code, one line:
Add-PSSnapin Citrix.XenApp.Commands; Get-XAWorkerGroup -ComputerName socxa65pddc01 | ForEach { Add-Member -InputObject $_ -NotePropertyName ServerCount -NotePropertyValue $_.ServerNames.Count -PassThru } | Sort -Property ServerCount -Descending
The one thing to keep in mind is that Add-Member does not output anything by default so you have to use -PassThru so that you have something to pipe to sort!
I couldn’t figure out why a mousedown event was firing 3 times while writing some jQuery code. I googled it, and most of the answers that attempted to explain it said something to the effect of “an event is getting bound to an element multiple times”… somehow. That seemed odd to me, as my example code was short and sweet and I didn’t see how I could be accidentally binding the same event to the same element multiple times. I tried using .off() and .unbind() to clear it, but neither worked. Obviously there was another reason for my particular issue, and once I reminded myself that the code was executing the way it was supposed to, and that the problem was me, I could look at it differently. It turns out my root cause was very different from the usual ones.
One of the goals I’m trying to achieve for a responsive interface is that when a user clicks on an element, that element comes to the forefront. This interface will have multiple divs, and they will be draggable and resizable. Given the limited real estate, I want the user to be able to enlarge an element to see more data, and I want to make sure that, regardless of its position in the code, it can be on top of everything. Because I wanted it to work for anything they clicked on, I used the wildcard to select my elements: $("*")
When it wasn’t working I started writing to the JavaScript console to help me debug, and I noticed I kept getting 3 events fired for the one mousedown:
I entered the if and id = undefined
I entered the if and id = square
I entered the if and id = undefined
This was the point where I started googling and not getting anywhere. Then, when I had another episode of my recurring “the problem is my thinking, not the code executing incorrectly” epiphanies (I wonder how many of those I’ll have in a lifetime), I asked myself: “How many elements are ACTUALLY at the point that I clicked?” Up until then my unconscious assumption was that when an event fires (mousedown, click, mouseup, mouseover, etc.), it only acts on the element you see, i.e. the topmost one. Well, it turns out that is not the case! It actually fires on everything along the z-axis of the page! Even though I was clicking on <div id="square"></div>, that div was sitting inside of <body>, and body is inside of “document.” Or at least I’m assuming “document” (as in the page itself) is the third element the event fires on. To test my theory I put a wrapper div around the “square” div, and lo and behold, the event fired 4 times!
So I learned my lesson: don’t be lazy and use a wildcard. Not that I would have done it that way in the final product; I was just creating a proof of concept. Thankfully I only spent about an hour on this, so that’s not much lost time. There are a few ways I can go about making a better element selector. I can set an ID for each element and start them all with the same prefix, like id="onTopSquare" and id="onTopCircle", so that I can select just the ones I want with $("[id ^= onTop]"). Or, and maybe my preferred way, add a class to all the elements I want this behavior on and use something like $(".onTop"). I definitely don’t want to just use $("div"), because I almost always have nested divs, so it would still fire multiple times!
I hope this helps someone since I didn’t get a good explanation from any of the posts I stumbled upon!
Just had an issue crop up where a web tool we created for data gathering could no longer query AD for the group memberships of a computer. It only seemed to happen with new machines added to the enterprise; old ones continued to work fine. After checking with the team that manages AD to see if they had changed anything, we were stumped. We couldn’t find anything wrong anywhere with permissions or account locks or anything else.
My C# code was using the FindOne method of DirectorySearcher. All of our hostnames are unique and we only have one domain, so how could that be a problem? Well, it turns out something has started causing DNS entries in AD (I wasn’t even aware we were using AD-integrated DNS; DHCP and DNS are actually managed by another product) to be returned before computer objects. Perhaps the folks imaging and adding machines to the domain have changed a workflow. When querying the older PCs, the computer object is returned first.
Needless to say, my DirectorySearcher now asks for the computer object class. So always make your filters as detailed as possible! (I should know this already….)
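The fix boils down to one line in the LDAP filter. A quick sketch, shown here in PowerShell rather than the original C# (the DirectorySearcher API is the same; HOSTNAME is a placeholder):

Add-Type -AssemblyName System.DirectoryServices
# Build a searcher that only matches computer objects, not same-named DNS nodes
$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.Filter = "(&(objectClass=computer)(name=HOSTNAME))"
$result = $searcher.FindOne()
$result.Path   # the matched computer object's path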
If anyone finds this and wants to know whether multiple objects are being returned, you can use the following PowerShell (after importing the ActiveDirectory module):
Get-ADObject -LDAPFilter "(name=HOSTNAME)"
I know, this title is pretty vague and can cover a lot of different causes but this one didn’t stand out to me at first and I thought I would share it.
Got called by a user at one of the facilities who could authenticate when logging into the machine, but for all network locations he kept getting the generic “resource does not exist or you don’t have access” error. For example, we redirect our users’ desktops and favorites to a location on a NetApp share; when he logged in, he would get the error above because it was trying to load his desktop and couldn’t.
His credentials were fine, and double-checking the permissions on the folder showed they were fine, too. After removing his local profile (we don’t use roaming), the problem still persisted. Odd. Well, it turned out he had domain credentials stored in Credential Manager for a service account he had been testing earlier. Don’t get me started on how they got stored… After removing the stored credentials from Credential Manager, everything was fine.
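If you’d rather check and clear these from a prompt than dig through the Control Panel, the built-in cmdkey utility will do it (the target name below is a placeholder; take it from your own /list output):

# List all stored credentials
cmdkey /list
# Remove the stored credential for a given target (SERVERNAME is a placeholder)
cmdkey /delete:SERVERNAME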
I just find it odd that Credential Manager isn’t tied to the profile, and so persisted after removing the profile. On second thought, I guess it’s because the vault stores passwords associated with your SID, and on a domain that stays the same even when a local profile is removed, since the SID comes from your domain account.
I decided to be an early adopter of an OS for once and upgraded to Windows 10 the weekend after it released. It’s not completely awful, but it’s also not without its headaches. One thing I wanted to post and share is this little bit of info, since there doesn’t seem to be anything on the internet about it: after upgrading, RDP stopped working, and the reason is a really awful automatic decision the OS makes for you.
After the upgrade, when I connected to my home network again (“for the first time”), it asked if I wanted to allow discovery of other items on the network. I said no. Well, apparently when you say no, it automatically sets the category for that network connection to public, and Windows won’t allow RDP connections on a public network. Why it assumes this based on the discovery answer is beyond me; it’s a really dumb move on Microsoft’s part. They should ask how you want to classify the network, like Windows 7 and 8 did.
To fix it you have two options:
Manually edit the registry settings under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles\{Whatever GUID you have} and change Category to 1 and CategoryType to 0 (that’s what mine currently are, and it’s working again).
Or, you can remove the connection (I deleted the GUID key mentioned above) and let it prompt you again; you may have to disable/enable the network adapter before it will. This time, tell it to allow discovery when it asks about the network, and it will set it to a Private network.
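If you’re comfortable in PowerShell, the NetConnectionProfile cmdlets should accomplish the same thing as the registry edit. A quick sketch; the interface index is an example, so check yours with the first command:

# Show current network profiles and note the InterfaceIndex of yours
Get-NetConnectionProfile
# Flip that connection from Public to Private (run from an elevated prompt; index 3 is an example)
Set-NetConnectionProfile -InterfaceIndex 3 -NetworkCategory Private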