There is no individual ownership when you are part of a team; it's the sum of the parts that makes you the RESILIENT team you need to be.
I'm no guru, but I hope that what I find useful will help someone else. Enjoy.
So, it's not uncommon in a large organization, especially where consultants or professional services engagements are involved, for the C:\Users folder to accumulate profiles: users come and go, but the data remains. On a jump server (a server that sits alongside a collection of other servers) you'll often find that folder holding an archive of useless profile folders left behind by the organization's ancestors, and when the C: drive runs low on space you're left working out who is no longer with us, BillG rest their LinkedIn profile.
So, I got to thinking about how to make this simple, automated, and painless: how to report on, or even better clear out, the old accounts under C:\Users. Let's fire up an Integrated Scripting Environment like the Windows PowerShell ISE:
Because PowerShell converses in fluent Active Directory, the Get-ADUser cmdlet lets us pull a list of active users:
$ActiveUsers = $(Get-ADUser -Filter *).SamAccountName
Now, sometimes the cmdlet needs some help, a Windows feature to be installed, so if this doesn't work try this:
Import-Module ServerManager
Add-WindowsFeature RSAT-AD-PowerShell
That should do the job and we can move on.
Try it. Copy/Paste or enter the following into the ISE and hit F5 (run):
$activeUsers = $(Get-ADUser -Filter *).SamAccountName
$activeUsers
If you're on a Windows system that is an Active Directory domain member, this should produce a list of user names (the short SamAccountName form). This step causes the biggest delay, especially if the domain controller is across a WAN connection.
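If that pull is slow or broader than you need, a hedged variant (not what the script below uses) is to limit the query to enabled accounts so disabled leavers aren't counted as active:
$activeUsers = (Get-ADUser -Filter 'Enabled -eq $true').SamAccountName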
The next thing we need is a list of folders in C:\Users. This is a snap using:
$userFolders = $(Get-ChildItem -Path C:\Users).BaseName
You can try that too, but remember to add a $userFolders line to display the output, then hit F5.
Note: To return to the Script view in the ISE, hit CTRL-R.
We now have the basics of the data we need to do the work.
While defining variables up front is always a best practice, and I mostly do this, there's also value in adding some exclusions. Some folders need to be ignored, left alone for other purposes, so we'll skip them using the variable $Exceptions:
$current = @() # current users (keep)
$former = @() # the dearly departed (to be removed)
$Exceptions = @()
$Exceptions += @("Administrator")
$Exceptions += @("Public")
We'll need to spin through the list of folders; the shorter list is likely the one collected into the $userFolders array. We'll use a foreach loop to do that:
foreach($folder in $userFolders){
We'll need to ignore the exceptions noted earlier using the -contains operator:
if(!($exceptions -contains $folder)){
We can re-use the logic to evaluate which accounts are current or former users too:
if($activeUsers -contains $folder){
$current += @($folder)
} else {
$former += @($folder)
}
write-host "Current Users' Folders:"
$current
write-host "`nNon-Current Users' Folders:"
$former
Or, perhaps, we can script the removal of the user profiles with a little more PowerShell. You will need to run the script under an account with sufficient rights, but you can work out whether that's a super-special service account or just something you run once a month, perhaps on the first of the month.
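As a hedged sketch of what that removal might look like (this isn't part of the script above, and the $former list should be reviewed before anything destructive runs; going through Win32_UserProfile removes the profile's registry entry as well as the folder, which is generally safer than deleting the folder alone):
# CAUTION: destructive - review the $former list before running anything like this
foreach($folder in $former){
    $prof = Get-CimInstance -ClassName Win32_UserProfile |
        Where-Object { $_.LocalPath -eq "C:\Users\$folder" -and -not $_.Special -and -not $_.Loaded }
    if($prof){
        write-host "Removing profile for $folder..."
        $prof | Remove-CimInstance    # removes the folder and the profile's registry entry
    }
}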
You can consider your next steps, but here's the complete script, plus some reading material (no warranties on that, it's not my site, nor my post, on Spiceworks.com):
$current = @()
$former = @()
$Exceptions = @()
$Exceptions += @("Administrator")
$Exceptions += @("Public")
$ActiveUsers = $(Get-ADUser -Filter *).SamAccountName
$UserFolders = $(Get-ChildItem -Path C:\Users).BaseName
foreach($folder in $UserFolders){
    if(!($Exceptions -contains $folder)){
        if($ActiveUsers -contains $folder){
            $current += @($folder)
        } else {
            $former += @($folder)
        }
    }
}
write-host "Current Users' Folders:"
$current
write-host "`nNon-Current Users' Folders:"
$former
Active Directory Glossary - Terms and Fundamental Concepts - Active Directory Pro
Two down, one remains.
A thing we ran into at work is that Google sometimes updates its root certificates. This can cause a bit of panic for some people, because they need to keep these certificates up to date to facilitate inter-application communication.
So, to that end, I figured it was time to get proactive by detecting an update so we can at least tell people this particular change has occurred and how to deal with it. The next step might be to actually update the file where it needs to live on the system, but let's not get ahead of ourselves.
I work in a hybrid environment, and the idea that this would run as a service or, preferably, as a scheduled job was a given. It's less complicated on a Linux-based system, so one solution is written in bash, but many of our systems run Windows Server, so I figured PowerShell was also wise.
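Jumping ahead a little to the scripts shown below, a hedged sketch of the scheduling (the paths, times, and task name are my own placeholders, not from my environment) might be a cron entry on the Linux side and a scheduled task on Windows:
0 6 * * * cd /opt/pemWatch && ./pemWatch.sh sh-google.com https://pki.goog/roots.pem
schtasks /Create /SC DAILY /ST 06:00 /TN pemWatch /TR "powershell.exe -ExecutionPolicy Bypass -File C:\pemWatch\pemWatch.ps1 ps-google.com https://pki.google.com/roots.pem"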
My first attempt was PowerShell, and that does the job, but for parity I wrote the bash version to produce the same output.
I'm modelling this on a Windows 10 system with Ubuntu on WSL2 for the bash shell. The intent is that the script can live in a central location and you'd run it from the folder where you want to capture and store the data/logs.
The command lines:
../pemWatch.sh sh-google.com https://pki.goog/roots.pem
..\pemWatch.ps1 ps-google.com https://pki.google.com/roots.pem
Yes, there is a reason why the URLs are different: it seems PowerShell handles the redirect better, because that URL delivers this when I use curl under Linux:
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://pki.goog/roots.pem">here</A>.
</BODY></HTML>
I guess things change on the Internet ;-) ...
As I have said before, and I will always remind you, I'm not a scripting guru, I just find a way.
Let's step through the bash script first:
#!/bin/bash

# accept the command-line arguments, falling back to defaults if absent
if [ -n "$1" ]
then
  name="$1"
else
  name=default-test
fi

if [ -n "$2" ]
then
  url="$2"
else
  url=https://pki.goog/roots.pem
fi
This accepts the command-line arguments and stuffs in defaults if they're not present. Theoretically this could be useful for monitoring any file out there you need to pay attention to, so...
function main {
  # make the name file-safe: replace periods with dashes
  rep=.
  with=-
  name="${name//$rep/$with}"
  # last recorded hash (empty on the first run)
  lastMd5=$(cat "${name}.md5" 2>/dev/null)
  # fetch the current file and hash it
  md5=$(curl -s "${url}" | md5sum)
  md5x=$(echo "${md5}" | cut -d' ' -f 1)
  if [[ "$lastMd5" != "$md5x" ]]
  then
    echo "$md5x $lastMd5"
    echo "$md5x" > "${name}.md5"
    curl -s "${url}" --output "${name}.root_pem" > /dev/null 2>&1
    triggerAlert
  else
    echo "$name :: $md5x"
  fi
}
The main function (and I do like the structured coding of functions) handles reformatting the name to be file-safe, replacing any periods with a dash. We move on to loading the last MD5 value from a file for reference as lastMd5, then we collect the most recent version of the file and compute its MD5 into the variable md5.
We need to tidy up the value by trimming the tail off it and throwing it into the variable md5x, and then we get to the comparison of it all. The key to this is the simple comparison of the lastMd5 variable with md5x; if they don't match, the file has changed. If the file has changed, we log it on-screen and into the $name.log file using the function triggerAlert.
function triggerAlert {
  echo "Alert!"
  date=$(date '+%Y-%m-%d %H:%M:%S')
  echo "ALERT:: $date :: $lastMd5 => $md5x" 1>> "${name}.log"
  # add any advanced handling here
}
This function, at this time, simply writes a log of the event but it could send an email or potentially trigger some webhook into something more meaningful.
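For example, a hedged sketch of a webhook call (the URL and JSON shape are purely placeholders, not anything from my environment) could drop into triggerAlert like this:
# hypothetical webhook endpoint - replace with your own
webhookUrl="https://example.com/hooks/pem-watch"
curl -s -X POST -H "Content-Type: application/json" \
  -d "{\"name\":\"${name}\",\"old\":\"${lastMd5}\",\"new\":\"${md5x}\",\"time\":\"${date}\"}" \
  "${webhookUrl}" > /dev/null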
The script wraps up with a call to main because the functions need to precede their call.
# -------------------- MAIN
main
You will see the parallels here in PowerShell to how the bash script was built. First we handle the command-line parameters.
param(
    [Parameter(Mandatory = $false, Position = 1)][string]$name = $null,
    [Parameter(Mandatory = $false, Position = 2)][string]$url = $null,
    [Parameter(Mandatory = $false)][switch]$reset = $null
)
As before the main function is where the bulk of the action happens. We ensure the parameters are present and valid, we normalize the name, then fetch the file into the variable $dl (download). We then call a function I found on the Internet to perform an MD5 hash on the file while in memory.
function main{
    # fall back to defaults if the parameters weren't supplied
    if($name + "" -eq ""){
        $name = "google.com"
    }
    if($url + "" -eq ""){
        $url = "https://pki.google.com/roots.pem"
    }
    # normalize the name so it's file-safe
    $name = $name.replace('.','-')
    $name = $name.replace(' ','-')
    $name = $name.ToLower()
    # fetch the file and hash it in memory
    $dl = Invoke-WebRequest -Uri $url -UseBasicParsing
    $md5 = Get-StringHash $dl
    # load the last recorded hash, if we have one
    if(Test-Path -Path $($name + ".md5") -PathType Leaf){
        $lastMD5 = Get-Content -Path $($name + ".md5")
    }
    if($md5 -ne $lastMD5){
        Set-Content -Path $($name + ".root_pem") -Value $dl
        Set-Content -Path $($name + ".md5") -Value $md5
        if($lastMD5 + "" -ne ""){
            triggerAlert $name $url
            write-host "ALERT! Checking on $name..."
        } else {
            write-host "New! No alert on $name..."
        }
    }
    if($reset){
        # -reset clears the collected files so the next run starts fresh
        if(Test-Path -Path $($name + ".root_pem") -PathType Leaf){
            $devNull = Remove-Item -Path $($name + ".root_pem") -Force
        }
        if(Test-Path -Path $($name + ".md5") -PathType Leaf){
            $devNull = Remove-Item -Path $($name + ".md5") -Force
        }
    } else {
        Set-Content -Path $($name + ".root_pem") -Value $dl
    }
    write-host "$($name.padRight(25,' ')) :: $md5" -ForegroundColor White
}
There's a -reset switch that I haven't added to the bash script; it clears the collected files, but that's an extra. The function below, Get-StringHash, will return the MD5 hash for use.
Function Get-StringHash
{
    param
    (
        [String] $String,
        $HashName = "MD5"
    )
    # hash the string (UTF-8 bytes) and return the hex digest
    $bytes = [System.Text.Encoding]::UTF8.GetBytes($String)
    $algorithm = [System.Security.Cryptography.HashAlgorithm]::Create($HashName)
    $StringBuilder = New-Object System.Text.StringBuilder
    $algorithm.ComputeHash($bytes) |
        ForEach-Object {
            $null = $StringBuilder.Append($_.ToString("x2"))
        }
    $StringBuilder.ToString()
}
Just like the bash script the following should be customized to meet your needs. Right now it writes a log just as the bash script does.
Function triggerAlert{
    param(
        [Parameter(Mandatory = $true, Position = 1)][string]$name = $null,
        [Parameter(Mandatory = $true, Position = 2)][string]$url = $null
    )
    #Send Email
    #Trigger Zabbix
    #Write to Log
    $logData = $(Get-Date).ToString('yyyy-MM-dd HH:mm:ss') + " :: " + $md5.PadLeft(20,' ')
    if(Test-Path -Path $($name + ".log") -PathType Leaf){
        Add-Content -Path $($name + ".log") -Value $logData
    } else {
        Set-Content -Path $($name + ".log") -Value $logData
    }
}
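If you wanted the email option, a hedged sketch (the SMTP server and addresses are placeholders I've invented, and note Send-MailMessage, while still present, is flagged as obsolete by Microsoft) might slot in where the #Send Email comment sits:
# hypothetical mail settings - adjust for your environment
Send-MailMessage -SmtpServer "smtp.example.com" `
    -From "pemwatch@example.com" -To "ops@example.com" `
    -Subject "pemWatch alert: $name changed" `
    -Body "The monitored file for $name changed. New MD5: $md5"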
And we cannot forget to call main.
main
I hope this helps you see the parallels between scripting languages: while we may need to learn the nuances of different languages, the logic can often remain the same.
My scripts, shared. Also, about that remaining one, I'll do this again in Python, or perhaps even Java or Rust, all languages I'm not terribly familiar with yet.
So, one of the facts of life is that we use INI files for configuration of some of our software, and we need to compare these files, or in my case ensure certain values are present. It's been a long-term goal of mine to find a way (build a tool) to do the heavy lifting.
This is the result (the screen capture that accompanied the original post)... At the point that capture was taken, the output went to screen alone. I've since added a text-based report and an HTML-format report at work, a challenge in itself. The real trick with this is that I use a template, an INI file, to configure what is matched or compared for the outcome report.
Using the template as the configuration tool for this means that the user has the control and they can customize based on their requirements.
This was my original plan: to offer matching (m:), compare (c:), and information (i:), but I dropped the information option as compare seems to serve the purpose. I haven't found a serious need for case sensitivity. That may change.
For now the features have been set and the coding is done. Okay, "done" is a strong word; I can already think of a few tweaks. I am the captain of Scope Creep. Software is never done, it's just released. :-)
The key to this script was arrays. Okay, arrays and hash tables. The first task, though, was to load each of the found files into hash tables. To do this I prepended the [section] headers to the keys as listed and filtered out the blanks, comments, and non-compliant lines:
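(The original snippet isn't reproduced here, but a minimal sketch of the idea, assuming a plain key=value INI layout and a hypothetical Read-IniFile helper name, might look like this:)
function Read-IniFile {
    param([string]$Path)
    $table = @{}
    $section = ""
    foreach($line in Get-Content -Path $Path){
        $line = $line.Trim()
        # skip blanks, comments, and anything that isn't [section] or key=value
        if($line -eq "" -or $line.StartsWith(";") -or $line.StartsWith("#")){ continue }
        if($line -match '^\[(.+)\]$'){
            $section = $Matches[1]
        } elseif($line -match '^([^=]+)=(.*)$'){
            # prepend the section so keys become "section.key"
            $key = $section + "." + $Matches[1].Trim()
            $table[$key] = $Matches[2].Trim()
        }
    }
    return $table
}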
Which cleans things up and makes it usable as a hash table.
There's another array I capture, the list of all of the unique section.key sets from all of the INI files so I know what all of the available options are. Technically this also captures the section.key sets from the template file too so if the set I'm looking for is missing from all files I'll have it in the list.
I read the template file into a hash table for compliance and an array for comparison. The template as interpreted looks like this:
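(I don't have the original capture handy, so the following is a purely hypothetical illustration of the shape, using the m: and c: designations described earlier; the section and key names are made up:)
database.server    m:PRODSQL01     (must match exactly)
database.timeout   c:              (compare and report the value found)
logging.level      m:INFO
logging.path       c: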
To generate the report I spin through the hash table of the template settings, comparing each entry, per file, against the master for compliance. This worked well to display the match success (Y/N) in columns. That was tossed out when I got to the comparisons. The other thing I tossed out was matching whole sections by specifying the m: designation on a [section].
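As a rough sketch of that loop (my own reconstruction under the assumptions above, with hypothetical variable names, not the actual vini.ps1 code):
# $template : hashtable of section.key -> designation (from the template INI)
# $iniFiles : hashtable of file name -> hashtable of section.key -> value
foreach($entry in $template.Keys){
    foreach($file in $iniFiles.Keys){
        $expected = $template[$entry]
        $actual   = $iniFiles[$file][$entry]
        if($expected.StartsWith("m:")){
            # match: the value must equal whatever follows m:
            $ok = ($actual -eq $expected.Substring(2))
            write-host "$file :: $entry :: match=$ok"
        } elseif($expected.StartsWith("c:")){
            # compare: just report the value found in each file
            write-host "$file :: $entry :: value=$actual"
        }
    }
}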
I added switches to enable filtering/specifying which files we use for the INI sources (Customer/Environment) and which template to use, considering different teams at work will have different needs. It will use a default template.ini if not specified, which is also customizable.
vini.ps1 -CustID <xxxx> -EnvID <test> [-template test.ini]
Oh, there's one more:
[-interactive] - which displays all the "work" that went into the results; that's mostly what I displayed above.
Validate INIs: vINI - My Cousin vINI?
How would you write this script? Don't tell me, just think about it. The methods I considered for this took some time, as did deciding how to present the report. Sure, text was the first step so I had data I could validate, but on-screen output has limited play. I used PowerShell, but there are other options of course. When generating the data on screen I took the time to use colour so it was easy to read. The text file doesn't use that, of course, but the HTML file uses CSS styling with colour.
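For the HTML side, a hedged sketch of the general approach (this is not the actual vini.ps1 code; the object shape, file name, and CSS are placeholders) could be built on ConvertTo-Html:
# hypothetical result objects - the real script builds these from the comparison
$results = @(
    [PSCustomObject]@{ File = "customer-test.ini"; Setting = "database.server"; Status = "Match" }
    [PSCustomObject]@{ File = "customer-test.ini"; Setting = "logging.level";   Status = "Missing" }
)
$style = "<style>table{border-collapse:collapse} td,th{border:1px solid #ccc;padding:4px}</style>"
$results | ConvertTo-Html -Head $style -Title "vINI Report" | Set-Content -Path "vini-report.html"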
The last thing I did was to send an email to the person executing the script; the text file and the HTML file are attached, and the email body uses the same HTML styling for continuity and readability. Sorry, I don't have an example handy to add here.