About Jon Bryan

Posts by Jon Bryan:

Entra ID API Based Inbound Provisioning and Complex Attribute Flows

I’ve been setting up Entra ID and Active Directory API-based inbound provisioning in my demo environment recently, using the PowerShell method described here: API-driven inbound provisioning with PowerShell script – Microsoft Entra ID | Microsoft Learn. I’ll split this post into two parts and focus only on Entra ID API inbound provisioning, as there are only a few differences for AD provisioning – e.g. different attributes and the setup of the agent/CloudSync:

Post setup: Customisation, Lessons Learnt and Troubleshooting

I carried out the initial setup of the enterprise app and managed identity according to the above-mentioned documentation.

Using the example files from the GitHub repo: https://github.com/AzureAD/entra-id-inbound-provisioning/tree/main/PowerShell, I created my own “HR” data file, one line of which is shown here:

I wanted to consume the employee HireDate, LeaveDate, Pronouns, TempLeave and UsageLocation attributes too. However, as these are not part of the default SCIM user schema, I extended my attribute mapping file with these extra mappings, using my domain as the identifier “urn:ietf:params:scim:schemas:extension:oholics:2.0:User”. The full mapping file is here:
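For anyone reading without the embedded file, the general shape of such a mapping looks roughly like the sketch below. The core attribute names follow the CSV2SCIM samples; the CSV column names and the extension block are illustrative, not my actual file:

# AttributeMapping.psd1 - trimmed, illustrative sketch (CSV column names are examples only)
@{
    externalId  = 'WorkerID'
    userName    = 'UserName'
    displayName = 'FullName'
    name        = @{
        givenName  = 'FirstName'
        familyName = 'LastName'
    }
    # Custom extension schema, keyed by my domain-based identifier
    'urn:ietf:params:scim:schemas:extension:oholics:2.0:User' = @{
        HireDate      = 'HireDate'
        LeaveDate     = 'LeaveDate'
        Pronouns      = 'Pronouns'
        TempLeave     = 'TempLeave'
        UsageLocation = 'UsageLocation'
    }
}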

To consume these custom attributes, you must add them to the Enterprise Application, in the Provisioning section. Tick “Show advanced options” and select “Edit attribute list for API”, add as shown:

Note that the PowerShell script has functionality to automate this process based on the headers of your CSV file, but given that there were only 4 attributes to define, I did it manually.

OK, on to running the PowerShell commands to see the results:

First import the Attribute Mapping file:

$AttributeMapping = Import-PowerShellDataFile .\AttributeMapping.psd1

Then validate the Mapping file and the input CSV file:
.\CSV2SCIM.ps1 -path .\UserSample.csv -AttributeMapping $AttributeMapping -ValidateAttributeMapping

Run the command to process the users in the CSV file (I suggest doing one user at a time until you are confident in your configuration):
.\CSV2SCIM.ps1 -path .\UserSample.csv -AttributeMapping $AttributeMapping -TenantId <MyEntraIDTenantID> -ServicePrincipalId <ObjectIDOfEnterpriseApplication>

Within a few seconds you should see that a new user is provisioned into Entra ID, with all the attributes set. OK all good! If not, check the provisioning log to see some ‘issues’ 😉

Note that my example CSV file has diacritics, as I wanted to see how the application dealt with them. A few of my initial runs (single-user provisioning) went without issue – generally those users without diacritics – but then I hit a few issues that had me stumped for a while.

The first error was presented in the provisioning log as:

The primary cause of this error was the encoding of the CSV file. Make sure that the file is saved as UTF-8; flipping between UTF-8 and ANSI results in: mastná : mastnÃ
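If you are not sure what your editor has done to the file, one simple way to force the encoding before each run (file names assumed) is to round-trip it through PowerShell:

# Re-save the HR CSV as UTF-8 so the diacritics survive the upload
Import-Csv -Path .\UserSample.csv |
    Export-Csv -Path .\UserSample-utf8.csv -Encoding UTF8 -NoTypeInformation

Note this won't repair a file that has already been mangled; it just stops the flip-flopping.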

Additionally, I had to modify some flow rules to remove diacritics – notably Mail, MailNickName and UserPrincipalName; these are detailed in the next section.

The second error was presented in the provisioning log as:


This one was a lot more annoying, but the solution was of course very simple! The error provided no clues, and the other provisioning logs in Entra ID did not yield anything useful, so I started picking through each attribute, comparing against a known-good example user whom I’d already provisioned. I output a JSON file for the users (good and bad) using the command:

.\CSV2SCIM.ps1 -path .\UserSample.csv -AttributeMapping $AttributeMapping > UserSample.json

I then picked through and copied values between the good and bad user JSON files and submitted them directly using Graph Explorer – see here for details: https://learn.microsoft.com/en-us/entra/identity/app-provisioning/inbound-provisioning-api-graph-explorer
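If you would rather not keep clicking around Graph Explorer, the same payload can be POSTed from PowerShell. This is a rough sketch only, assuming the bulkUpload endpoint and upload permission described in the linked article; the IDs and file name are placeholders:

# Rough sketch: submit the SCIM bulk request directly via Microsoft Graph
Connect-MgGraph -Scopes "SynchronizationData-User.Upload"
$servicePrincipalId = "<ObjectIDOfEnterpriseApplication>"
$jobId = "<ProvisioningJobId>"
$body = Get-Content .\UserSample.json -Raw

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/$servicePrincipalId/synchronization/jobs/$jobId/bulkUpload" `
    -ContentType "application/scim+json" `
    -Body $body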

After exhausting all ‘normal’ attributes, I copied the HireDate and LeaveDate from my good user example…. it worked!! WTH??? So, what was the difference?

The dates in my good user file happened to be in the format 04/03/2011, while the dates in my bad user file were in the format 19/04/2011 – so what is the problem?? The application expects US-formatted (MM/DD/YYYY) dates! In my good example the date can be read as either UK or US format, but in the bad user the date is unambiguously UK style. That was painful, especially as after every submission (even via Graph Explorer) you have to wait a few minutes for the success or failure message to appear.
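Rather than fixing dates by hand, the CSV can be rewritten before submission. A small sketch, assuming the source columns are UK-formatted (dd/MM/yyyy) and named HireDate/LeaveDate:

# Convert UK-style dates to US format before running CSV2SCIM
$users = Import-Csv .\UserSample.csv
foreach ($u in $users) {
    foreach ($col in 'HireDate', 'LeaveDate') {
        if ($u.$col) {
            $d = [datetime]::ParseExact($u.$col, 'dd/MM/yyyy', [System.Globalization.CultureInfo]::InvariantCulture)
            $u.$col = $d.ToString('MM/dd/yyyy')
        }
    }
}
$users | Export-Csv .\UserSample-fixed.csv -Encoding UTF8 -NoTypeInformation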

An example of the output for my test user (as provided in the sample CSV file) is shown (this one is fixed after debugging the issues with date etc.):

Defining more complex attribute flows

The default flows provided after configuring the application are OK, but they are fairly generic and miss some attributes that you’d typically want to populate (e.g. Usage Location).

Looking at the MSFT documentation https://learn.microsoft.com/en-us/entra/identity/app-provisioning/functions-for-customizing-application-data shows the general rules of the different expressions, but doesn’t provide much guidance on how to combine expressions to meet more complex requirements. I dug around, but couldn’t find any good examples, so I’ll provide some here.

My initial attempt at defining a user Display Name, built from a set of attributes only when they are present, took a while.

Example: Display Name should be: <FirstName>” “<Surname>”, “<Pronouns>”, “<Department> 

So the flow needs to check that the attributes are present, else you may end up with a Display Name that looks like: FirstName Surname,, Department

Initially, I tried using the Switch(IsPresent) expression, but kept getting null results, as it only seemed to evaluate the first attribute. I moved on to nested IIF, but again did not get the results that I wanted.

Digging through the issues on the GitHub page https://github.com/MicrosoftDocs/entra-docs/issues/120, I noted reference to https://learn.microsoft.com/en-us/entra/identity/app-provisioning/hr-user-creation-issues, where the use of IgnoreFlowIfNullOrEmpty was suggested. Initial testing of this method was good, with the following resultant flow rule, which joins the attributes only if they exist.

The green section takes the FirstName and Surname and joins them with a Space

The blue section Joins the green section, the Pronouns and the Department with a Comma

Join(", ", Join(" ", IgnoreFlowIfNullOrEmpty([name.givenName]), IgnoreFlowIfNullOrEmpty([name.familyName])), IgnoreFlowIfNullOrEmpty([urn:ietf:params:scim:schemas:extension:oholics:2.0:User:pronouns]), IgnoreFlowIfNullOrEmpty([urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department]))

Some more examples:

Email address/ UserPrincipalName:

The green section removes diacritics from the first name and surname, trims any spaces, sets them to lowercase (HR data cleansing) and joins them with a period (.)

The blue section Appends the domain name to the green section

Append(Join(".", Trim(ToLower(NormalizeDiacritics([name.familyName]), )), Trim(ToLower(NormalizeDiacritics([name.givenName]), ))), "@oholics.net")


MailNickName:

The green section removes diacritics from the userName and sets to lowercase (HR Data cleansing), where userName is defined in the input file as firstname.surname (with diacritics)

The blue section removes any suffixes/ characters after an @ symbol.

Replace(ToLower(NormalizeDiacritics([userName]), ), , "(?<Suffix>@(.)*)", "Suffix", "", , )


Display Name, with a nested IIF statement:

The green section takes the FirstName and Surname and joins them with a Space

The blue section Joins the green section, the Department and the orange section with a Comma

The orange section creates a “(C)” if the user is a Contractor and an “(E)” if the user is an Employee. If the data is missing or not one of those values, then that section of the display name is omitted.

Join(", ", Join(" ", IgnoreFlowIfNullOrEmpty([name.givenName]), IgnoreFlowIfNullOrEmpty([name.familyName])), IgnoreFlowIfNullOrEmpty([urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department]), IgnoreFlowIfNullOrEmpty(IIF([userType]="Contractor","(C)",IIF([userType]="Employee","(E)",""))))


Depending on how happy you are with your input HR data, you could go a bit crazy with data hygiene. From my FIM/MIM days, I got burned so many times by bad HR data that I got into the habit of always performing hygiene on my import flows. That’s all for now.

Can I copy an Entra ID Role? No!

I did this, so you don’t have to 🙂 I’d read that it wasn’t possible, but had to see what happened out of interest!

I tried to copy the Exchange Administrator role, by taking the existing permissions of that role and squirting them into a POSH script to create a custom role.

The result: many errors like the one shown, stating that the action is not supported on a custom role.

Below is a copy of the script showing all of the permissions that I had to remove (note all the commented permissions lines) before being able to create the role.
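The embedded script is long, so here is just a heavily trimmed sketch of the approach, using the Microsoft Graph PowerShell SDK; the display name and the actions shown are illustrative only, not the full list from the Exchange Administrator role:

# Minimal sketch: create a custom Entra ID role from a list of resource actions
Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"

$rolePermissions = @(
    @{
        allowedResourceActions = @(
            "microsoft.directory/users/basic/update"   # supported in custom roles
            # "microsoft.office365.exchange/..."       # Exchange service actions - not supported, remove
        )
    }
)

New-MgRoleManagementDirectoryRoleDefinition -DisplayName "Exchange Administrator (Copy)" `
    -Description "Attempted copy of the built-in role" `
    -RolePermissions $rolePermissions `
    -IsEnabled:$true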

For reference here is a link to MSFT docs that show what you CAN set in Entra ID custom roles: User management permissions for Microsoft Entra custom roles – Microsoft Entra ID | Microsoft Learn

Firewalla – Allowed IP Addresses for Ring Camera Devices

My initial setup of Ring cameras with my Firewalla was pretty lacklustre! They were isolated in a device group from all other networks, but had free outbound access to the internet. So, at first I got a few alerts for domains like ring.com and added these as allow rules. Everything was generally good, but I constantly got “unusual upload” alerts on the Firewalla for my Ring cameras; these were always Ireland-based IPs associated with Amazon. Each time I got an alert, I added it to the mute/exclusion list, but this was burdensome!

Then, as mentioned in my last post (https://blog.oholics.net/s3-amazonaws-com-dns-resolution-and-firewalla/), I started locking things down, including my Ring cameras, following the same process as I used for my PiHole. I had previously googled “allowed IP addresses for Ring cameras” and got the gist that there is no easy way.

After the success I had with my previous use of Target Lists on the Firewalla, I looked to use the same approach for this issue. After locking down the device group, I noted that the target IPs were nearly all EU West-based IPs for the AMAZON service, so I needed to add some more arguments to my jq query – I needed the ranges for the AMAZON service in eu-west-1, eu-west-2 and eu-west-3. To do so, I used the test argument, as follows:

curl https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.region|test("^eu-west.")) | select(.service=="AMAZON") | .ip_prefix'

I added those ranges (574 in total) to three Firewalla Target Lists; each can hold a maximum of 200 CIDR ranges.
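One way to chunk the jq output into 200-entry blocks ready for pasting (a quick sketch, assuming the ranges were saved to eu-west-ranges.txt):

# Split a list of CIDR ranges into 200-line chunks, one file per Firewalla Target List
$ranges = Get-Content .\eu-west-ranges.txt
$chunk = 0
for ($i = 0; $i -lt $ranges.Count; $i += 200) {
    $chunk++
    $ranges[$i..([math]::Min($i + 199, $ranges.Count - 1))] | Set-Content ".\TargetList-$chunk.txt"
}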

Then, I created rules to allow traffic from the Ring cameras group to the IP ranges in these Target Lists.

Then, after a few days, I checked to see what was still being blocked and noted a handful of US East-based Amazon IPs. Some were EC2 service ranges, so I grabbed them with:

curl https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.region|test("^us-east.")) | select(.service=="EC2") | .ip_prefix'

Then I noted that some ranges were from the AMAZON service, so I grabbed them too:

curl https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.region|test("^us-east.")) | select(.service=="AMAZON") | .ip_prefix'

However, after I had the files containing the ranges, I realised that there was some duplication between the sets; some Amazon services share IP ranges, and the AMAZON service list covers EC2 as well. I added these ranges to new Target Lists using the Firewalla web interface.
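A quick way to weed out the straight duplicates before creating the new lists (a sketch, with assumed file names; it only catches identical CIDR strings, not ranges nested inside larger ones):

# Drop EC2 ranges that already appear verbatim in the AMAZON list, then combine
$amazon = Get-Content .\us-east-amazon.txt
$ec2 = Get-Content .\us-east-ec2.txt
$ec2Only = $ec2 | Where-Object { $_ -notin $amazon }
($amazon + $ec2Only) | Sort-Object -Unique | Set-Content .\us-east-combined.txt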

Back on my phone, I added rules to allow traffic to the IP ranges in these allow lists:

And then added those same Target Lists to the mute list for Abnormal Uploads, targeting the Ring Cameras group only.


Since I made these changes, I’m no longer seeing any blocked outbound traffic from my Ring cameras or any alerts relating to Abnormal Uploads 🙂

s3.amazonaws.com DNS Resolution and Firewalla

A few years ago I bought a Firewalla Gold device to help provide some additional protection to my home network, but more importantly to control what my kids had access to and when they had internet access.

My initial setup was fairly basic, but did the job. I’ve also been running PiHole and Unbound on a Raspberry Pi Zero 2, which has been very effective at black-holing advertising and other undesirable DNS traffic.

Over the Christmas/new year holiday, I got hold of a Cisco switch that was capable of VLAN tagging, so I set about having a bit of a tidy-up and isolating devices/networks.

I made the following changes:

  • Created a VLAN for the PiHole, the Zero 2 is hardwired via an Amazon Firestick network adaptor.
  • Created a device group on the Firewalla, which only contains that Zero 2.
  • Allowed incoming port 53 connections from all internal networks/ VLANs to the Zero 2.
  • Blocked all inbound and outbound traffic from the PiHole VLAN.
  • Permitted port 53 outbound traffic (public) from the Zero 2.

Now that the network is isolated, I updated Gravity on the PiHole to see what was now blocked, then added allow rules for all desired traffic.

One allow rule added was for s3.amazonaws.com, as two of the blocklists were hosted in S3 buckets. That is the back story; now on to the problem and my solution.

The problem: When Gravity ran on the PiHole, it always failed to get the two blocklists that were hosted on s3.amazonaws.com, even though I had an allow rule on the Firewalla for that domain.

Why: Amazon services are massive and widely distributed! DNS resolution returns a different set of IPs every time you query. The image below shows my PiHole on the left and the Firewalla console on the right:

The Firewalla was supposed to learn the IPs of the domain to honour the allow rule. However, after making all the changes requested in my support ticket with Firewalla, the issue remained. Note that some IPs were trusted and (most) others were not, see:


So the fix… Initially I figured that I would just pick a “trusted” IP and add it to the PiHole local DNS. That worked, but wasn’t very resilient! So, I then looked at how I could tell the Firewalla to trust all IPs associated with the S3 namespace.

How to do that? Amazon provide a good article on identifying the IP addresses used for their services here: https://repost.aws/knowledge-center/s3-find-ip-address-ranges. Using this method, I obtained 298 IP ranges in CIDR format. I then added these to Firewalla Target Lists via the browser interface.
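For reference, the same ip-ranges.json data can be pulled with PowerShell instead of jq; a sketch that grabs the published S3 service ranges:

# Pull the published S3 CIDR ranges from Amazon's ip-ranges.json
$ipRanges = Invoke-RestMethod -Uri "https://ip-ranges.amazonaws.com/ip-ranges.json"
$s3Ranges = $ipRanges.prefixes | Where-Object { $_.service -eq "S3" } | Select-Object -ExpandProperty ip_prefix
$s3Ranges.Count   # the number of ranges changes over time
$s3Ranges | Set-Content .\s3-ranges.txt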

Then, back on my phone, I added an allow rule for each set of Target Lists:

Now I can freely get the block lists from any s3.amazonaws.com IP address 😉

Sure, I have added a lot of “trusted IPs” – a lot more than planned – but these rules apply only to that device and are outbound only.

Creating Azure AD Service Principals and Managing Roles

On a recent project, I needed a reliable and repeatable method of creating Azure AD service principals for use with Azure DevOps and Azure Sentinel, among other things. I also needed to apply Azure roles to these service principals at different levels of the hierarchy, be that root management group, sub-management group or subscription. All examples assume that the Az module is already installed.

Create the service principal:
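A minimal sketch of the creation step (the display name is just an example):

# Create the service principal; an app registration is created alongside it
$sp = New-AzADServicePrincipal -DisplayName "svc-devops-deploy"

# Keep hold of these for the role assignments below
$sp.Id      # ObjectId of the service principal
$sp.AppId   # Application (client) ID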

For the SP that I created for the DevOps team, I needed to give it the Owner role at the root level:

New-AzRoleAssignment -ObjectId "<ObjectIDOfSP>" -Scope "/" -RoleDefinitionName "Owner"

Additionally, I discovered that if you delete an SP prior to removing its roles, you end up with orphaned references in the resource-level role assignments. Where these were inherited from the root level and I had no GUI visibility of that level, I had to use PowerShell to tidy up. Assuming that you don’t have a record of the ObjectID of the deleted SP, get all role assignments with:

Get-AzRoleAssignment | Select-Object -Property DisplayName, ObjectID, RoleDefinitionName, Scope

Find the object whose scope is “/”, whose role is Owner and which has no DisplayName. That is the orphaned object; grab its ObjectID and remove it with:

Remove-AzRoleAssignment -ObjectId "<ObjectIDOfSP>" -Scope "/" -RoleDefinitionName "Owner"
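To save eyeballing the full list, the orphaned entries can also be picked out with a filter; a sketch, assuming the Owner role at root scope as above:

# Orphaned assignments come back with an empty DisplayName (the principal no longer exists)
$orphans = Get-AzRoleAssignment -Scope "/" |
    Where-Object { [string]::IsNullOrEmpty($_.DisplayName) -and $_.RoleDefinitionName -eq "Owner" }

$orphans | ForEach-Object {
    Remove-AzRoleAssignment -ObjectId $_.ObjectId -Scope $_.Scope -RoleDefinitionName $_.RoleDefinitionName
}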

Scripted Provisioning of Office 365 Unified Labels

I’ve recently been working on a project implementing O365 Unified Labels, in a greenfield scenario where programmatic provisioning of the configuration was required.

Some of the Microsoft documentation covering how to configure Unified Labels via PowerShell is good, while other parts are very weak. Take for example the Set-Label cmdlet with the -Examples switch:

Set-Label -Examples

OK, that isn’t very helpful 🙂

Additionally, the online docs (e.g. https://docs.microsoft.com/en-us/powershell/module/exchange/policy-and-compliance/set-label?view=exchange-ps) fail to reference some of the cmdlet parameters.

If we look at the parameters from the command “Get-Help Set-Label -Detailed”, we see:

Set-Label-Parameters

So the parameters that I wanted to set were LabelActions and Conditions. LabelActions configure headers, footers and watermarks, while Conditions define the O365 Sensitivity Types that are applied to a label.

The documentation for how to do this was non-existent; apart from some cryptic “Exchange” docs detailing how to define a “MultiValuedProperty”, I was fairly stumped. I ended up reverse engineering the configuration by setting it in the GUI, then capturing the label in PowerShell. Once captured, look at the configuration of “Conditions” or LabelActions to see how those properties are defined in the case of Unified Labelling.
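The capture itself is just a case of connecting to Security & Compliance PowerShell and reading the properties back; for example (the UPN is a placeholder):

# Connect to Security & Compliance PowerShell and inspect a GUI-built label
Connect-IPPSSession -UserPrincipalName admin@oholics.net

# LabelActions and Conditions hold the JSON that the GUI generated
Get-Label -Identity "My Label" | Format-List DisplayName, LabelActions, Conditions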

The following script details how this all works together to create something useful. It provisions a new label named “My Label”, with a green colour. It then applies a header “HeaderText”, a footer “FooterText” and a watermark “WatermarkText”, all in black and font size 10. Lastly, it applies the O365 sensitivity types “ABA Routing Number” and “Argentina National Identity (DNI) Number” to the label, in Recommended mode.

Also covered for reference is the creation of a sub-label “My Sub Label”, beneath “My Label”.
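If you can’t see the embedded script, the skeleton is roughly as follows; the captured LabelActions/Conditions JSON is omitted ($labelActions and $conditions stand in for it), and the colour value and ParentId usage are illustrative:

# Skeleton only - $labelActions and $conditions hold the JSON captured from a GUI-built label
New-Label -DisplayName "My Label" -Name "My Label" -Tooltip "Example label"
Set-Label -Identity "My Label" -AdvancedSettings @{color = "#00A36C"}
Set-Label -Identity "My Label" -LabelActions $labelActions -Conditions $conditions

# Sub-label, parented to "My Label"
New-Label -DisplayName "My Sub Label" -Name "My Sub Label" -Tooltip "Example sub-label" -ParentId "My Label"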

Once the labels are defined, we need to publish them with a policy. First create the policy, providing the label names and scope, then apply any required advanced settings to the policy.

Note that the script below assumes the last session was ended, so we need to log in again – otherwise just continue the previous session.
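For those without the embed, the publishing step looks roughly like this (policy name is an example, scoping of the policy is left out, and the advanced setting shown is the one discussed below):

# Re-connect if the previous session was closed
Connect-IPPSSession -UserPrincipalName admin@oholics.net

# Publish the labels with a policy, then apply advanced settings to the policy
New-LabelPolicy -Name "My Label Policy" -Labels "My Label", "My Sub Label"
Set-LabelPolicy -Identity "My Label Policy" -AdvancedSettings @{RequireDowngradeJustification = "true"}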

Finally, the documentation states that label priority follows this rule: “A lower integer value indicates a higher priority, the value 0 is the highest priority“. However, in practice the opposite is true.

Say, for example, you have the labels “Public”, “Internal” and “Secret”. For the advanced setting “RequireDowngradeJustification” to apply as expected, following the documentation you would set “Secret” = 0, “Internal” = 1 and “Public” = 2. This actually has the opposite effect: a downgrade from Secret to Public does not raise the justification dialog box, while going from Public to Secret is classed as a downgrade; the order of labels in the toolbar is also the wrong way around. So the proper order should be: “Public” = 0, “Internal” = 1 and “Secret” = 2.

Additionally, the priority can get quite messed up if you have any existing labels or if you deploy the labels in the wrong order. Continuing from my example, but also throwing in 2 sub labels per top level label….

First connect (or continue the existing session), then get the current priorities. If they don’t match the output shown in the script, start fixing them! Begin by interactively running the priority settings for the top-level labels (only those that are not correct), starting with the highest values and working down. Check the priorities after each change.

Once the top-level labels are correct, start fixing the sub-labels (assuming they are not right). Reset them individually, again setting the highest value first, and check the priorities after each change. Rinse and repeat until the order is as desired, then go have a G & T 🙂
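In practice the fixing loop is just reading the priorities and re-setting them until everything lines up; a sketch, using the three-label example (the values will differ once sub-labels are in the mix):

# Check the current order
Get-Label | Sort-Object Priority | Format-Table Priority, DisplayName

# Fix a label that is out of order (highest value first, then work down)
Set-Label -Identity "Secret" -Priority 2

# Re-check after each change
Get-Label | Sort-Object Priority | Format-Table Priority, DisplayName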

My Very Own CA, How Sweet :)

Just splurging this down here for next time, as I had to go trawling for this info in various old text files today… Very related to https://blog.oholics.net/creating-simple-ssl-certificates-for-server-authentication-using-openssl/, but using my own CA rather than an enterprise or public CA.

I was working in my lab today to set up SLDAP on my lab domain controller. I was doing this to validate the syntax of ldapsearch on a Ubuntu machine in different cases, and also to see if I could determine the reason for a particular error I was seeing (see https://blog.oholics.net/ldapsearch-syntax-for-simple-ldap-and-sldap/).

I wanted to KISS (keep it simple, stupid), so was going to use the root CA that I set up with OpenSSL a few years ago (running on my Windows machine).

Back then I ran the following commands to create the “top level” Root CA certificate and Private Key:

openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem

To generate my domain controller certificate today, I used (with an edited openssl.conf file of course):

openssl genrsa -out dc.oholics.net.key 2048
openssl req -new -key dc.oholics.net.key -out dc.oholics.net.csr
openssl x509 -req -in dc.oholics.net.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out dc.oholics.net.crt -days 500 -sha256
openssl pkcs12 -export -out dc.oholics.net.pfx -inkey dc.oholics.net.key -in dc.oholics.net.crt -certfile dc.oholics.net.crt

On the domain controller, I installed the dc.oholics.net.pfx file into the computer personal store and the rootCA.pem into the computer trusted root certification authorities store. Reboot and done..

###################################################################################

Minor edit…. I originally created the root certificate a rather long time ago… Today I discovered it had expired, thus the few certificates issued by it were also fubared.

Simple fix (where I don’t have to publish anything very far or wide):

Regenerate the rootCA certificate using the original key:

openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 10240 -out rootCA.pem

Then start re-issuing those certificates that I was actually using (again using the keys and CSRs previously used):

openssl x509 -req -in MyImportantCert.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out MyImportantCert.crt -days 5000 -sha256

openssl pkcs12 -export -out MyImportantCert.pfx -inkey MyImportantCert.key -in MyImportantCert.crt -certfile MyImportantCert.crt

Note the extra 0’s on the number of valid days, shouldn’t have to do this again for a good while 🙂

Ldapsearch Syntax for Simple LDAP and SLDAP

Another case of “I’ve done this before, but never wrote it down”, so revisiting this took far longer than it should have. But now it is here, that won’t happen again.. right?? I’ll probably never need it again now… typical..

OK, so a straightforward non-secure ldapsearch command, which obtains everything (-h can be an IP or FQDN):

ldapsearch -h 192.168.1.201 -p 389 -b "DC=oholics,DC=net" -D "CN=svc-LDAPBind,OU=ServiceAccounts,DC=oholics,DC=net" -w "<MyPass>"

A secure ldapsearch command, using TLS on port 389, obtains everything (Note the use of the -Z switch and the use of FQDN):

ldapsearch -h dc.oholics.net -p 389 -b "DC=oholics,DC=net" -D "CN=svc-LDAPBind,OU=ServiceAccounts,DC=oholics,DC=net" -w "<MyPass>" -Z

A secure ldapsearch command, using SSL on port 636, obtains everything (note the use of -H and the LDAP Uniform Resource Identifier):

ldapsearch -H ldaps://dc.oholics.net:636 -b "DC=oholics,DC=net" -D "CN=svc-LDAPBind,OU=ServiceAccounts,DC=oholics,DC=net" -w "<MyPass>"

These commands all work just fine. Just for fun, let’s make the last query find something in particular – look for a user account by its DN:

ldapsearch -H ldaps://dc.oholics.net:636 -b "DC=oholics,DC=net" -D "CN=svc-LDAPBind,OU=ServiceAccounts,DC=oholics,DC=net" -w "<MyPass>" "(&(objectclass=User)(distinguishedName=CN=John E Smoke,OU=Users,DC=oholics,DC=net))"

Now for some errors!

For both TLS (on port 389 with -Z) and SSL (on port 636), using the IP as the host (-h or -H) fails. You MUST use the FQDN of the target system. Why? Because the certificate on the DC only refers to the FQDN of the server.

SSL/636 – The error “Can’t contact LDAP server (-1)” was really stumping me, as there is little to go on in the error message. A network capture just shows the handshake start, but the DC ultimately says “Go away!” – it resets the connection attempt.

A few things learnt:

1. Using -h FQDN and -p 636 results in Can’t contact LDAP server (-1) (the URI method above must be used)

ldapsearch -h dc.oholics.net -p 636 -b "DC=oholics,DC=net" -D "CN=svc-LDAPBind,OU=ServiceAccounts,DC=oholics,DC=net" -w "<MyPass>"
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

2. Using -h IP Address and -p 636 results in Can’t contact LDAP server (-1)

ldapsearch -h 192.168.1.201 -p 636 -b "DC=oholics,DC=net" -D "CN=svc-LDAPBind,OU=ServiceAccounts,DC=oholics,DC=net" -w "<MyPass>"
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

3. Using -H with IP Address in URI and -p 636 results in Can’t contact LDAP server (-1)

ldapsearch -H ldaps://192.168.1.201:636 -b "DC=oholics,DC=net" -D "CN=svc-LDAPBind,OU=ServiceAccounts,DC=oholics,DC=net" -w "<MyPass>"
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

Additionally, for a TLS connection, using the IP address of the DC resulted in a different, but much more helpful, error message:

ldapsearch -h 192.168.1.201 -p 389 -b "DC=oholics,DC=net" -D "CN=svc-LDAPBind,OU=ServiceAccounts,DC=oholics,DC=net" -w "<MyPass>" -Z
ldap_start_tls: Connect error (-11)
additional info: TLS: hostname does not match CN in peer certificate

Also, where a Domain Controller has the setting “Domain controller: LDAP server signing requirements” set to Require signing, trying to initiate an insecure LDAP query with ldapsearch fails as follows:

ldapsearch -h 192.168.1.201 -p 389 -b "DC=oholics,DC=net" -D "CN=svc-LDAPBind,OU=ServiceAccounts,DC=oholics,DC=net" -w "<MyPass>"
ldap_bind: Strong(er) authentication required (8)
additional info: 00002028: LdapErr: DSID-0C090257, comment: The server requires binds to turn on integrity checking if SSL\TLS are not already active on the connection, data 0, v2580

Well that was a fun day 🙂

Backup and Clear Domain Controller Security Event Logs

A post related to https://blog.oholics.net/logparser-loves-security-logs/, for Case 3.

If you don’t manage security logs by regularly backing them up and clearing them, you risk losing important historical information. Additionally, running a LogParser query against a large, unmanaged security event log takes a long time.

The below script is designed to be run daily, at the end of the day, to back up the security event log on a Domain Controller and then clear its contents. The logs are also archived off to two Windows shares to allow for long-term storage.

The script makes use of Jaap Brasser’s DeleteOld script (https://gallery.technet.microsoft.com/scriptcenter/Delete-files-older-than-x-13b29c09) to carry out tidy up operations of the local staging folder. In practice, I used the same script to manage the archive folders too, keeping 365 days worth of logs.

Usage: .\BACKUP_AND_CLEAR_EVENTLOGS.ps1 <DomainController> $clear
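If the embedded script doesn’t load, the core logic is roughly the sketch below; paths and share names are placeholders, and the DeleteOld tidy-up of the staging folder is omitted:

# Rough sketch: export the Security log from a DC, archive it, then clear it
param(
    [Parameter(Mandatory = $true, Position = 0)][string]$DomainController,
    [Parameter(Position = 1)][bool]$Clear = $false
)

$stamp = Get-Date -Format "yyyyMMdd-HHmmss"
$remotePath = "C:\EventLogBackups\Security-$stamp.evtx"                          # written on the DC itself
$uncPath = "\\$DomainController\c$\EventLogBackups\Security-$stamp.evtx"

# Export the Security event log; with /r: the output path is relative to the remote machine
wevtutil epl Security $remotePath /r:$DomainController

# Archive to two shares for long-term storage
Copy-Item $uncPath "\\archive1\SecurityLogs\"
Copy-Item $uncPath "\\archive2\SecurityLogs\"

# Clear the log once the backup has been taken
if ($Clear) {
    wevtutil cl Security /r:$DomainController
}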

Make sure that the security event log maximum size is increased to a high enough level to ensure that none of the day’s logs get overwritten. Judging that size will depend on the number of events per day; alternatively, just set the log to “do not overwrite events”.

Note: the event IDs are purely made up 😉

Enumerate Azure Role Assignments

The following script can be used to enumerate role assignments for a subscription and role assignments for Resource Groups within that subscription.

Use as-is to just grab everything – note that two subscriptions are used in the example – fix the subscription GUIDs on lines 6 & 7.

Optionally un-comment the references to -SignInName “Jon@oholics.onmicrosoft.com” to obtain a report showing only those resources that refer to the named user.

The resulting report can be opened in Excel, to produce a nice table 😉
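For anyone who wants the gist without downloading anything, here is a minimal sketch of the approach (subscription IDs and the output path are placeholders; populate $filter to scope the report to one user, as described above):

# Minimal sketch: dump subscription- and resource-group-level role assignments to CSV
$subscriptions = @(
    "00000000-0000-0000-0000-000000000001",
    "00000000-0000-0000-0000-000000000002"
)

$filter = @{}   # e.g. $filter = @{ SignInName = "Jon@oholics.onmicrosoft.com" }

$report = foreach ($sub in $subscriptions) {
    Set-AzContext -SubscriptionId $sub | Out-Null

    # Subscription-level assignments
    Get-AzRoleAssignment -Scope "/subscriptions/$sub" @filter |
        Select-Object DisplayName, SignInName, RoleDefinitionName, Scope

    # Resource-group-level assignments within this subscription
    foreach ($rg in Get-AzResourceGroup) {
        Get-AzRoleAssignment -ResourceGroupName $rg.ResourceGroupName @filter |
            Select-Object DisplayName, SignInName, RoleDefinitionName, Scope
    }
}

$report | Export-Csv .\RoleAssignments.csv -NoTypeInformation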