
Scripted Provisioning of Office 365 Unified Labels

I’ve recently been working on a project implementing O365 Unified Labels, in a greenfield scenario where programmatic provisioning of the configuration was required.

Some of the Microsoft documentation covering how to configure Unified Labels via PowerShell is good, while other parts are very weak. Take for example the Set-Label cmdlet with the -Examples switch:

Set-Label -Examples

OK, that isn’t very helpful 🙂

Additionally, the online docs (e.g. https://docs.microsoft.com/en-us/powershell/module/exchange/policy-and-compliance/set-label?view=exchange-ps) fail to reference some of the cmdlet parameters.

If we look at the parameters from the command “Get-Help Set-Label -Detailed”, we see:

[Screenshot: Get-Help Set-Label -Detailed parameter list]

So the parameters that I wanted to set were LabelActions and Conditions. LabelActions configure headers, footers and watermarks, while Conditions define the O365 Sensitivity Types that are applied to a label.

The documentation for how to do this was non-existent; apart from some cryptic “Exchange” docs detailing how to define a “MultiValuedProperty”, I was fairly stumped. I ended up reverse engineering the configuration by setting it in the GUI, then capturing the label in PowerShell. Once captured, look at the configuration of “Conditions” or “LabelActions” to see how those properties are defined in the case of Unified Labelling.
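
For example, after building a label in the GUI (a minimal sketch; the label name is illustrative, and I’m assuming a Security & Compliance session via the ExchangeOnlineManagement module):

Connect-IPPSSession
$label = Get-Label -Identity "GUI Built Label"
$label.LabelActions   # JSON strings describing the header/footer/watermark actions
$label.Conditions     # JSON describing the sensitivity type conditions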

The following script details how this all works together to create something useful. It provisions a new label named “My Label”, with a green colour. Then it applies a header “HeaderText”, a footer “FooterText” and a watermark “WatermarkText”, all in black and font size 10. Lastly it applies the O365 sensitivity types “ABA Routing Number” and “Argentina National Identity (DNI) Number” to the label, in Recommended mode.

Also covered for reference is the creation of a sub-label “My Sub Label”, beneath “My Label”.

Once the labels are defined, we need to publish them with a policy. First create the policy, providing the label names and scope, then apply any required advanced settings to the policy.

Note that the script below assumes the last session was ended, so we need to log in again – otherwise just continue the previous session.
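
Here is a sketch of that flow. Treat it as indicative rather than definitive: the LabelActions JSON keys are based on what I captured from a GUI-built label, and the Conditions JSON is omitted because it is best captured the same way:

# Connect to the Security & Compliance endpoint (ExchangeOnlineManagement module,
# or your preferred connection method)
Connect-IPPSSession

# Create the label, with a green colour set via advanced settings
New-Label -Name "MyLabel" -DisplayName "My Label" -Tooltip "Demo label" -AdvancedSettings @{color = "#008000"}

# Header, footer and watermark actions - black text, font size 10.
# Verify the JSON keys against a captured label in your own tenant.
$header = '{"Type":"applyheader","Settings":[{"Key":"text","Value":"HeaderText"},{"Key":"fontsize","Value":"10"},{"Key":"fontcolor","Value":"#000000"}]}'
$footer = '{"Type":"applyfooter","Settings":[{"Key":"text","Value":"FooterText"},{"Key":"fontsize","Value":"10"},{"Key":"fontcolor","Value":"#000000"}]}'
$watermark = '{"Type":"applywatermarking","Settings":[{"Key":"text","Value":"WatermarkText"},{"Key":"fontsize","Value":"10"},{"Key":"fontcolor","Value":"#000000"}]}'
Set-Label -Identity "MyLabel" -LabelActions $header, $footer, $watermark

# The Conditions ("ABA Routing Number" and "Argentina National Identity (DNI) Number",
# in Recommended mode) are applied the same way, via the -Conditions parameter,
# using JSON captured from a GUI-built label.

# Create the sub-label beneath "My Label"
New-Label -Name "MySubLabel" -DisplayName "My Sub Label" -Tooltip "Demo sub-label" -ParentId "MyLabel"

# Publish the labels with a policy, then apply any advanced settings to it
New-LabelPolicy -Name "My Label Policy" -Labels "MyLabel", "MySubLabel"
Set-LabelPolicy -Identity "My Label Policy" -AdvancedSettings @{RequireDowngradeJustification = "true"}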

Finally, the documentation states that label priority follows this rule: “A lower integer value indicates a higher priority, the value 0 is the highest priority”. However, in practice the opposite is true.

Say for example you have the labels “Public”, “Internal” and “Secret”; for the advanced setting “RequireDowngradeJustification” to apply as expected, following the documentation you would set “Secret” = 0, “Internal” = 1 and “Public” = 2. This actually has the opposite effect: a downgrade from Secret to Public does not raise the justification dialog box, while going from Public to Secret is classed as a downgrade, and the order of labels in the toolbar is the wrong way around. So the proper order should be: “Public” = 0, “Internal” = 1 and “Secret” = 2.

Additionally, the priority can get quite messed up if you have any existing labels, or if you deploy the labels in the wrong order. Continuing from my example, but also throwing in two sub-labels per top-level label…

First connect (or continue the existing session), then get the current priorities. If they don’t match the output shown in the script, then start fixing them! Start by interactively running the priority settings for the top level labels (only do those that are not correct), starting with the highest values and working down. Check the priorities after each change.

Once the top level labels are correct, start fixing the sub labels (assuming they are not right). Reset them individually, again setting the highest value first, check the priorities after each change. Rinse and repeat until the order is as desired, then go have a G & T 🙂
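
A sketch of that interactive fix-up, using the three example labels (check the output after every change):

# Connect (or continue the existing session), then review the current order
Get-Label | Select-Object Priority, DisplayName

# Fix the top-level labels, highest value first, checking after each change
Set-Label -Identity "Secret" -Priority 2
Get-Label | Select-Object Priority, DisplayName
Set-Label -Identity "Internal" -Priority 1
Get-Label | Select-Object Priority, DisplayName
Set-Label -Identity "Public" -Priority 0
Get-Label | Select-Object Priority, DisplayName

# Sub-labels are fixed individually in the same way, again highest value first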

Delegating Group Management – Using the Lithnet FIM PowerShell Module

Within my AD structure, group management is delegated within certain OUs; I now need to replicate that functionality in the FIM portal.

There is no real way of identifying which groups should be managed by whom, except by the OU within which the group currently resides.

So, to start off with, I need to get the parent OU of the group into the portal:

Import the OU into the MV:
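
In my case this is an advanced import flow in the AD MA extension; a minimal sketch of the flow rule logic (attribute names are illustrative):

Public Sub MapAttributesForImport(ByVal FlowRuleName As String, ByVal csentry As CSEntry, ByVal mventry As MVEntry) Implements IMASynchronization.MapAttributesForImport
    Select Case FlowRuleName
        Case "adOU"
            ' Strip the leading RDN from the group DN, leaving the parent OU
            Dim dn As String = csentry.DN.ToString()
            mventry("adOU").Value = dn.Substring(dn.IndexOf(",") + 1)
    End Select
End Sub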

Set up an export flow for adOU into the portal.

Then, by using the Lithnet PowerShell module, we can create all the sets and MPRs required. Below is a sample for creating one delegated “collection”; in production, my XML file is much bigger, delegating group management to around ten different groups.

Note that you first need to create references to all users who might be given the rights to manage groups. This includes the FimServiceAdmin and FimServiceAccount, referenced by their ObjectID; the others are referenced by their AccountName. All members referenced in this section are added to the __Set:GroupValidationBypassSet. This set is referenced in the non-administrators set – as “not in this set” – which bypasses the group validation workflow:

[Screenshot: AllNonAdministratorsSet definition]

Create a set of groups to be managed – the filter being the OU that the groups belong to, and MembershipLocked = False.

Create a set of administrators for this delegation – adding the explicit members

Then create the two MPRs to allow the members of the administrative set to manage those groups – the first MPR allows modification (Read, Add and Remove) of the ExplicitMember attribute, while the second allows creation and deletion.
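
Here is a trimmed sketch of one such XML file. I’m quoting the schema from memory, so verify element names and the reference syntax against the Lithnet wiki; the OU, display names and account are illustrative:

<Lithnet.ResourceManagement.ConfigSync>
  <Operations>
    <!-- The set of groups to be managed, filtered on OU and MembershipLocked -->
    <ResourceOperation operation="Add Update" resourceType="Set" id="DeptXGroups">
      <AnchorAttributes>
        <AnchorAttribute>DisplayName</AnchorAttribute>
      </AnchorAttributes>
      <AttributeOperations>
        <AttributeOperation operation="replace" name="DisplayName">_Set: DeptX Managed Groups</AttributeOperation>
        <AttributeOperation operation="replace" name="Filter" type="xpath-filter">/Group[adOU = 'OU=Groups,OU=DeptX,DC=oholics,DC=net' and MembershipLocked = false]</AttributeOperation>
      </AttributeOperations>
    </ResourceOperation>
    <!-- The set of administrators for this delegation, with explicit members -->
    <ResourceOperation operation="Add Update" resourceType="Set" id="DeptXGroupAdmins">
      <AnchorAttributes>
        <AnchorAttribute>DisplayName</AnchorAttribute>
      </AnchorAttributes>
      <AttributeOperations>
        <AttributeOperation operation="replace" name="DisplayName">_Set: DeptX Group Admins</AttributeOperation>
        <AttributeOperation operation="add" name="ExplicitMember" type="ref">Person|AccountName|jsmith</AttributeOperation>
      </AttributeOperations>
    </ResourceOperation>
    <!-- MPR granting the admins Read/Add/Remove on ExplicitMember of the managed
         groups; the second MPR (Create/Delete) follows the same pattern -->
    <ResourceOperation operation="Add Update" resourceType="ManagementPolicyRule" id="DeptXGroupMemberMPR">
      <AnchorAttributes>
        <AnchorAttribute>DisplayName</AnchorAttribute>
      </AnchorAttributes>
      <AttributeOperations>
        <AttributeOperation operation="replace" name="DisplayName">MPR: DeptX Group Membership Management</AttributeOperation>
        <AttributeOperation operation="replace" name="ManagementPolicyRuleType">Request</AttributeOperation>
        <AttributeOperation operation="replace" name="GrantRight">true</AttributeOperation>
        <AttributeOperation operation="add" name="ActionType">Read</AttributeOperation>
        <AttributeOperation operation="add" name="ActionType">Add</AttributeOperation>
        <AttributeOperation operation="add" name="ActionType">Remove</AttributeOperation>
        <AttributeOperation operation="add" name="ActionParameter">ExplicitMember</AttributeOperation>
        <AttributeOperation operation="replace" name="PrincipalSet" type="xmlref">DeptXGroupAdmins</AttributeOperation>
        <AttributeOperation operation="replace" name="ResourceCurrentSet" type="xmlref">DeptXGroups</AttributeOperation>
        <AttributeOperation operation="replace" name="ResourceFinalSet" type="xmlref">DeptXGroups</AttributeOperation>
        <AttributeOperation operation="replace" name="Disabled">false</AttributeOperation>
      </AttributeOperations>
    </ResourceOperation>
  </Operations>
</Lithnet.ResourceManagement.ConfigSync>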

Use Import-RMConfig -File <PathToXML> -Preview -Verbose to validate your XML and see what it would do. Drop “-Preview” to make the changes.

An Alternative To Using The Generic Array From File Function

While looking to improve on my method of getting exceptions, or a long list of mail suffixes, into an array to be checked during code execution, I came across this: https://msdn.microsoft.com/en-us/library/windows/desktop/ms696048(v=vs.85).aspx

This seemed to me to be a really nice solution: define all exceptions and suffixes within one file, read it in on code execution, then check for existence (or whatever) in the code.

So, given the following xml file:
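
The original file isn’t reproduced here, but it was of this general shape (element names and values are illustrative):

<?xml version="1.0" encoding="utf-8"?>
<Config>
  <Exceptions>
    <Exception>seconded.user@othertenant.org</Exception>
  </Exceptions>
  <Suffixes>
    <Suffix>@oholics.net</Suffix>
    <Suffix>@subsidiary.oholics.net</Suffix>
  </Suffixes>
</Config>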

Add the System.Xml Import and declare the variables, so they are global:
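
Something like this, at the top of the MA extension (variable names are mine, for illustration):

Imports System.Xml

Public Class MAExtensionObject
    Implements IMASynchronization

    ' Globals, populated once in Initialize and then read by the flow rules
    Dim exceptionList As New List(Of String)
    Dim suffixList As New List(Of String)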

Add the code to read the xml file into the Initialize Sub:
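
Continuing the sketch (the config file path is illustrative):

Public Sub Initialize() Implements IMASynchronization.Initialize
    ' Load the config file once, when the dll is loaded
    Dim doc As New XmlDocument()
    doc.Load("D:\FIMConfig\MAConfig.xml")
    For Each node As XmlNode In doc.SelectNodes("/Config/Exceptions/Exception")
        exceptionList.Add(node.InnerText)
    Next
    For Each node As XmlNode In doc.SelectNodes("/Config/Suffixes/Suffix")
        suffixList.Add(node.InnerText)
    Next
End Sub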

Then, when you wish to look for those values within those variables – just like in the last post:
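
For example, to check a user’s suffix and any per-user exception (variable names illustrative):

If Not suffixList.Contains(mailSuffix) AndAlso Not exceptionList.Contains(emailAddress) Then
    Throw New Exception("Unexpected mail suffix for: " & emailAddress)
End If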

An Update on my Generic Array From File post

In this post: https://blog.oholics.net/a-generic-array-from-file-function-to-cope-with-inevitable-exceptions/, I documented a method of generating an array of values from a text file.

While I was happy that this method worked, I was not entirely happy that I still had some hard-coded values in the code. However, the way the function operated meant that if I took my collection of mail suffixes (20+) and added them all to the text file, the array would be built for each and every user that passed through the dll – not very efficient!

So, I was looking for something a little more elegant. I was happy for the array to simply be defined when the dll was loaded.

Here is my solution:
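
(A sketch with illustrative paths; generateArrayFromFile is the function from the earlier post, reading a text file into a string array.)

Public Class MAExtensionObject_ADMA
    Implements IMASynchronization

    ' Declared at class level, so the arrays are built once when the dll is loaded
    ' and stay static for every user that passes through the extension
    Dim validSuffixes As String() = generateArrayFromFile("D:\FIMConfig\Suffixes.txt")
    Dim knownExceptions As String() = generateArrayFromFile("D:\FIMConfig\Exceptions.txt")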

At the beginning of my AD MA, I declare my dates and logging levels etc., then generate those arrays using the function. These arrays are now static and are good for processing all users without being regenerated.

When I wish to look into the array to validate a valid email suffix for example, I go from this (as in the last post):
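
In sketch form (names illustrative, not my production code):

' Old approach - the exceptions array was rebuilt inside the flow rule,
' once for every user processed
If emailSuffix <> "@oholics.net" Then
    Dim exceptions As String() = generateArrayFromFile("D:\FIMConfig\Exceptions.txt")
    If Array.IndexOf(exceptions, emailAddress) = -1 Then
        Throw New Exception("Unexpected email address: " & emailAddress)
    End If
End If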

To this:
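
Again a sketch, now using the static arrays declared at the top of the MA:

' New approach - no array building per user
If Array.IndexOf(validSuffixes, emailSuffix) = -1 AndAlso
   Array.IndexOf(knownExceptions, emailAddress) = -1 Then
    Throw New Exception("Unexpected email address: " & emailAddress)
End If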

Much cleaner – plus all suffixes can now just reside in a text file.

Note that updates to the text file will only be picked up when the dll is reloaded and the array is regenerated. I believe this happens after 5 minutes of inactivity, and that seems to hold true in testing.

Process To Email The Manager Of A Service Account When Their End Date Is Approaching

A long term goal of mine has been to get “account requestors” to take ownership of their Service Accounts.

Attempts have been made by my predecessors to record an owner of a service account, but it has simply been done as a string attribute of the AD object. Thus, when the person leaves and their account is deleted, the service account becomes orphaned, with a reference to a long forgotten ID.

So, thinking of a way to carry this out… I am already using the email address of the owner of an administrative account to make decisions about whether the administrative account should be enabled or disabled, based on the end date of the owner – discovered by looking up the email address in the MV.

I figured that I could do something similar for those Service Accounts. I’ll be creating service accounts via the portal, with the owner of the account assigned to the manager attribute. So, how can I get the email address of the manager into the MV as something I can look up? I can’t do an advanced flow rule on the FIMMA, and even if I could, Manager is a reference attribute, so I couldn’t do it anyway… I found an article about dereferencing another attribute, which got me going down this path… The solution is simple. Create a new attribute and binding in the portal – “ManagerEmailAddress” – then set up a workflow as follows:

[Screenshot: GetManagerEmailAddress workflow definition]

When the account falls into scope, the manager’s email address is set into that new attribute – in the sync engine, create a direct flow to put that into the MV (I’m using “serialNumber” – for one reason or another that I won’t go into :)).

I have, on the import from AD, some code to set an MV boolean flag – “functionalID” – if the DN of the person object contains the strings found in the Service Account OUs, then functionalID = True. This attribute is pushed into the portal and is used in set definitions.
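
That import code is essentially this (a sketch; the OU string and rule name are illustrative):

Case "functionalID"
    ' Flag accounts that live in one of the Service Account OUs
    Dim dn As String = csentry.DN.ToString()
    mventry("functionalID").BooleanValue = dn.Contains("OU=Service Accounts")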

So, I’m getting there. Now I need something to set another flag in the MV that will go to the portal. This one defines whether the owner of the Service Account is approaching their end date (30 days prior). It is defined on the import from AD and populates the MV attribute “functionalID-owner-expiring”.
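
In sketch form (attribute names illustrative; this mirrors the lookup-by-email approach described above):

Case "functionalID-owner-expiring"
    ' Find the owner via the email address flowed into serialNumber, then flag
    ' the service account when the owner is within 30 days of their end date
    Dim expiring As Boolean = False
    If mventry("serialNumber").IsPresent Then
        For Each owner As MVEntry In Utils.FindMVEntries("email", mventry("serialNumber").Value)
            If owner("employeeEndDate").IsPresent AndAlso
               DateTime.Parse(owner("employeeEndDate").Value) <= DateTime.Now.AddDays(30) Then
                expiring = True
            End If
        Next
    End If
    mventry("functionalID-owner-expiring").BooleanValue = expiring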

Of course, after the initial code definition, I found another of those inevitable exceptions, so I added the generateArrayFromFile function, with a reference (in a txt file) to the email address that should be ignored.

Create attribute and binding in the portal for FunctionalID-owner-expiring

Set up an export in the FIMMA for the new attribute

Create a set: FunctionalID = True and FunctionalID-owner-expiring = True.

Create notification workflow and mail template: notification to [//Target/Manager], then the set transition MPR.
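
For reference, the set filter is a simple XPath query of this shape (portal system names can’t contain hyphens, so the attribute names here are illustrative stand-ins for whatever you bound above):

/Person[functionalID = true and functionalIDOwnerExpiring = true]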

I think I have it, just need to do a little testing to see that it works as expected.

I’m still a long way from the stated goal, as I still need to find “owners” for all of those accounts that have been created in the past.

A Generic Array From File Function To Cope With Inevitable Exceptions

In the last few days, I have had a few more exceptions to cope with in my FIM Config.

  1. Another new mail suffix
  2. A user who is employed by one tenant, and has that tenant’s email address suffix, but who is on secondment to another tenant, which has a different mail suffix. The user’s attributes have been changed in the HR system so that they gain access to the stuff in the other tenant, which is controlled by automatic groups based on attribute data!

So, I’d been thinking for a while about having a method to add exceptions without having to add them to the code directly, thus forcing a rebuild followed by full syncs. I found a nice function to read a text file into an array; this is added to the top of the dll after the lines:

Public Class MAExtensionObject_YourMA
Implements IMASynchronization
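
The function was essentially of this shape (a sketch from memory, rather than the exact code I found):

Public Function generateArrayFromFile(ByVal filePath As String) As String()
    ' Read the file and return the non-empty, trimmed lines as an array
    Dim lines As New List(Of String)
    For Each line As String In System.IO.File.ReadAllLines(filePath)
        If line.Trim().Length > 0 Then
            lines.Add(line.Trim())
        End If
    Next
    Return lines.ToArray()
End Function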

So, to put this to use – take my previous post regarding generating and validating email addresses: https://blog.oholics.net/defining-a-unique-email-address-and-validating-mail-suffix/ – at line 97 I ask “Does the suffix match?”. This chunk is now as follows:
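
In sketch form (variable names illustrative):

' Does the suffix match?
If emailSuffix = expectedSuffix Then
    ' All good - carry on
Else
    ' Not the expected value - is it the known exception from the text file?
    If Array.IndexOf(generateArrayFromFile("D:\FIMConfig\Exceptions.txt"), emailAddress) = -1 Then
        Throw New Exception("Unexpected email address: " & emailAddress)
    End If
End If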

So, the referenced file simply has the email address of the user that I don’t want to be alerted about. If the email address does not match the expected value, look in the array generated from the text file; if it is not in there either, raise an error to get this fixed or investigated.

Regarding the valid mail suffixes – I posted about this already: https://blog.oholics.net/emailaddresspresent-flag-setting-and-checking-email-suffix-validity/.

I have a hardcoded list of those that are already in use in the dll; if the suffix is not found in that array, the code does a lookup of the array generated from the “suffixes” text file, and if it is not in there either, it raises an error:
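
Again as a sketch, with illustrative names:

' hardcodedSuffixes is the array already defined in the dll
If Array.IndexOf(hardcodedSuffixes, emailSuffix) = -1 Then
    ' Not a known suffix - check the array built from the "suffixes" text file
    If Array.IndexOf(generateArrayFromFile("D:\FIMConfig\Suffixes.txt"), emailSuffix) = -1 Then
        Throw New Exception("Unknown mail suffix: " & emailSuffix)
    End If
End If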

Console App for enumerating userAccountControl integer values

When trying something new out with FIM development, I often see how to do it in a console app beforehand. Then, once I have the process/method worked out, I translate it into FIM code. Usually this is a very clean process and is quicker than editing the FIM code directly, then doing syncs on individual accounts.

When I was initially looking at exporting userAccountControl values to AD, I used Jorge’s code snippet: https://jorgequestforknowledge.wordpress.com/2010/07/29/managing-the-useraccountcontrol-attribute-in-ad-by-fim/ as the basis for my code. Initially, I had some difficulty understanding the differences between the “Ors and Ands”, so I used a console app to understand what integer values the different combinations made. The list of flags can be found here: https://msdn.microsoft.com/en-us/library/windows/desktop/aa772300(v=vs.85).aspx

My userAccountControl Export code became a bit of a monster, due to the number of rules needed to match the existing configuration.

The console app is super simple – fiddle with the different flags and operators to see the different results:
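
A sketch of the sort of thing I mean (the enum may surface as ADS_USER_FLAG or ADS_USER_FLAG_ENUM, depending on the interop wrapper):

Imports ActiveDs

Module Module1
    Sub Main()
        ' Build a userAccountControl value up with Or
        Dim uac As Integer = CInt(ADS_USER_FLAG.ADS_UF_NORMAL_ACCOUNT)     ' 512
        uac = uac Or CInt(ADS_USER_FLAG.ADS_UF_DONT_EXPIRE_PASSWD)         ' 512 Or 65536 = 66048
        Console.WriteLine(uac)
        ' Disable the account by setting the ACCOUNTDISABLE bit
        uac = uac Or CInt(ADS_USER_FLAG.ADS_UF_ACCOUNTDISABLE)             ' 66050
        Console.WriteLine(uac)
        ' Re-enable it by clearing the bit with And Not
        uac = uac And Not CInt(ADS_USER_FLAG.ADS_UF_ACCOUNTDISABLE)        ' back to 66048
        Console.WriteLine(uac)
        Console.ReadLine()
    End Sub
End Module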

Note that you need to add a reference to the “Active DS Type Library”, else you will get squiggles under “ADS_USER_FLAG”:

[Screenshot: adding the Active DS Type Library reference]

emailAddressPresent Flag Setting and Checking Email Suffix Validity

In my organisation, not all users have a mailbox, while others are just mail-enabled.

In order to define XPATH filters for those people who should be allowed into a distribution list, managed by the portal, I needed to set a flag.

This boolean flag defines if they could be mailed and therefore should belong to a distribution list.

A nice add-on was that it allowed me to check the mail suffix of the user as part of the import process. Only those suffixes defined in the array – the ones the Exchange organisation is authoritative for – are allowed. This check exists because sometimes people just add a new suffix, or make a typo; this bit of code highlights those events.
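
The flow rule looks roughly like this (a sketch; attribute and variable names are illustrative):

Case "emailAddressPresent"
    If csentry("mail").IsPresent Then
        mventry("emailAddressPresent").BooleanValue = True
        ' Validate the suffix while we are here - only authoritative suffixes allowed
        Dim mailValue As String = csentry("mail").Value
        Dim suffix As String = mailValue.Substring(mailValue.IndexOf("@")).ToLower()
        If Array.IndexOf(validSuffixes, suffix) = -1 Then
            Throw New Exception("Unknown mail suffix: " & mailValue)
        End If
    Else
        mventry("emailAddressPresent").BooleanValue = False
    End If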

Looking for (and finding) odd ID’s

As I have said previously, the HR data that feeds FIM is out of my direct control and has had some data quality issues.

As a result, I have ended up putting some consistency checking into my code. I’ll present a few examples from my MVExtension here; what I tend to be looking for is cases where the user has already been provisioned, but the reference has since been deleted in HR and no-one has told me, so that I can tidy up the account in AD and FIM.
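
One of them, in sketch form (MA and attribute names illustrative): flag MV objects where only the FIM MA connector remains, i.e. the HR reference has gone:

' In the MV extension - spot objects that only the FIM MA is still connected to
If mventry.ConnectedMAs("HR MA").Connectors.Count = 0 AndAlso
   mventry.ConnectedMAs("FIM MA").Connectors.Count = 1 Then
    Throw New UnexpectedDataException("HR reference lost for: " & mventry("accountName").Value)
End If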

The IDs that showed up after putting in the check for FIM-only references were caused by disconnecting a table that validates historical end dates. I was assured that I would not need it any more, because end dates would no longer be randomly set to a period in the past (which would mean I would never receive that update)… However, this did not pan out, so I re-attached the table, but did not reset the MV object deletion rule afterwards – so I ended up with ID fragments in the portal, referenced only by ObjectID.

Again I used the Lithnet PowerShell module to clear these up. There were around 40 to do, so I just got the ObjectIDs from the job XML, put them in a text file and ran this:
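
A sketch, assuming the Lithnet RMA module and a text file holding one ObjectID per line:

Import-Module LithnetRMA
# Remove each orphaned portal object by its ObjectID
Get-Content .\ids.txt | ForEach-Object {
    Remove-Resource -ID $_
}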


Cleaning and validating input data

The HR data source that I currently receive person data from has historically had data quality issues. These are much better than they were in the past, but still cause a few problems.

When I attended FIM training at OCG, I raised the issue of data cleanliness and was told in simple terms – make sure the input data is clean! If only life was so simple…..

Back to reality, I have had to add code to my Advanced Flows to deal with, clean up and validate the input data.

A nice example follows – importing Surname from HR – dealing with:

  • Just plain bad data (“null” as a string value)
  • Validation (characters that should not be present – via regex replace)
  • Clean up (removing spaces from around hyphens in double-barrelled names) – there is also a bit of trimming to remove any spaces before or after the string value
  • Surname missing!
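
A sketch of that import flow rule (the regex and attribute names are illustrative, and the AccountName guard reflects the note at the end of this post):

Case "sn"
    If csentry("Surname").IsPresent AndAlso csentry("Surname").Value.Trim().ToLower() <> "null" Then
        Dim sn As String = csentry("Surname").Value.Trim()
        ' Validation - strip characters that should not be present
        sn = System.Text.RegularExpressions.Regex.Replace(sn, "[^a-zA-Z\-' ]", "")
        ' Clean up - remove spaces from around hyphens in double-barrelled names
        sn = sn.Replace(" - ", "-").Replace(" -", "-").Replace("- ", "-")
        mventry("sn").Value = sn.Trim()
    Else
        ' Surname missing or plain bad data - identify the culprit safely
        If csentry("AccountName").IsPresent Then
            Throw New Exception("Surname missing or invalid for: " & csentry("AccountName").Value)
        Else
            Throw New Exception("Surname and AccountName both missing from HR record")
        End If
    End If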

Things like this remind me of why “Codeless Provisioning” was something I fought to get working (for too long), but ultimately had to abandon in favour of using code for almost everything. Doing so has been a real panacea for all of the rules and other funnies that I have had to accommodate.

Note: I made a little edit – I was not checking for the presence of AccountName before raising errors; should that attribute have been missing (highly unlikely, but not unknown to occur), that would have raised an error in itself. The edited code is a little more robust!