A long lull, Certificate Authority distrust issues and a platform migration.

The last year or so has been interesting, moving from “normal” work to consulting – a great change! The drawback is having less time to commit to getting stuff on here, as well as greater concerns over intellectual property rights and confidentiality.

The various blogs/ sub-domains that I run, covered under the oholics.net moniker, all had free certificates issued by the (now effectively defunct) StartCom CA. In October 2016, Mozilla started to distrust them – backstory here: https://blog.mozilla.org/security/2016/10/24/distrusting-new-wosign-and-startcom-certificates/.

Initially, it looked like only the low-traffic sites were affected (they had the newest certificates), so I took the lazy approach of just leaving them as they were. I figured that StartCom would get their new CA in place and trusted reasonably quickly – not so! I recently noticed that Mozilla and Chrome had started distrusting even those certificates generated prior to October 2016, so now all of the blogs were generating security errors, which was not ideal. I looked at moving to Let’s Encrypt, but it would not have been a simple migration – more than a five minute job. Then, a few weeks ago, I noted that StartCom had their new CA in place. Great! But when I asked their support people about global trust, the answer was “not yet”, with no idea of when that would be in place.

I was still running the hosting platform from home, which was less than ideal given the lack of a fixed IP and intermittent issues with Dynamic DNS not updating, plus the running costs, fire risk, etc.

So, I recently made the decision to migrate the platform to a cloud provider and to get the certificate issues resolved properly, finally moving to Let’s Encrypt. On a fresh server, it was remarkably easy to set up, just requiring a little DNS flip-flopping to get things in order.

Now that it is all in place and tidy, I have a bunch of stuff to add to the blog. However, FIM is going EOL, so the name of this blog is going to become defunct too! Managing and maintaining the other blogs as separate entities is a bit of a PITA as well. Therefore, I plan to (eventually) migrate content from all 4 blogs into one new core blog site – name TBD. I may add some stuff here in the meantime, just to get it out of my brain and onto paper, so to speak… or I may wait until I have done the migration – it depends on how long that might take.

Until then, I hope the previous content still provides a good repository for FIM “stuff” for others as well as myself 🙂

MIM PAM Automated Installation Script

I have been doing a fair bit of work with MIM PAM recently and have found a few issues. This has meant that I have re-installed the application (post SharePoint Foundation) in my lab a few times.

I was getting a little bored of clicking through the options, ticking boxes and re-entering the URLs etc. Then I spotted a batch file on the CD/DVD, in the Service and Portal folder, called “Service and Portal_Reference_For_PAM_Install.bat“.

A quick look showed that this would automate MIM PAM installation. However, there was no documentation to go with it – notably to clarify which accounts were referred to by “ADMIN_USER = Administrator” and “SYNC_ADMIN = FIMSyncAdministrator”. A quick google revealed no relevant results… so, take a snapshot and start trying accounts. Based on the MSI command run at the end of the script, ADMIN_USER relates to SERVICE_ACCOUNT_NAME – in my case, that is the MIMService account.

Thus, my complete working script is as below. Note – my PAM domain is a sub-domain of oholics.net called “priv“, and my MIM PAM server is called “mimpam“.

Note that the following lines will need to be amended – 7, 10, 17, 19, 20, 21, 22, 35, 54, 55, 62, 63, 70, 71.

Also, note that the script assumes you have a folder C:\Temp to write the log to – if you don’t, you’ll get EXIT CODE: 1622.

Nice bit of automation – run the file, then make coffee or whatever – certainly something fulfilling 🙂

A little update – the script in its current form is not perfect. Note that the MSI switches include MAIL_SERVER=”%MACHINENAME%” and SQLSERVER_SERVER=”%MACHINENAME%”, meaning that both attributes will be set to the local machine name. Set some more variables and change the MSI arguments to suit.
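As an illustration only, the end of the script boils down to an msiexec call along these lines – sketched here in PowerShell, using the property names mentioned in this post; the MSI path, server and account names are placeholders, and a real install needs more properties than are shown:

# Illustration only – SERVICE_ACCOUNT_NAME, MAIL_SERVER and SQLSERVER_SERVER are the
# properties referenced in this post; path/server/account names are placeholders.
$msi     = 'E:\Service and Portal\Service and Portal.msi'
$log     = 'C:\Temp\MIMServiceInstall.log'               # C:\Temp must exist (see above)
$msiArgs = "/i `"$msi`" /quiet /l*v `"$log`" " +
           'SERVICE_ACCOUNT_NAME=MIMService ' +
           'MAIL_SERVER=mail.priv.oholics.net ' +
           'SQLSERVER_SERVER=sql.priv.oholics.net'

Start-Process -FilePath msiexec.exe -ArgumentList $msiArgs -Wait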

Additionally, I have been testing the RESTful interface over the last few days and have seen some oddities – whether these are related to using this script to install is under investigation…..

New-PAMDomainConfiguration: There was no endpoint listening at http://localhost:5725/ResourceManagementService/MEX

Still suffering pain trying to get the MIM PAM lab setup on my underpowered Hyper-V System.

I was having a lot of issues getting the New-PAMDomainConfiguration cmdlet to run successfully, so after lots of debugging I gave up, trashed the current lab setup and started again, following the lab guide to the letter this time! Well, almost… I only have two VMs – these are the DCs for each domain, with everything crammed onto them.

A quick error and fix – as per the title:

[Screenshot: New-PAMDomainConfiguration1]

The issue was that the SQL service had not started, so the Forefront Identity Manager Service had not started either. The fix: start those pesky services and try again. I believe that the services are failing to start simply because of limited resources (only 2 GB RAM).
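For reference, the manual fix in PowerShell – assuming the default service names MSSQLSERVER and FIMService:

# Start SQL first, then the FIM/MIM service – names assume a default SQL instance
Start-Service -Name 'MSSQLSERVER'
Start-Service -Name 'FIMService'

# Confirm both are running before retrying New-PAMDomainConfiguration
Get-Service -Name 'MSSQLSERVER','FIMService' | Select-Object Name, Status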

Now that was simple, but I’m still seeing the problem that I was seeing before: when running New-PAMDomainConfiguration after starting the services, I get the following unhelpful error:

New-PAMDomainConfiguration: The Netdom trust command returned the following error:

[Screenshot: New-PAMDomainConfiguration2]

Ah, the “Blank Error” error – digging through the $error variable does not reveal anything useful. If I find a solution, I’ll be back…

I posted a question on the TechNet FIM forum:

https://social.technet.microsoft.com/Forums/en-US/be2433b4-daa6-493c-8922-684df506337d/newpamdomainconfiguration-the-netdom-trust-command-returned-the-following-error?forum=ilm2

The workaround provided by Jeff seems to have worked – well, there were no errors executing the netdom commands. I have a few more bits to do to complete the lab and verify that all is working as expected.

Delegating Group Management – Using the Lithnet FIM PowerShell Module

Within my AD structure, group management is delegated within certain OUs; I now need to replicate that functionality in the FIM portal.

There is no real way of identifying which groups should be managed by whom, except by the OU within which the group currently resides.

So, to start off with I need to get the parent OU of the group into the portal:

Import the OU into the MV:

Set up an export flow for adOU into the portal.
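The import flow rule itself lives in the AD MA rules extension, so it isn’t shown here; purely as a sketch of the logic (in PowerShell, with an example DN), the adOU value is just the group’s DN with the leading RDN stripped off:

# Sketch only – the real logic sits in the AD MA import flow rule
$groupDn = 'CN=MyGroup,OU=Groups,OU=DeptA,DC=oholics,DC=net'   # example group DN

# Everything after the first comma is the parent OU – this becomes adOU
# (DNs containing escaped commas would need more careful handling)
$adOU = $groupDn.Substring($groupDn.IndexOf(',') + 1)
$adOU    # OU=Groups,OU=DeptA,DC=oholics,DC=net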

Then, by using the Lithnet PowerShell Module, we can create all the sets and MPRs required; below is a sample for creating one delegated “collection”. In production, my XML file is much bigger – delegating group management to around ten different groups.

Note that you first need to create references to all users who might be given the rights to manage groups. This includes the FimServiceAdmin and FimServiceAccount – referenced by their ObjectID – while the others are referenced by their AccountName. All members referenced in this section are added to the __Set:GroupValidationBypassSet. This set is referenced in the non-administrators set as “not in this set”, which bypasses the group validation workflow:

[Screenshot: AllNonAdministratorsSet]

Create a set of groups to be managed – the filter being the OU that the groups belong to & MembershipLocked=False

Create a set of administrators for this delegation – adding the explicit members

Then create the two MPRs to allow the members of the administrative set to manage those groups – the first MPR allows modification (Read, Add and Remove) of the ExplicitMember attribute, while the second allows creation and deletion.

Use Import-RMConfig -File <PathToXML> -Preview -Verbose to validate your XML and see what it would do. Drop “-Preview” to make the change.
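For completeness, the preview-then-apply pattern looks like this (the file name is just a placeholder):

# Validate the XML and show what would be created/changed, without touching the service
Import-RMConfig -File .\GroupDelegation.xml -Preview -Verbose

# Happy with the preview? Run it again without -Preview to apply the changes
Import-RMConfig -File .\GroupDelegation.xml -Verbose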

An Alternative To Using The Generic Array From File Function

While looking to improve on my method of getting exceptions or a long list of mail suffixes into an array, to be checked during code execution, I came across this: https://msdn.microsoft.com/en-us/library/windows/desktop/ms696048(v=vs.85).aspx

This seemed to me to be a really nice solution: define all exceptions and suffixes within one file, read it in on code execution, then check for existence (or whatever) in the code.

So, given the following xml file:

Add the System.Xml Import and declare the variables, so they are global:

Add the code to read the xml file into the Initialize Sub:

Then, when you wish to look for those values within those variables – just like in the last post:
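The snippets above live in the VB.NET MA extension and aren’t reproduced here; as a rough sketch of the same pattern in PowerShell (the file layout, element names and path are illustrative assumptions, not my actual file):

# Example Exceptions.xml layout (illustrative only):
#   <Exceptions>
#     <MailExceptions>
#       <Exception>someone@oldsuffix.com</Exception>
#     </MailExceptions>
#     <ValidSuffixes>
#       <Suffix>@oholics.net</Suffix>
#       <Suffix>@priv.oholics.net</Suffix>
#     </ValidSuffixes>
#   </Exceptions>

# Read the file once, at start-up, into "global" variables
[xml]$exceptionsDoc = Get-Content 'C:\FIM\Exceptions.xml' -Raw
$mailExceptions = @($exceptionsDoc.Exceptions.MailExceptions.Exception)
$validSuffixes  = @($exceptionsDoc.Exceptions.ValidSuffixes.Suffix)

# Later, when processing a user, just check for existence in the loaded values
$suffix = '@oholics.net'
if ($validSuffixes -contains $suffix) { 'suffix is valid' } else { 'raise an error' }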

An Update on my Generic Array From File post

In this post: https://blog.oholics.net/a-generic-array-from-file-function-to-cope-with-inevitable-exceptions/, I documented a method of generating an array of values from a text file.

While I was happy that this method worked, I was not entirely happy with the fact that I still had some hard-coded values in the code. However, the way the function operated meant that if I took my collection of mail suffixes (20+) and added them all to the text file, the array would be built for each and every user that passed through the dll – not too efficient!

So, I was looking for something a little more elegant. I was happy for the array to simply be defined when the dll was loaded.

Here is my solution:

At the beginning of my AD MA, I declare my dates and logging levels etc, then generate those arrays using the function. These arrays are now static and are good for processing all users without being regenerated.

When I wish to look into the array to validate a valid email suffix for example, I go from this (as in the last post):

To this:

Much cleaner – plus all suffixes can now just reside in a text file.
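The before/after VB.NET isn’t reproduced here, but the “after” pattern boils down to this (sketched in PowerShell, with a placeholder path):

# Built once, when the extension is loaded – not rebuilt per user
$validSuffixes = Get-Content 'C:\FIM\MailSuffixes.txt'

# The per-user check is then just a lookup against the static array
$suffix = '@oholics.net'
if ($validSuffixes -notcontains $suffix) {
    throw "Unexpected mail suffix: $suffix"
}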

Note that updates to the text file will only be picked up when the dll is reloaded and the array is regenerated. I believe this happens after 5 minutes of inactivity, which seems to hold true from testing.

SharePoint Foundation 2013 – Setup is Unable to Proceed due to following error, requires .Net Framework 4.5

I have been setting up a MIM PAM lab at home, following the guide here: https://technet.microsoft.com/en-us/library/mt488766.aspx

I’m installing onto Server 2012 R2, with SharePoint Foundation 2013 SP1

After installing the SharePoint Foundation 2013 prerequisites, using the Service Pack 1 release, everything installs just fine. Then I run setup.exe and very quickly I’m prompted with the message “Setup is Unable to Proceed due to following error, requires .Net Framework 4.5“. The prerequisite installation log (in C:\Users\<MyUserAccount>\AppData\Local\Temp\) shows:

2016-01-29 22:39:36 – Check whether the following prerequisite is installed:
2016-01-29 22:39:36 – Microsoft .NET Framework 4.5
2016-01-29 22:39:36 – Reading the following DWORD value/name…
2016-01-29 22:39:36 – Install
2016-01-29 22:39:36 – from the following registry location…
2016-01-29 22:39:36 – SOFTWARE\Microsoft\Net Framework Setup\NDP\V4\full
2016-01-29 22:39:36 – The value is (1)
2016-01-29 22:39:36 – Reading the following string value/name…
2016-01-29 22:39:36 – Version
2016-01-29 22:39:36 – from the following registry location…
2016-01-29 22:39:36 – SOFTWARE\Microsoft\Net Framework Setup\NDP\V4\full
2016-01-29 22:39:36 – The value is…
2016-01-29 22:39:36 – 4.6.01055
2016-01-29 22:39:36 – A post release .NET 4.5 is installed

However, the SharePoint Foundation installation log shows:

2016/01/30 19:27:00:389::[3788] Catalyst .Net version check failed. Setup requires .Net Framework version 4.5.50501 to install this product

4.6.01055 is installed but the installer wants the older version – 4.5.50501.

So, after a bit of googling, I find this: https://social.technet.microsoft.com/Forums/sharepoint/en-US/bbed58e1-4a80-4dde-91fd-c6fc95bf85ac/sharepoint-2013-installation-with-net-framework-4550501-and-4550709?forum=sharepointadmin

In that post, Rick just sets all of the registry entries. However, the one that seems to be the ‘looked up’ value (at least based on my testing today), contrary to the prerequisite log, is HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Client.

Detailed method:

Right click the Client key, choose Permissions…

[Screenshot: DefaultPerms]

Note that the Administrators group has Read access by default.

Click Advanced, note at the top of the window, the owner is shown as TrustedInstaller:

[Screenshot: DefaultAdvancedPermissions]

Click the Change link beside the owner label.

Change the scope (“From this location”), to the local machine, choose <localmachine>\administrators group:

[Screenshot: SetOwner]

Click OK, then OK again until we are back at the initial Permissions window, tick Full Control for the Administrators group:

[Screenshot: AmendedPerms]

Click OK, then go and set the Version value to 4.5.50501.
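If you prefer, once the ownership and Full Control changes above are in place, the value flip itself can be done from an elevated PowerShell prompt rather than regedit:

# Requires the Full Control permission granted above
$key = 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Client'
Set-ItemProperty -Path $key -Name 'Version' -Value '4.5.50501'

# After SharePoint Foundation is installed, the same command with 4.6.01055 puts it back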

Install SharePoint Foundation 2013.

Once complete, put things back to how they were before – of course, this is optional… it just depends on how lazy you feel and whether you want to be caught out in the future by the fact that you changed this, then can’t figure out why something else doesn’t work. Making the change back is so straightforward and quick that I believe it is worth doing 🙂

After installation, flip the version back to 4.6.01055. Then right click the Client key, remove Full Control from the Administrators group and apply. Then click Advanced and change the owner again – as before, but using NT SERVICE\TrustedInstaller as the owner:

[Screenshot: ResetOwner]

Click OK, until you are back to Square 1. Easy!

Process To Email The Manager Of A Service Account When Their End Date Is Approaching

A long term goal of mine has been to get “account requestors” to take ownership of their Service Accounts.

Attempts have been made by my predecessors to record an owner of a service account, but it has simply been done as a string attribute on the AD object. Thus, when the person leaves and their account is deleted, the service account becomes orphaned, with a reference to a long forgotten ID.

So, thinking of a way to carry this out… I am already using the email address of the owner of an administrative account to make decisions about whether the administrative account should be enabled or disabled – based on the end date of the owner, discovered by looking up the email address in the MV.

I figured that I could do something similar for those Service Accounts. I’ll be creating service accounts via the portal, and the owner of the account will be assigned to the manager attribute. So, how can I get the email address of the manager into the MV as something that I can look up? I can’t do an advanced flow rule on the FIMMA, and even if I could, Manager is a reference attribute, so I can’t do it anyway… I found an article about dereferencing another attribute, which got me going down this path. The solution is simple: create a new attribute and binding in the portal – “ManagerEmailAddress” – then set up a workflow as follows:

[Screenshot: GetManagerEmailAddressWF]

When the account falls into scope, the manager’s email address is set into that new attribute – in the sync engine, create a direct flow to put that into the MV (I’m using “serialNumber” – for one reason or another that I won’t go into :)).

I have, on the import from AD, some code to set an MV boolean flag – “functionalID” – if the DN of the person object contains one of the strings found in the Service Account OUs, then functionalID = True. This attribute is pushed into the portal and is used in set definitions.
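As a sketch of that logic (PowerShell for illustration – the real code is a VB.NET flow rule on the AD import, and the OU names are placeholders):

# Sketch only – the real code is a flow rule in the AD MA rules extension
$dn = 'CN=svc-web01,OU=Service Accounts,DC=oholics,DC=net'           # example person DN
$serviceAccountOUs = @('OU=Service Accounts', 'OU=Functional IDs')   # placeholder OU strings

$functionalID = $false
foreach ($ou in $serviceAccountOUs) {
    if ($dn -like "*$ou*") { $functionalID = $true }
}
$functionalID   # True – this is the value flowed to the MV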

So, I’m getting there. Now I need something to set another flag in the MV that will go to the portal. This one defines whether the owner of the Service Account is approaching their end date (30 days prior). It is defined on the import from AD and populates the MV attribute “functionalID-owner-expiring”.

Of course, after the initial code definition, I found another of those inevitable exceptions, so added the generateArrayFromFile function, with a reference (in a txt file) to the email address that should be ignored.
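Again, purely as a sketch of the logic (the real flow rule is VB.NET on the AD import; the dates, file path and variable names here are illustrative):

# Sketch only – the real logic lives in the AD MA import flow rule
$ownerEndDate  = [datetime]'2017-08-15'                             # end date looked up via the MV
$ownerEmail    = 'owner@oholics.net'
$exceptionList = Get-Content 'C:\FIM\OwnerExpiryExceptions.txt'     # addresses to ignore

$functionalIdOwnerExpiring = $false
if (($exceptionList -notcontains $ownerEmail) -and
    ($ownerEndDate -le (Get-Date).AddDays(30))) {
    $functionalIdOwnerExpiring = $true    # flows out as FunctionalID-owner-expiring
}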

Create an attribute and binding in the portal for FunctionalID-owner-expiring.

Set up an export in the FIMMA for the new attribute.

Create a set: FunctionalID = True and FunctionalID-owner-expiring = True.

Create notification workflow and mail template: notification to [//Target/Manager], then the set transition MPR.

I think I have it, just need to do a little testing to see that it works as expected.

I’m still a long way from the stated goal, as I still need to find “owners” for all of those accounts that have been created in the past.

A Generic Array From File Function To Cope With Inevitable Exceptions

In the last few days, I have had a few more exceptions to cope with in my FIM Config.

  1. Another new mail suffix
  2. A user who is employed by one tenant and has that tenant’s email address suffix, but who is on secondment to another tenant, which has a different mail suffix. The user’s attributes have been changed in the HR system so that they gain access to the stuff in the other tenant, which is controlled by automatic groups based on attribute data!

So, I’d been thinking for a while about having a method to add exceptions without having to add them to the code directly, thus forcing a rebuild followed by full syncs. I found a nice function to read a text file into an array; this is added to the top of the dll after the lines:

Public Class MAExtensionObject_YourMA
Implements IMASynchronization

So, to put this to use – take my previous post regarding generating and validating email addresses: https://blog.oholics.net/defining-a-unique-email-address-and-validating-mail-suffix/. At line 97, I ask “Does the suffix match?” This chunk is now as follows:

So, the referenced file simply has the email address of the user that I don’t want to be alerted about. If the email address does not match the expected value, look in the array generated from the text file; if it is not in there either, raise an error to get this fixed or investigated.

Regarding the valid mail suffixes – I posted about this already: https://blog.oholics.net/emailaddresspresent-flag-setting-and-checking-email-suffix-validity/.

I have a hardcoded list of those that are already in use in the dll; if the suffix is not found in that array, it does a lookup of the array generated from the “suffixes” text file, and if it is not in there, it raises an error:
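The VB.NET chunk isn’t reproduced here; the two-stage check amounts to something like this (PowerShell sketch, with placeholder suffixes and path):

# Stage 1: the suffixes compiled into the dll (placeholders here)
$knownSuffixes = @('@oholics.net', '@priv.oholics.net')

# Stage 2: additional suffixes maintained in a text file, read into an array at load time
$fileSuffixes = Get-Content 'C:\FIM\ExtraSuffixes.txt'

$suffix = '@newtenant.com'
if (($knownSuffixes -notcontains $suffix) -and ($fileSuffixes -notcontains $suffix)) {
    throw "Unknown mail suffix: $suffix"   # raise an error so it gets fixed or investigated
}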

Console App for enumerating userAccountControl integer values

When trying something new out with FIM development, I often see how to do it in a console app beforehand. Then, once I have the process/method worked out, I translate it into FIM code. Usually this is a very clean process and is quicker than editing the FIM code directly, then doing syncs on individual accounts.

When I was initially looking at exporting userAccountControl values to AD, I used Jorge’s code snippet: https://jorgequestforknowledge.wordpress.com/2010/07/29/managing-the-useraccountcontrol-attribute-in-ad-by-fim/ as the basis for my code. Initially, I had some difficulty understanding the differences between the “Or’s and And’s”, so I used a console app to understand what integer values the different combinations produced. The list of flags can be found here: https://msdn.microsoft.com/en-us/library/windows/desktop/aa772300(v=vs.85).aspx

My userAccountControl Export code became a bit of a monster, due to the number of rules needed to match the existing configuration.

The console app is super simple – fiddle with the different flags and operators to see the different results:

Note that you need to add the reference to “Active DS Type Library”, else you will get squiggles under “ADS_USER_FLAG”:

[Screenshot: AddDSRef]
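The console app itself isn’t reproduced here, but the same experiment can be run straight from PowerShell, without the type library, using the documented flag values (a few common ones shown – see the MSDN link above for the full list):

# userAccountControl flag values (hex) from the MSDN userAccountControl documentation
$ACCOUNTDISABLE       = 0x0002
$PASSWD_NOTREQD       = 0x0020
$NORMAL_ACCOUNT       = 0x0200
$DONT_EXPIRE_PASSWORD = 0x10000

# "Or" the flags together to build the integer you would export to AD
$uac = $NORMAL_ACCOUNT -bor $DONT_EXPIRE_PASSWORD
$uac                                      # 66048

# "And" a value against a flag to test whether that flag is set
($uac -band $ACCOUNTDISABLE) -ne 0        # False – the account is enabled
($uac -band $DONT_EXPIRE_PASSWORD) -ne 0  # True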