Now that my notifications are ready to go and there are a few real recipients (testers), I was considering what would happen when the FIMMA jobs were run – lots of emails!
The FIMMA had not been run properly for a few days – this is still a system being built – so there were a lot of attribute changes to export. Also, I’d had to put an MA back in temporarily to fix up some bad end dates – I needed to lower its precedence – so quite a few end dates had changed in the intervening period. That MA will be removed again shortly…
So, I needed to quickly disable all of the MPRs that would trigger the emails. Again, the Lithnet RMA was a great and quick solution!
Comment out or uncomment the true/false lines as appropriate. I’ll need to re-enable all of these again once the FIMMA has run through.
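In case it helps, a minimal sketch of the sort of thing I mean is below – the MPR display names and the FIM Service address are placeholders, not my real ones:

```powershell
# Minimal sketch - the MPR display names and FIM Service address are placeholders
Import-Module LithnetRMA
Set-ResourceManagementClient -BaseAddress http://fimservice:5725

$notificationMPRs = @(
    'Notification: New User in Department',
    'Notification: User Disabled in Department',
    'Notification: User Deleted in Department',
    'Notification: User Moved out of Department'
)

foreach ($name in $notificationMPRs) {
    $mpr = Get-Resource -ObjectType ManagementPolicyRule -AttributeName DisplayName -AttributeValue $name
    $mpr.Disabled = $true    # swap to $false (or comment/uncomment lines) when re-enabling after the FIMMA run
    Save-Resource $mpr
}
```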
Here is the script used to migrate the clients from the old servers to the new domain-based DFS namespace. Note that from_srv and to_srv need to be amended, and the case statement needs to be constructed based on the content of the Drive Mapping script output.
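As a rough illustration of the shape the re-mapping logic takes (the server names, DFS namespace root and share mappings below are placeholders rather than the real values):

```powershell
# Illustrative structure only - $from_srv, $to_srv and the share mappings are placeholders
$from_srv = 'OLDFS01'                      # old file server
$to_srv   = '\\corp.example.com\Files'     # new domain-based DFS namespace root

$net = New-Object -ComObject WScript.Network
Get-WmiObject Win32_MappedLogicalDisk | ForEach-Object {
    if ($_.ProviderName -like "\\$from_srv\*") {
        # The case statement - built from the Drive Mapping script output: old UNC path -> new DFS folder
        $newPath = switch ($_.ProviderName) {
            "\\$from_srv\Departments" { "$to_srv\Departments" }
            "\\$from_srv\Projects"    { "$to_srv\Projects" }
            default                   { $null }
        }
        if ($newPath) {
            $net.RemoveNetworkDrive($_.DeviceID, $true, $true)   # drop the old mapping
            $net.MapNetworkDrive($_.DeviceID, $newPath, $true)   # re-map the same letter to the DFS path
        }
    }
}
```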
A long time before planning a file server migration, I thought that it would be interesting to see which drives were mapped by our users.
We do not use a login script or GPO Preferences to do this and there is a long history of “Stuff” on the old servers.
Initially, it was done to help the helpdesk staff re-map drives on new machines where the user didn’t know where the target was – it was just their J drive….
Coming back to the file server migration, the historical data in this file was invaluable.
By comparing the drive mapping data with the shares defined on the target server, a list of share names and their long path was produced. This list is then used to construct the case statement within the server migration/ re-mapping script.
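For illustration, the share list (with local paths) can be pulled from the server in question with something like the following – the server name and output file are placeholders:

```powershell
# Sketch: list the disk shares on the file server with their local paths
Get-WmiObject Win32_Share -ComputerName OLDFS01 |
    Where-Object { $_.Type -eq 0 } |        # Type 0 = disk drive shares only
    Select-Object Name, Path |
    Export-Csv .\TargetShares.csv -NoTypeInformation
```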
I’ll publish the drive-mapping script here now, as it was easy to sanitise. I’ll provide the re-mapping and rollback scripts separately.
Note that this script must be run as a Logon script in the user part of a GPO. Additionally, the target folder is a very open hidden share.
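As a much-reduced sketch of the idea (the hidden share path is a placeholder):

```powershell
# Sketch of a drive-mapping inventory logon script - the hidden share path is a placeholder
$outFile = "\\FS01\DriveMaps`$\$($env:USERNAME)_$($env:COMPUTERNAME).csv"

Get-WmiObject Win32_MappedLogicalDisk |
    Select-Object @{n='Date';e={Get-Date -Format s}},
                  @{n='User';e={$env:USERNAME}},
                  @{n='Computer';e={$env:COMPUTERNAME}},
                  @{n='Drive';e={$_.DeviceID}},
                  @{n='Target';e={$_.ProviderName}} |
    Export-Csv -Path $outFile -NoTypeInformation
```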
This is another of my old, but nice scripts. There was some discussion at the time about how different Exchange team people, who work at different geographical sites, were dealing with mailbox delegation. There was no consistency.
Note that this script runs repeatedly (up to 60 times – 30 minutes) until it sees that the calendar permissions are set correctly. This was introduced to allow permissions to be set when our Exchange infrastructure had been failed over to our other main site – a rare occurrence. The replication interval between those sites is set to 15 minutes. Hence, it should have completed successfully within 30 minutes; if it hasn’t, something more fundamental has gone wrong!
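The retry pattern boils down to something like the sketch below – the mailbox, delegate and access rights are placeholders, and the Exchange Management Shell folder-permission cmdlets are used here purely to illustrate the loop:

```powershell
# Sketch of the retry pattern only - mailbox, delegate and rights are placeholders
$mailbox  = 'jbloggs'
$delegate = 'asmith'
$rights   = 'Reviewer'

Add-MailboxFolderPermission -Identity "${mailbox}:\Calendar" -User $delegate -AccessRights $rights -ErrorAction SilentlyContinue

for ($i = 1; $i -le 60; $i++) {                      # 60 attempts x 30 seconds = 30 minutes
    $perm = Get-MailboxFolderPermission -Identity "${mailbox}:\Calendar" -User $delegate -ErrorAction SilentlyContinue
    if ($perm) { Write-Output 'Calendar permission is in place.'; break }
    Start-Sleep -Seconds 30                          # allow the 15-minute inter-site replication to catch up
    # retry the set in case the first attempt landed before failover/replication completed
    Add-MailboxFolderPermission -Identity "${mailbox}:\Calendar" -User $delegate -AccessRights $rights -ErrorAction SilentlyContinue
}
```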
Interactive logon: Message title for users attempting to log on
Interactive logon: Message text for users attempting to log on
However, these settings provide no layout options whatsoever. To try to force some semblance of a layout, you need to introduce a character (we used *) at the beginning of each line then add spaces to get an indent. Adding carriage returns in the GPO has no effect! You could just plop the text in as one long line…. Either way it looks crap!
We also have some display screens scattered about that auto-logon. These machines have to be excluded from getting the legal notice, otherwise they would not auto-logon. The initial solution was to just create another OU for these machines, to exclude them from receiving the policy settings. This is simple but messy: forget to put them (pre-staged) in the right place before they are added to the domain, and you then need to trawl through the registry removing the settings….
I was looking for an alternative and found this article on the Scripting Guys website:
I used this as the basis for the script below. My version is applied at the root of the domain as a computer start-up script in a “Base” GPO. This way every client gets the legal text (except DCs – the script is applied to the DCs separately).
Those auto-logon machines are handled automatically. A security group in the domain contains the computer accounts for those machines that you don’t want to get the legal text. If the computer account is in that group, the script ensures that the legal text is not set; if the machine already has the legal text and is then added to the group, the legal text is removed automagically!
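The core of the logic looks something like this sketch – the exclusion group name and the notice text are placeholders, and this simplified version only checks direct group membership:

```powershell
# Sketch of the start-up script logic - group name and legal text are placeholders
$groupName = 'No-LegalNotice'       # security group containing excluded (auto-logon) computer accounts
$regPath   = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
$caption   = 'Authorised Use Only'
$text      = '*  This system is for authorised users only...'

# Look up this computer's account in AD and check (direct) membership of the exclusion group
$searcher = [adsisearcher]"(&(objectCategory=computer)(sAMAccountName=$($env:COMPUTERNAME)`$))"
$account  = $searcher.FindOne()
$excluded = $false
if ($account) {
    $excluded = [bool]($account.Properties['memberof'] | Where-Object { $_ -like "CN=$groupName,*" })
}

if ($excluded) {
    # Auto-logon machine: make sure no legal notice is set (and clear any existing one)
    Set-ItemProperty -Path $regPath -Name legalnoticecaption -Value ''
    Set-ItemProperty -Path $regPath -Name legalnoticetext    -Value ''
}
else {
    Set-ItemProperty -Path $regPath -Name legalnoticecaption -Value $caption
    Set-ItemProperty -Path $regPath -Name legalnoticetext    -Value $text
}
```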
OK, so although all the XML definitions worked fine and the objects were created, the differentiator between a New User and an Existing User moving into a Department was a little poor.
The new user email notification was configured to use an attribute that would make it clear to the reader of the mail if it was a new or existing person. However, by the time this attribute had reached the portal (for a new user), the mail had already been sent with a blank attribute referenced…. Not good!
So a little re-think…. I need another set of Sets to handle new versus existing people for each of those groups of people who want to know!
The existing Set (OU=Blah) was used for both new and existing people: a Transition In was a New User or a user moving into that department, and a Transition Out was a person leaving that department. This one will now be used for New Accounts only.
A new Set was defined: OU=Blah AND where the Portal CreationDate is prior to Today – this will handle Existing users moving into or out of a department. Urgh, but that means I need move-in and move-out MPRs, WFs and templates, plus HTML and XOML. I am not provided an Employee Start date from HR, so cannot use that.
Using the PowerShell module, this is a little easier, but what started as just a re-hack of my original XML file turned into a slight re-design of the XML: a tidy-up, fixing some capitalisation (to allow for easier search/replace) and a better layout for readability.
The existing Move MPR, WF and Template are re-purposed to become the ‘Active’ User move-in (New User), and another series is created for Existing User move-in and move-out (Movers). I also added another recipient for the initial period of testing the notifications – the real recipients will be added once I see that all is working as expected…
“—” is used as the search criteria and replaced with the relevant Department acronym – for me this joins everything up! Note that each template, once fixed, should be within the Lithnet wrapper:
The template job list is as follows:
And with slightly modified XOML to match the XML: the recipients reference now has two entries, and the Email Template reference is changed to match the ID for that template in the XML, e.g. (showing “—” where the Department acronym should be):
One of the things that I had to bear in mind for my FIM implementation was that the current legacy code provides email notifications of changes made. Due to the nature/ structure of the organisation, the different groups only want to know about changes to their own users.
So, when initially thinking about what people are currently informed about, and then the number of MPR’s, Workflows, Sets and Mail templates required to duplicate that functionality – I thought “no way!” It would be unmanageable. In the end and after discussion with the recipients of those mails, I got to a manageable number of things that those people really wanted to know about. These are:
A new user starting (a real new person), or a person who has moved horizontally within the organisation – they have moved INTO that department.
A user in their department becoming disabled
A user in their department becoming deleted
A user moving OUT of their department
So, I’m down to 4 things, but I have 9 different groups to inform – so I have 36 Templates, 36 Workflows, 27 Sets and 36 MPRs to define….. Even if I had a minion, I wouldn’t make them do that manually….
So, during Ryan’s presentation to the FIM User Group, I noted that he had introduced the ability to import these objects by providing an XML file that defined them.
If all looks OK, run the command again without the Preview & Verbose switches.
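In other words, something along these lines (the file name here is just a placeholder):

```powershell
# Preview first (no changes are made), with verbose output to check what would happen
Import-RMConfig -File .\DeptNotifications.xml -Preview -Verbose

# Then apply for real once the preview looks right
Import-RMConfig -File .\DeptNotifications.xml
```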
My initial runs ran into errors due to the way that the employeeEndDate plus 180 days filter was being passed to the FIM service. Ryan has rapidly fixed and produced new versions to resolve these problems.
The great thing about this method is that, by using xmlref pointers, everything just joins up, resulting in a consistent set of objects that reference each other.
Note that the line:
is used to provide a reference to the mail recipient; more recipients could be added using the same structure but a different id. Other pre-existing object references could also be added as appropriate.
So to summarise what is defined in the XML file:
4 Email templates – Add, Disable, Delete & Move – each with a corresponding (and slightly different) HTML file
4 Workflows – Add, Disable, Delete & Move – each with a corresponding (and slightly different) XOML file
3 Sets – User in OU=Blah; User in OU=Blah AND whose Status=Disabled; User in OU=Blah whose end date plus 180 days has passed
4 MPRs – Add, Disable, Delete & Move – all are transition in, except for the Move MPR, which is a transition out.
A few weeks ago, I created a new attribute in the portal called adOU. This attribute contained the Active Directory OU of a user and was defined in code from the HR database input – that’s where the OU assignments are defined…..
I planned to use the attribute to define Sets containing the users whose accounts resided in those OUs. So, after creating and populating that attribute, I started looking at the Sets, Workflows, MPRs and Email templates that I’d require to make use of it – these items were to be used to send email notifications.
When I got to the point of creating the Sets, I noted that I could not choose the adOU attribute as a criterion. I checked the relevant MPRs, Admin filter etc., and all looked OK. Then I realised that when I created the new attribute, I had configured it as an un-indexed string – this was the reason I could not use it in my Set definition.
So, off I go trying to delete the binding and attribute – it always fails, even after doing the few normal bits that I remember I need to do beforehand – I have been here a few times before!
So, it’s about time that I documented all of the steps, to save me (and maybe others) the pain in the future.
1. Delete the attribute mapping in the FIMMA:
2. Un-tick the attribute in the attribute chooser in the FIMMA
3. Remove any references to the attribute in any workflows, MPRs, Sets, mail templates etc. Run an Export policy script like the following, then look for references (e.g. “[//Target/adOU]”) to the attribute in the resulting XML file:
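A sketch of such an export, using the standard FIMAutomation snap-in (the output file name is my own choice):

```powershell
# Sketch: export the FIM Service policy/portal configuration to an XML file for searching
Add-PSSnapin FIMAutomation -ErrorAction SilentlyContinue
$policy = Export-FIMConfig -PolicyConfig -PortalConfig -MessageSize 9999999
$policy | ConvertFrom-FIMResource -File .\policy.xml
# Now search policy.xml for "[//Target/adOU]" (or plain "adOU") to find anything still referencing the attribute
```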
4. Clear the attribute from all users. Previously, I have used an MPR and workflow to do this, but in this case I found it unreliable, so I turned to the Lithnet RMA PowerShell module (https://lithnetrma.codeplex.com/):
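A sketch of the idea is below – the service address is a placeholder, and the starts-with presence test is just one way of finding users that still have a value:

```powershell
# Sketch: clear adOU on every user that still has a value - service address is a placeholder
Import-Module LithnetRMA
Set-ResourceManagementClient -BaseAddress http://fimservice:5725

Search-Resources -XPath "/Person[starts-with(adOU,'%')]" -AttributesToGet adOU | ForEach-Object {
    $_.adOU = $null          # remove the value
    Save-Resource $_
}
```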
5. Finally, delete the binding and then the attribute – they should go now – if not something has been missed.
6. Refresh the FIMMA schema
Now in my case, I then re-created the attribute – as an Indexed String – and then the Binding. Then I refreshed the FIMMA Schema again, added the attribute to the FIMMA picker, recreated the FIMMA flow, and reset the MPRs (to allow the attribute to be managed) and the Admin filter permissions. Then I ran the export to get the data back into the portal. The Set criteria now contain the option to use adOU.
I also need to detect where those addresses might need to be changed. The AD and Exchange infrastructure supports a number of different tenant organisations, each with their own needs. There is regular horizontal movement of users between these organisations.
So I have added to my code to define a unique email address – the additional content starts at Line 72, at the statement “If mventry.Item("mail").IsPresent Then”. Note that attributes are set on the import from HR to define who is entitled to an MBX and who should just be mail-enabled. The current code just logs those things for action, but I have also included making these events throw an exception to the Sync Engine.
It was interesting to see just how many people are not really entitled to a mailbox, but who have one anyway!
The backup files resulting from this script allow me to do a bare-metal restore of a virtual domain controller within ~30 minutes. This assumes complete meltdown of your domain – catastrophic failure, schema issues, compromise, etc. – where you need to restore it from scratch; this is a last-resort action! Once restoration is complete, move on to the rest of the recovery process. Of course, this script will also do nicely for just backing up all critical drives and the registry on any other server or client!
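For context, the heart of such a backup is just a critical-volumes plus system state backup; a minimal wbadmin sketch (not the full script – the backup target is a placeholder) would be:

```powershell
# Minimal sketch: back up all critical volumes and the system state (which includes the registry)
$target = "\\BACKUPSRV\DCBackups`$\$env:COMPUTERNAME"
wbadmin start backup "-backupTarget:$target" -allCritical -systemState -quiet
```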