Working with customers who are starting their migration of identity and administration from on-premises to Azure, I see a couple of options in the installation and configuration of Azure AD Connect that get missed. In particular, once Azure AD Connect is installed and on-premises accounts are synced with Azure, customers find that their Active Directory managed devices are missing from Azure AD. And, of course, this means that Intune can’t see and manage these devices.
During the Azure AD Connect installation, there’s a configuration option available to “Configure Device Options.”
In this instance, the most common scenario for needing to rerun the Sync tool is that specific OUs containing managed devices were missed during the initial configuration. By altering the configuration so that the sync picks up the additional OUs, you’ll see those missing managed devices show up in Azure AD and become manageable using Intune.
One last thing…make sure you also assign an Enterprise Mobility Suite License to the synced users.
To assign an Azure AD Premium or Enterprise Mobility Suite License
Sign in to the Azure portal as an admin.
On the left, select Active Directory.
On the Active Directory page, double-click the directory that has the users you want to set up.
At the top of the directory page, select Licenses.
On the Licenses page, select Active Directory Premium or Enterprise Mobility Suite, and then click Assign.
In the dialog box, select the users you want to assign licenses to, and then click the check mark icon to save the changes.
Hey everyone, Theron (aka T-) here, Senior Consultant with Microsoft Consulting Services (MCS) specializing in Active Directory.
During an engagement with a customer a couple of years ago, I needed to identify some info regarding their domain controllers. They were in the process of deploying System Center Operations Manager (SCOM) at the time, but it wasn’t monitoring the DCs yet, so I couldn’t use it for what I needed. They had ‘another’ management product that may have provided the info, but I wasn’t familiar with it and didn’t think trying to figure it out was worth the time it would’ve taken. Besides, that wouldn’t have been as interesting as scripting it.
So, with the assistance of a colleague, I wrote a quick script to gather pertinent info about all of the domain controllers in their environment. As with all of my scripts, there may be better ways of doing things, but this accomplished my goals. Also, with this particular script, there are probably things that could be added that would be valuable, but again, this accomplished my goals.
Basically, it’ll connect to each DC in the domain, gather the info and output it into a CSV which will be located in \Documents\Domain_Discovery_Output. The more domain controllers you have, the longer it’ll take to finish. Also, you’ll need to ensure Remote PowerShell requirements are met.
The script is written in PowerShell and located here.
It performs the following:
Checks to see if Domain_Discovery_Output folder exists.
If not, creates one under $Home\Documents.
Outputs a csv file to the Domain_Discovery_Output folder.
Gathers the following information about your domain controllers (see the sketch after this list for the general structure):
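As a rough sketch of that overall flow (this is not the original script, and the properties collected below are placeholder examples only):
# Create the output folder under $Home\Documents if it does not already exist
$outDir = Join-Path $Home 'Documents\Domain_Discovery_Output'
if (-not (Test-Path $outDir)) { New-Item -Path $outDir -ItemType Directory | Out-Null }
# Query every domain controller in the current domain
Import-Module ActiveDirectory
$dcs = Get-ADDomainController -Filter *
# Collect a few example properties from each DC over PowerShell remoting
$results = foreach ($dc in $dcs) {
    Invoke-Command -ComputerName $dc.HostName -ScriptBlock {
        $os = Get-CimInstance Win32_OperatingSystem
        [pscustomobject]@{
            Name            = $env:COMPUTERNAME
            OperatingSystem = $os.Caption
            LastBootUpTime  = $os.LastBootUpTime
        }
    }
}
# Output everything to a CSV in the Domain_Discovery_Output folder
$results | Export-Csv -Path (Join-Path $outDir 'DomainControllers.csv') -NoTypeInformation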
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\System Center Operations Manager
Usually, after deleting the above entries the installation works; however, agent installations from the console as well as manual installations still failed with the above error, even after a server reboot.
The successful workaround I attempted is below:
MOMAgent.msi /qb NOAPM=1 USE_SETTINGS_FROM_AD=0 USE_MANUALLY_SPECIFIED_SETTINGS=0 MANAGEMENT_GROUP=<ManagementGroupNameHere> MANAGEMENT_SERVER_DNS=<ManagementServerFQDNHere> MANAGEMENT_SERVER_AD_NAME=<ManagementServerNameHere> SECURE_PORT=5723 ACTIONS_USE_COMPUTER_ACCOUNT=1 AcceptEndUserLicenseAgreement=1
After running the command in an elevated command prompt, the install was successful. Next, I noticed that the server did not show up in the Management Console. I then checked the configuration of the Microsoft Monitoring Agent under Control Panel and saw that the Primary Management Server was showing “Not Available”.
Only after I deleted the entry and manually re-added it, with the changes applied, did the agent show up in the SCOM Console for approval under Pending Management.
Most of the time we use the familiar Azure portal to consume Azure resources. That is all well and good. However, sometimes we find that using the Azure CLI is easier, as once we perfect the script we can just run it instead of having to use the portal. In this post I present a PowerShell script that I used to
(a) turn on preview features,
(b) register them,
(c) check they are turned on, and
(d) finally consume them. As of now, some of the preview features can only be turned on by using the CLI.
Pre-requisites:
An Azure subscription
Access rights to the subscription
Azure CLI installed on your local (Client) machine from where you will be running the script.
First let’s get connected.
#################################################
##### AKS Preview features ######################
#################################################
## This allows you to have multiple node pools within a single cluster.
## Now you can deploy different applications exclusively to these node pools
## Also, for cluster auto-scaling you need node pools. Without them,
## auto-scaling is not possible. Of course, you can still scale manually.
## Note that node pools are built on top of the VM Scale Set capability of Azure Compute.
az --version
az login --tenant microsoft.onmicrosoft.com
az account set --subscription cxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxf
###################################
###### Set of Registrations #######
###################################
## (1) If not registered, register the container service.
az provider register --namespace Microsoft.ContainerService
## (2) Install the aks-preview extension
az extension add --name aks-preview
## (3) Update the extension to make sure you have the latest version installed
az extension update --name aks-preview
## (4) Register the feature on the Microsoft.ContainerService namespace to have the MultiAgentPool feature (which is preview)
az feature register --name MultiAgentpoolPreview --namespace Microsoft.ContainerService
## (5) Check the status of the feature - it takes time. Only when registered can you go further.
## This should show as "registered".
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/MultiAgentpoolPreview')].{Name:name,State:properties.state}"
############################################
### Now for creating the cluster via CLI ###
############################################
# Create a basic AKS cluster :: We will create all things in this RG.
az group create --name myResourceGroup --location eastus2euap
## Please note the additional options of vm-set-type and load-balancer-sku
az aks create `
--resource-group myResourceGroup `
--name myPreviewK8S `
--vm-set-type VirtualMachineScaleSets `
--node-count 2 `
--generate-ssh-keys `
--kubernetes-version 1.15.4 `
--load-balancer-sku standard
az aks get-credentials --resource-group myResourceGroup --name myPreviewK8S
## Now go onto experiment with adding node pools
az aks nodepool add `
--resource-group myResourceGroup `
--cluster-name myPreviewK8S `
--name mynodepool `
--node-count 3 `
--kubernetes-version 1.15.4
I recently came across an issue where a user-assigned managed identity on a VM was not able to read the properties of the resource group containing the VM it was assigned to. As our deployment relied on these permissions being set, it would fail until the permissions were added.
Normally, you could easily check this in the portal; however, in this case the user doing the deployment didn’t have portal access and had to rely on another person to add or remove the permissions. So they either had to run the deployment and wait for it to fail or succeed, or ping someone with portal access to go check the permissions.
In trying to determine a method for a user without portal access to verify the permissions, I came across this article, but it was geared towards system-assigned managed identities and required giving your virtual machines read rights on the resource group. Additionally, the article only states how to test the identity in Azure Commercial, which didn’t help me as my customer was in Azure Government.
Using this article as a general guide, I pieced together the following steps:
Open a terminal session to the Linux VM that has the user-assigned managed identity assigned
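The exact commands depend on your cloud environment; as a rough sketch for Azure Government (the endpoints, API version and placeholder values below are assumptions to adapt, and it assumes PowerShell Core is available on the VM), you would request a token for the user-assigned identity from the instance metadata service and then try to read the resource group through Azure Resource Manager:
# Request an ARM token for the user-assigned managed identity from the instance metadata service
$resource = 'https://management.usgovcloudapi.net/'   # Azure Government ARM endpoint (assumption)
$clientId = '<client-id-of-the-user-assigned-identity>'
$tokenUri = "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=$resource&client_id=$clientId"
$tokenResponse = Invoke-RestMethod -Headers @{ Metadata = 'true' } -Uri $tokenUri
# Try to read the resource group; an AuthorizationFailed error like the one below means the identity lacks read access
$rgUri = "https://management.usgovcloudapi.net/subscriptions/<SUBID>/resourceGroups/<RG>?api-version=2016-09-01"
Invoke-RestMethod -Headers @{ Authorization = "Bearer $($tokenResponse.access_token)" } -Uri $rgUri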
If you see the below error, it means the managed identity does not have read access
{"error":{"code":"AuthorizationFailed","message":"The client '6210fd8c-560e-499e-9fa2-1aeb6bfe2f64' with object id '6210fd8c-560e-499e-9fa2-1aeb6bfe2f64' does not have authorization to perform action 'Microsoft.Resources/subscriptions/resourceGroups/read' over scope '/subscriptions/SUBID/resourceGroups/RG' or the scope is invalid. If access was recently granted, please refresh your credentials."}}
Hey everyone, Theron (aka T-) here, Senior Consultant with Microsoft Consulting Services (MCS) specializing in Active Directory.
Fine-Grained Password Policies (FGPP) have been around for a while, but in my experience with various customers, they aren’t used often, if at all. This post is an attempt to simplify them, provide some details and list some of the PowerShell CMDLets you can use to manage them. There are plenty of resources out there that outline how to implement them, so I won’t get into that.
FGPP? What?
Windows Server 2008 and above operating systems provide organizations with a way to define different password and account lockout policies for different sets of users in a domain. In Windows 2000 Server and Windows Server 2003 Active Directory domains, only one password policy and account lockout policy could be applied per domain. These settings were specified in the Default Domain Policy for the domain. Thus, organizations that wanted different password and account lockout settings for different sets of users had to either create a password filter or deploy multiple domains.
You can use Fine-Grained Password Policies to specify multiple password policies within a single domain. You can also use them to apply different restrictions for password and account lockout policies to different sets of users in a domain. For example, you can apply more restrictive settings to privileged accounts and less restrictive settings to the accounts of regular users. In other cases, you might want to apply a special password policy for accounts whose passwords are synchronized with other data sources.
Here are some of the details of FGPPs that may help you understand their use a little better:
For the Fine-Grained Password Policy and account lockout policies to function properly in a given domain, the domain functional level of that domain must be set to Windows Server 2008 or greater.
Fine-Grained Password Policies apply only to global security groups and user objects (or inetOrgPerson objects if they are used instead of user objects).
A Fine-Grained Password Policy is referred to as a Password Settings Object (PSO) in Active Directory.
Permissions: By default, only members of the Domain Admins group can create PSOs. Only members of this group have the Create Child and Delete Child permissions on the Password Settings Container object in Active Directory.
In addition, only members of the Domain Admins group have Write Property permissions on the PSO by default. Therefore by default, only members of the Domain Admins group can apply a PSO to a group or user.
The appropriate rights to create and apply PSOs can be delegated, if needed.
Delegation: You can delegate Read Property permission of a PSO to any other group (such as Help desk personnel or a management application) in the domain or forest. This allows the delegated group to see the actual settings in a PSO.
Users can read the msDS-ResultantPSO or the msDS-PSOApplied attributes of their user object in Active Directory, but these attributes display only the distinguished name of the PSO that applies to the user. The user cannot see the settings within that PSO.
A PSO has attributes associated with all of the settings that can be defined in the Account Policies section of a Group Policy, except for Kerberos settings.
Enforce password history
Maximum password age
Minimum password age
Minimum password length
Passwords must meet complexity requirements
Store passwords using reversible encryption
Account lockout duration
Account lockout threshold
Reset account lockout after
In addition, a PSO also has the following attributes:
msDS-PSOAppliesTo. This is a multivalued attribute that is linked to users and/or group objects.
Precedence. This is an integer value that is used to resolve conflicts if multiple PSOs are applied to a user or group object.
Settings from multiple PSOs are not cumulative. Only the PSO with the highest precedence (the lowest Precedence value) is applied.
Read that last bullet again, it’s important!!
PowerShell and all of its Goodness:
While there are several ways to get information about a PSO, assign a PSO, remove assignment of a PSO, or to figure out what settings are applied to a user/group, PowerShell is the easiest…in my opinion.
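For example, a few of the ActiveDirectory module cmdlets involved (the names and values below are illustrative only, not a specific recommendation):
# View all existing Password Settings Objects and their settings
Get-ADFineGrainedPasswordPolicy -Filter *
# Create a new PSO for administrative accounts (example values only)
New-ADFineGrainedPasswordPolicy -Name "AdminAccountsPSO" -Precedence 10 `
    -MaxPasswordAge "30.00:00:00" -MinPasswordLength 15 -ComplexityEnabled $true `
    -LockoutDuration "00:30:00" -LockoutThreshold 5 -LockoutObservationWindow "00:30:00"
# Apply the PSO to a global security group
Add-ADFineGrainedPasswordPolicySubject -Identity "AdminAccountsPSO" -Subjects "Domain Admins"
# Remove the PSO assignment
Remove-ADFineGrainedPasswordPolicySubject -Identity "AdminAccountsPSO" -Subjects "Domain Admins"
# See which PSO actually applies to a given user (resultant policy)
Get-ADUserResultantPasswordPolicy -Identity someuser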
To recap, Fine-Grained Password Policies are a way to apply different password/account lockout policies to various users/groups within a domain. Using them to shorten the password age of your administrative accounts is a sure way of improving security by forcing their passwords to be changed more often. Who isn’t up for improved security?
This is a continuation of a series on Azure AD Connect. The recently published blog post covers a quick introduction to the troubleshooting task available in Azure AD Connect. This post goes through options that are available in Azure AD Connect to apply filtering on objects that should be synchronized. I provide links to all other related posts in the summary section below.
Filtering in the Azure AD Connect installer
The Azure AD Connect sync: Configure filtering document goes through a lot of detail on how you can control which objects appear in Azure AD based on filtering options that are configured. The scope of this post is just the following options, which are available in the Azure AD Connect installer:
Domain-based filtering
Organizational unit (OU)-based filtering, and
Group-based filtering
Domain and OU based filtering
I am combining the domain and OU filtering options as they are covered in one screen of the installation wizard. Using the installation wizard is the preferred way to change both domain-based and OU-based filtering. To get to this screen, we need to follow the custom installation path of the installation wizard. I cover this option here, and I’ll just skip to the place where we have the ability to customize synchronization options. This option is available under additional tasks once custom installation is selected.
This additional task requires credentials of a global administrator account in the Azure AD tenant to proceed. Provide a valid set and click next to move on.
The next screen shows the directories that are already configured. I only have one forest – idrockstar.co.za.
We are now at the first filtering option – domain and OU. To simplify demonstration, I synchronize everything in the child domain (east.idrockstar.co.za) and only the Sync OU in the root domain (idrockstar.co.za).
Let’s explore the second filtering option.
Group based filtering
Moving along brings us the second part – filter users and devices. Here, we specify a group containing objects that we wish to synchronize.
Note that this is currently only intended for pilot deployment scenarios. Nested groups are not supported and will be ignored.
Provide either the name or the distinguished name of the group and resolve to validate, then click next to proceed. This will be followed by selecting optional features and finalizing the configuration.
Testing the effect of filtering
For demonstration and testing, I created three accounts as follows:
First Rockstar – in the synchronized OU and a member of the sync group
Second Rockstar – in the synchronized OU and not a member of the sync group
Third Rockstar – not in the synchronized OU and a member of the sync group
Synchronization Service Manager
Let’s take a quick look at the synchronization manager to see what happens. Only one of the three objects we just created is exported.
Clicking the adds link under export statistics takes us to the object. The properties button exposes details. In this case, we see the object that matches both the OU and group membership synchronization requirements – First Rockstar.
I’ll cover the synchronization service in detail in a future blog post.
Azure Active Directory
To confirm, I also log on to the Azure AD tenant, select Users and search for “rockstar”. The search returns only the one account that was synchronized, which met the criteria.
Summary
I just covered the two synchronization filtering options available in the Azure AD Connect installer – domain/OU and group-based filtering. I’ll take a closer look at the synchronization service in the follow up blog post soon.
With Windows Server 2008/2008 R2 approaching end of support, more organisations are upgrading their Operating Systems to the latest supported versions.
Upgrading Active Directory Domain Services (AD DS) requires a schema update and, ultimately, raising the domain and forest functional levels. Customers are concerned that applications may stop functioning after raising the functional levels, and traditionally there was no turning back once functional levels were raised.
Since the introduction of Windows Server 2008 R2 it is possible to downgrade your functional levels. We are receiving more questions regarding Active Directory functional level downgrade capabilities, as organisations plan their migration to Windows Server 2016/2019. There seems to be a misunderstanding of the downgrade capabilities, especially where the Active Directory Recycle Bin is enabled.
You may find this post by Jose Rodrigues useful. It provides information on the importance of the Microsoft Product Lifecycle Dashboard, which can help identify if products are no longer supported or reaching end of life, and keep your environment supported.
Disclaimer
We always recommend in-depth testing in a LAB environment before completing major upgrades in your production environment if possible. At a minimum, ensure that you have a well-documented and fully tested forest recovery plan. Active Directory functional level rollback is not a substitution for these core recommendations.
The basics
The Domain Functional Level (DFL) for all the domains in a forest has to be raised first, before you can raise the Forest Functional Level (FFL). When attempting to downgrade (lower) the DFL of a domain, you would first need to downgrade the FFL to the same level as the required DFL to be configured. The FFL can never be higher than the DFL of any domain in the forest.
Functional levels determine the available AD DS domain or forest capabilities. They also determine which Windows Operating Systems can be installed on Domain Controllers in the domain or forest. You cannot introduce a Domain Controller running an Operating System which is lower than the DFL or FFL. This needs to be considered when upgrading functional levels but would not have any impact when downgrading functional levels.
Distributed File Service Replication (DFSR) support for the System Volume (SYSVOL) was introduced in Windows Server 2008. Whether you are using Distributed File Service Replication (DFSR) or File Replication Service (FRS), it will not impact the ability to complete a functional level rollback.
Tip: SYSVOL replication should be migrated to DFSR before deploying Windows Server 2016 (Version 1709) or Windows Server 2019 Domain Controllers. FRS deprecation may block the Domain Controller deployment. Beystor Makoala posted a great article about FRS to DFSR Migration and some issues you may experience.
Let’s explore another feature that was introduced with Windows Server 2008 R2.
Active Directory Recycle Bin
The Active Directory Recycle Bin was first introduced with Windows Server 2008 R2. Considering the functional level rollback capability was also introduced with Windows Server 2008 R2, there were clear instructions on rollback capabilities.
You cannot roll back to Windows Server 2008 functional level after the Recycle Bin is enabled. Simple reason being that Windows Server 2008 doesn’t support the Recycle Bin, and the Recycle Bin cannot be disabled.
I’ve seen inconsistent information regarding rollback capabilities when working on newer Operating Systems such as Windows Server 2016 or Windows Server 2012 R2. Some articles indicate rollback cannot be performed at all after the Recycle Bin is enabled and others indicate the lowest functional level that can be utilized is Windows Server 2012.
The Recycle Bin was the only blocker when attempting to lower functional levels initially. The Recycle Bin has been supported since Windows Server 2008 R2 and thus it has no impact when working with any functional levels higher than Windows Server 2008 R2 (which all support the Recycle Bin feature). The Recycle Bin will only be a blocker when attempting rollback to Windows Server 2008.
Summary
We’ve discussed several Active Directory features and their impact when lowering Active Directory functional levels. We’ve determined that, in theory, the lowest functional level that can be utilized with the Active Directory Recycle Bin enabled is Windows Server 2008 R2, and the lowest functional level that can be utilized with the Active Directory Recycle Bin disabled is Windows Server 2008.
In part 2 of this series, I will demonstrate how to lower the domain and forest functional levels, and test the theory to determine the lowest functional levels that can be utilized while running a Windows Server 2019 Active Directory Domain.
In part 1 of this series, we established in theory that we can lower the Active Directory functional levels from the latest functional level to Windows Server 2008 R2, or even Windows Server 2008 if the Active Directory Recycle Bin is not enabled.
I will now demonstrate how to lower the functional levels from Windows Server 2016 to Windows Server 2008.
Lab Configuration
I’ve deployed a three-domain forest with Windows Server 2019 Domain Controllers. This is a root domain with two child domains. The Forest Functional Level (FFL) is Windows Server 2016 and the Active Directory Recycle Bin is disabled (it is not enabled by default when deploying a new forest).
Viewing the forest configuration using Active Directory Domains and Trusts
Viewing domain and forest functional levels using Windows PowerShell
When creating a new Active Directory forest on Windows Server 2019, you can select Windows Server 2008 as the functional level. This should indicate functional level compatibility when using the latest Windows Operating Systems. There is no option to select a Windows Server 2019 functional level. This is because no new functional levels were added with the release of Windows Server 2019.
In the following demonstration, I will attempt to lower the functional level of the root domain (root.contoso.com) and a child domain (child1.root.contoso.com).
The basics
You should be a member of the Enterprise Admins group to raise or lower the FFL and a member of the Domain Admins group to raise or lower the DFL. Enterprise Admins, by default, should have Domain Admin rights in all the domains. Read more on default Active Directory security groups here.
Unlike raising the functional levels, downgrading (lowering) the functional levels can only be accomplished using Windows PowerShell. There are no Graphical User Interface (GUI) tools to accomplish this task.
The Active Directory Module for Windows PowerShell is required for the commands that we will use. Find more information on this module here.
We will use Set-ADForestMode to lower the Forest Functional Level (FFL) and Set-ADDomainMode to lower the Domain Functional Level. You can also use these commands to raise the functional level instead of using the Active Directory Users and Computers, or Active Directory Domains and Trusts management consoles.
Downgrading the Forest Functional Level: Active Directory Recycle Bin disabled
The Forest Functional Level (FFL) should be lowered first before the Domain Functional Level (DFL) can be lowered. Attempting to lower the DFL before the FFL will result in the error below:
Set-ADDomainMode : The functional level of the domain (or forest) cannot be lowered to the requested value
Ensure you are logged on with an Enterprise Admin account. Open Windows PowerShell, enter and execute the following command to lower the FFL of the forest:
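The command takes roughly this form (a representative example, shown with this lab’s names):
Set-ADForestMode -Identity root.contoso.com -Server root.contoso.com -ForestMode Windows2008Forest -Confirm:$false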
I am using the domain and forest names of my lab environment. Replace the -Identity and -Server switches with the appropriate domain names of your environment. Adding -Confirm:$false at the end of the command prevents being prompted to confirm your actions.
No confirmation message is received to confirm that the command was executed successfully. Not receiving any error messages is good. We need to verify the FFL to confirm that the functional level was lowered successfully. This can be completed using the following command in Windows PowerShell:
Get-ADForest | select Name,ForestMode
I want to verify the DFL of the domains, after the FFL was lowered, before I move on to the next step of lowering the DFL of the root domain. I use the following code in Windows PowerShell to accomplish this:
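Something along these lines returns the DFL of every domain in the forest:
(Get-ADForest).Domains | ForEach-Object { Get-ADDomain -Identity $_ -Server $_ | Select-Object Name, DomainMode }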
Downgrading the Domain Functional Level: Active Directory Recycle Bin disabled
The FFL was successfully lowered to Windows Server 2008 while the DFL for all domains are still on Windows Server 2016. I will now lower the DFL of the root domain. I am still logged on with an Enterprise Admin account. Enter and execute the following command in Windows PowerShell to lower the DFL of the root domain:
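For example (a representative command, again using this lab’s names):
Set-ADDomainMode -Identity root.contoso.com -Server root.contoso.com -DomainMode Windows2008Domain -Confirm:$false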
Again, there is no confirmation message that the command was executed successfully and not receiving any error messages is good. Let’s review the DFL of all domains to confirm that the DFL of the root domain was lowered successfully.
I now want to attempt to lower the DFL of a child domain in the forest.
Please note that the domains can be lowered in any order; there is no dependency on the root domain DFL being lowered before lowering the DFL of any child domains. The only requirement is lowering the FFL before lowering the DFL of any domain in the forest.
I am still logged on with an Enterprise Admin account and Windows PowerShell is open. The command syntax is the same except for -Identity and -Server switches which should now be the Fully Qualified Domain Name (FQDN) of the child domain.
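For example, a representative command for the first child domain:
Set-ADDomainMode -Identity child1.root.contoso.com -Server child1.root.contoso.com -DomainMode Windows2008Domain -Confirm:$false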
Attempting to lower the DFL when not logged onto the target domain, as I am doing now with the Enterprise Admin account, may result in an error: Set-ADDomainMode : A referral was returned from the server.
This is prevented by using the -server switch and specifying the Fully Qualified Domain Name (FQDN) of the target domain, as I have done in all my previous steps.
The command executes without any confirmation message or errors. Viewing the DFL of all domains confirms that the DFL of the child domain was successfully lowered to Windows Server 2008.
Summary
I’ve demonstrated that the Active Directory functional levels can successfully be lowered from a Windows Server 2016 functional level to Windows Server 2008 functional level. It is important to note that this was achieved with the Active Directory Recycle Bin disabled.
In part 3 of this series, I will raise the functional levels back to Windows Server 2016, enable the Active Directory Recycle Bin and attempt lowering the functional levels again.
Recently I was posed a question where a customer wanted their users to have a more advanced, or at least more informative, experience when software gets installed. They also required that data be saved so that users’ work is not affected.
Requirements: The Mimecast for Outlook add-in had to be installed (this requires Outlook to close), but users should be clearly warned to save data and then reopen Outlook after completion (or be instructed to reopen it).
The Investigation
Option 1: System Center Configuration Manager “Run another program first”.
My first attempt was to create a PowerShell pop-up and turn that into a package that could run using the ConfigMgr feature “Run another program first” to warn users to close Outlook and save data. For this pop-up, we had to specify how long it waits, which meant that on slower machines it would not wait long enough and on faster machines it waited too long. Although this worked to an extent, I was not yet satisfied with the end product.
Screenshots: Run Another Program First, Timer Pop Up, During Install, After Install
PowerShell Code below:
#.\PopupTimer.ps1 -Mimecastlocation
param (
$MimecastLocation
)
#script for balloon notification
[void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
$objNotifyIcon = New-Object System.Windows.Forms.NotifyIcon
#script to pop up window
$wshell = New-Object -ComObject Wscript.Shell
$wshell.Popup("Mimecast needs to close Outlook to install the latest add in. Please save all your work. Outlook will close automatically in 5 minutes",0,"Done",0x1)
#wait for user to close outlook
Wait-Event -Timeout 300
#force outlook to close (ignore the error if Outlook is not running)
Stop-Process -Name 'OUTLOOK' -ErrorAction SilentlyContinue
#wait before mimecast install
Wait-Event -Timeout 30
#pop up balloon notification
$objNotifyIcon.Icon = ".\Mimecast_M_2015.ico"
$objNotifyIcon.BalloonTipIcon = "Info"
$objNotifyIcon.BalloonTipText = "Do Not Open OUTLOOK while Mimecast is installing"
$objNotifyIcon.BalloonTipTitle = "Install Mimecast Add-In"
$objNotifyIcon.Visible = $true
$objNotifyIcon.ShowBalloonTip(5)
#Run msi to install mimecast
Invoke-Command -ScriptBlock {cmd /c msiexec.exe /i ".\Mimecast_for_outlook_7_7_x64.msi" /qn}
#Balloon Pop up for completion
$objNotifyIcon.BalloonTipText = "You can now open OUTLOOK"
$objNotifyIcon.BalloonTipTitle = "Install Mimecast Add-In was successful"
$objNotifyIcon.Visible = $true
$objNotifyIcon.ShowBalloonTip(5)
#wait after mimecast install
Wait-Event -Timeout 30
#start up Outlook
Start-Process 'OUTLOOK'
Option 2: The PowerShell Application Deployment Toolkit (PSappDeployToolkit)
This neat little toolkit that can be downloaded from https://psappdeploytoolkit.com/ surpassed my expectations when it comes to capabilities, features and design. It has an amazing manual that I will be quoting for the sake of this post.
Extract the Package
Extract the toolkit
Copy the MSI required for installation
Modify the PowerShell
Edit the Deploy-Application.ps1:
Fill in details [line 64 – 72]
Fill in Options [line 121]
Fill in Options [line 141] [line 151]
Fill in Options [line 161] [line 181]
Create the ConfigMgr Application (PSAppDeploymentToolkit Admin Guide)
In this blog I will look at how to convert an existing corporate device to Autopilot.
Configuration
Ensure you have an AD/AAD group that contains the existing corporate devices that you would like to target for Autopilot conversion.
Open the Azure portal and navigate to Microsoft Intune > Device enrollment > Windows enrollment
On the Device enrollment – Windows enrollment blade, select Deployment Profiles in the Windows AutoPilot Deployment Program section
On Windows AutoPilot deployment profiles blade, either select Create profile or select [existing deployment profile] > Properties
On the Create profile blade or the [existing deployment profile] – Properties blade, the setting Convert all targeted devices to AutoPilot must be switched to Yes
On the Assignments blade, select the group that contains all the devices you would like to target
I will target the following device by adding it to the AD/AAD group:
Once the device is added to the targeted group you can confirm by navigating to Microsoft Intune > Device enrollment > Windows enrollment > Windows Autopilot Devices. The process takes a couple of minutes as it assigns the profile to the device.
When you select the device you will be able to confirm that the Profile is assigned and what profile was assigned:
Now that the device has been converted to Autopilot, the device can be reset. The Autopilot Reset option will only be available in the console once the device has been reset and gone through the Autopilot deployment process once. To test this newly added device, I will reset it by either doing a manual reset in Windows Settings or initiating a Wipe in Intune. The device will reset and start the Autopilot deployment.
After completing the Autopilot Deployment we now have the ability to do an Autopilot Reset in the Intune Console.
Summary
With the Convert all targeted devices to Autopilot option you can easily convert corporate-owned devices without the need to import any data.
NB! All corporate owned, non-Autopilot devices in assigned groups will register with the Autopilot deployment service.
I recently assisted a customer whose Name Server (NS) records were disappearing from their DNS zones. All of the Domain Controllers are configured as DNS servers, yet when viewing the NS records for the Active Directory-integrated DNS zones, only a few of these servers had NS records.
The administrators manually re-added the NS records to the DNS zones, only to find that the NS records were missing when reviewing the DNS zone configurations later.
Background
Every DNS server that is authoritative for an Active Directory-integrated DNS zone creates its respective NS record in the DNS zone, which also means that the replication scope of the DNS zone will determine which servers are registered for the specific DNS zone.
When a DNS zone is replicated to all DNS servers in the forest, the zone will contain NS records for all servers in the forest, and when the zone is replicated to all DNS servers in the domain, the zone will only contain NS records for servers in the specific domain where the Active Directory-integrated DNS zone is created.
Active Directory-integrated DNS zone replication scope
Forest-zone replication scope: Contains DNS servers from all domains in the forest.
Domain-zone replication scope: Contains DNS servers from the specific domain only.
The NS records can be managed by selecting the properties of the DNS zone in DNS Manager.
In most deployments, every Domain Controller is also a DNS server.
The DNS Server will create the NS record and Active Directory replication will propagate the change to the relevant DNS Servers, as per the configured DNS zone replication scope.
When NS record registrations are functioning properly, these NS records can be removed from the DNS zone, and the NS records will be re-added when the DNS Server service is restarted.
In this instance, the customer manually added the missing NS records but they were being removed when the DNS Server service restarted.
Resolution
There are two configurations that may impact the creation of NS records in DNS:
Configuration in the Windows registry of a DNS Server, which affects all DNS zones hosted by the DNS server.
Configuration on a DNS zone, which may affect any DNS Server hosting the configured DNS zone.
The registry
In the registry of an affected DNS Server, find the DNS Server service parameters at the following location:
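HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DNS\Parameters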
The following registry value, by default, does not exist in the registry and has to be manually created when required:
Registry value: DisableNSRecordsAutoCreation Data type: REG_DWORD Data range: 0x0 | 0x1 Default value: 0x0
If this registry value exists and is set to 1, the DNS server will not automatically create NS records, for all Active Directory-integrated DNS zones hosted by this server. Changing the value to 0 or deleting the entry will reset automatic NS record behavior to default, resulting in the DNS Server creating NS records for all Active Directory-integrated DNS zones that it is hosting. You must restart the DNS Server service for this value to take effect.
This registry value did not exist on the customer’s DNS servers, which is the default configuration, and thus each server would attempt to create its NS record.
The DNS Zone
To view the AllowNSRecordsAutoCreation configuration of the DNS zone, use the following command:
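One way to view it is with dnscmd, run on one of the DNS servers (the zone name below is a placeholder):
dnscmd . /ZoneInfo contoso.com AllowNSRecordsAutoCreation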
With default configuration the results should be as per the image below. This means all DNS Servers are allowed to automatically create NS records for the zone.
Default configuration
In the customer’s environment we executed the same command and received different results, as per the example below:
Customized configuration
What this result means is that the DNS zone is restricted to allow NS record registrations only from the two specific IP addresses listed in the result.
When there are, for example, 50 DNS servers and only 10 IP addresses are listed, only those 10 servers will be able to create their NS records for the specific zone.
This explains why only some NS records are listed, and not the records from all the DNS servers in the forest or domain. It was also causing the NS records in the customer’s environment to be removed after they had been manually added.
This was easily fixed by executing the following command, which will reset the NS records creation configuration to the defaults, for the specific DNS zone:
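A command along these lines does it; running it without listing any IP addresses clears the restriction (the zone name below is a placeholder):
dnscmd . /Config contoso.com /AllowNSRecordsAutoCreation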
The command needs to be run for each DNS zone you want to configure, but only needs to be executed on one DNS server. Active Directory replication will propagate the changes as per the configured DNS zone replication scope. You can wait for the NS records to be created automatically, or restart the DNS Server service on the affected servers to speed up the process.
Conclusion
There are very specific situations where an administrator may need, or want, to limit the creation of NS records. There may be a requirement to limit NS record creation for a specific DNS zone to only a few servers, or you may want to prevent specific DNS servers from creating NS records in all the DNS zones that they host, for example a DNS server in a branch office.
Feel free to explore the reference article for specific instances where NS record registrations may need to be limited.
Be sure to document any changes made on DNS servers or DNS zones and specify the reason for the specific configurations. This will ensure future administrators understand the configurations, and when reviewing these custom configurations, also have enough information to determine if they are still required.
Reference:
Problems that can occur with more than 400 Domain Controllers in Active Directory integrated DNS zones:
In part 2 of the series we’ve successfully lowered the Forest Functional Level (FFL) and Domain Functional Level (DFL) to Windows Server 2008. The demonstration was completed in a forest where the Active Directory Recycle Bin was not enabled.
In this final part of the series, I will first raise the functional levels back to Windows Server 2016, enable the Active Directory Recycle Bin, and then lower the functional levels. As determined in part 1 of the series, we should be able to lower the functional levels to Windows Server 2008 R2 but not Windows Server 2008.
Lab Configuration
The Forest Functional Level is set to Windows Server 2008 and the Domain Functional Level of the root domain (root.contoso.com) and a child domain (child1.root.contoso.com) is also set to Windows Server 2008. The remaining child domain (child2.root.contoso.com) is set to Windows Server 2016.
Forest and domain functional levels viewed using Windows PowerShell
Raising the Domain Functional Level (DFL) and Forest Functional Level (FFL)
We’ve determined that the FFL cannot be lower than the DFL of any domain in the forest, which means the DFL of the root and child domain needs to be raised to Windows Server 2016 first. Let’s see what happens when we attempt to raise the FFL to Windows Server 2016 first.
In Windows PowerShell I run the following command:
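The command is along these lines (a representative example using this lab’s names):
Set-ADForestMode -Identity root.contoso.com -Server root.contoso.com -ForestMode Windows2016Forest -Confirm:$false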
The result is no confirmation or error message, which we already know means that the command completed successfully. How is this possible when we haven’t raised the DFL of all the child domains? Let’s confirm this using Windows PowerShell:
The results in PowerShell indicate that, while raising the FFL to Windows Server 2016, the DFL of all the domains was automatically raised to Windows Server 2016.
Be careful not to raise the FFL by mistake when planning on changing the DFL of a single domain. This may result in unknowingly raising the DFL of all your domains in the forest.
I should also note that this will fail if not all of the Domain Controllers are on the required Operating System version. In the following example, I attempted the same action, but a Windows Server 2012 R2 Domain Controller still existed in a child domain. I received an error message:
Set-ADForestMode : The functional level of the domain (or forest) cannot be raised to the requested value, because there exist one or more domain controllers in the domain (or forest) that are at a lower incompatible functional level.
The FFL is raised to Windows Server 2016 and now we can enable the Active Directory Recycle Bin to determine the outcome of lowering the functional levels with the recycle bin enabled.
Enable the Active Directory Recycle Bin
Windows PowerShell can be used to verify if the Recycle Bin is enabled or not.
Get-ADOptionalFeature -Filter 'name -like "Recycle Bin Feature"'
We can see from the PowerShell results that the required FFL to enable the Recycle Bin is Windows Server 2008 R2. The EnabledScopes attribute indicates whether the Recycle Bin is enabled or not. The current value is blank which means that the Recycle Bin is not enabled in this forest yet.
The following command is used in PowerShell to enable the Recycle Bin. Replace -Target with the forest root domain Fully Qualified Domain Name (FQDN).
Enable-ADOptionalFeature 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target root.contoso.com
You will be prompted to confirm your actions. Take note of the warning that this action is not reversible. The Recycle Bin cannot be disabled after it is enabled. No confirmation message is provided to confirm that the Recycle Bin was successfully enabled. Again, no error messages are good.
This should also prevent lowering the Forest Functional Level to Windows Server 2008, because the recycle bin was only introduced with Windows Server 2008 R2.
I will run the Get-ADOptionalFeature command again to verify the Recycle Bin status.
The EnabledScopes attribute is no longer blank. This is the indicator that the Recycle Bin is enabled in the forest.
Downgrading the functional levels: Active Directory Recycle Bin enabled
The FFL will now be lowered. The first attempt was to set the FFL to Windows Server 2008, which failed as shown in the screenshot. We then attempted lowering the functional level to Windows Server 2008 R2, which resulted in no error or success message, indicating the FFL was lowered successfully.
Set-ADForestMode : The functional level of the domain (or forest) cannot be lowered to the requested value
Verify that the FFL is lowered to Windows Server 2008 R2
The DFL of the child domain (child2.root.contoso.com) will now be lowered.
The first attempt was to set the DFL to Windows Server 2008 which failed as shown in the screenshot. The second attempt set the DFL to Windows Server 2008 R2.
Verify the Domain Functional Level. The DFL of the child domain was successfully lowered to Windows Server 2008 R2.
Conclusion
I’ve successfully demonstrated that the Active Directory functional levels can be lowered from Windows Server 2016 functional level, to Windows Server 2008/2008 R2 functional levels, depending on whether the Active Directory Recycle Bin is enabled or not.
The rollback can be completed from any functional level since Windows Server 2008; just keep the Active Directory Recycle Bin in mind, since once it is enabled you cannot roll back to Windows Server 2008.
If you are planning on upgrading your Active Directory infrastructure, whether this is from Windows Server 2008/2008 R2 or Windows Server 2012/2012 R2, you should now be able to complete this with more confidence. Raising the Active Directory functional levels should be an easier step, knowing you have the option of rolling back to the previous functional level should you experience any unexpected issues.
This is a continuation of a series on Azure AD Connect. In the previous blog post, we looked at filtering options that can be used to control which objects are synchronized from on-premises directories to Azure AD – domain, OU and group filtering. I would like to take a closer look at group filtering here, and discuss some gotchas that I briefly touched on in previous posts of this series. If you have not seen the previous blog post on object filtering using Azure AD Connect, I suggest you start here. Other related (previous) posts are provided in the summary section below.
Security Group Filtering
The filtering on groups feature allows you to synchronize only a small subset of objects for a pilot. Group-based filtering can be configured the first time Azure AD Connect is installed by using the custom installation option. Details are available in this document, which also highlights the following important points:
It is only supported to configure this feature by using the installation wizard
When you disable group-based filtering, it cannot be enabled again
When using OU-based filtering in conjunction with group-based filtering, the OU where the group and its members are located must be included (selected for synchronization)
Nested group membership is not resolved – objects to synchronize must be direct members of the group used for filtering
Let’s go through some cases to demonstrate:
(1) nested groups, and
(2) what happens when the group used for filtering is moved to a different OU.
The case of the nested group
In the previous post on filtering, we only had two user objects in the security group that we use for filtering – First Rockstar and Third Rockstar.
The name of the group is IDRS Sync in this example
I have just added a group named Nested Group to the sync group in order to demonstrate the requirement for direct membership. Members of the IDRS Sync group are now:
First Rockstar (user)
Third Rockstar (user)
Nested Group (security group)
Nested Group is a security group containing one member – Fourth Rockstar as shown above.
With this in place, a quick look at the Troubleshooting Task that I introduced here reveals that the object (Fourth Rockstar):
is found in the AD Connector Space
is found in the Metaverse
is not found in the Azure AD Connector space – no export
Fourth Rockstar is in the OU selected for synchronization, but the account is filtered out because it is not a direct member of the sync group.
In the Synchronization Service Manager, we can see that only the group was exported, but not the account that was added to the group itself. This confirms what the troubleshooting task picked up.
To get Fourth Rockstar synchronized, we would have to add the account as a direct member of the IDRS Sync group.
The case of the changed distinguished name
Let us now cover a scenario where the group used for filtering is moved to an OU that is not selected for synchronization. In this example, I moved the IDRS Sync group from the Sync OU to the VIP OU.
The distinguished name changed from CN=IDRS Sync,OU=Sync,DC=idrockstar,DC=co,DC=za to CN=IDRS Sync,OU=VIP,DC=idrockstar,DC=co,DC=za
If you look at the Synchronization Service Manager, you will notice that the group is removed from the on-premises directory connector and the metaverse. (The VIP OU is not selected for synchronization.)
It may appear that First Rockstar was not removed at first. It is still available in Azure AD at this stage. Remember that this was the only account that was in the OU selected for synchronization AND in the IDRS Sync group (previous blog post).
A synchronization cycle later, the object is deleted.
A quick refresh now shows that the account (First Rockstar) was deleted as a result of moving the IDRS Sync group to an OU that is not in scope of synchronization. This may not be a desired outcome!
Notice the error that clearly states what the problem is when we look at the filter users and devices page in Azure AD Connect. The distinguished name of the group has changed.
For my tenant, I am going with the synchronize all users and devices option to make life easy and align with the recommendation against use of this feature for production deployments.
Summary
I just went through two of the scenarios covering challenges that could be faced when using group filtering. Please note that this feature is currently only intended to support a pilot deployment and should not be used in production.
Most of the services in Azure, such as Storage Accounts, Key Vaults or App Service websites, must have globally unique names, where the fully qualified domain name (aka FQDN) for the service uses the name you selected plus the suffix for the specific service. For example, for Key Vaults it’s vault.azure.net and for WebApps it’s azurewebsites.net.
The Azure portal can help you determine name availability during service creation, but there’s no built-in PowerShell cmdlet or Azure CLI command to do so for ARM services (in the old ASM days, we had the Test-AzureName PowerShell cmdlet we could use to check a classic cloud service’s name availability).
For scenarios where you have an automated deployment and don’t want the deployment failing because of the name availability, you’d want to have a simple command that returns a true/false boolean value that determines if the name is already taken or not.
Proposed solution
Several of the Azure providers have an API that exposes a checkNameAvailability action that you can use to test the name’s availability. Each provider requires and accepts a different set of parameters, where the most important ones are obviously the name you want to check and the service type.
To get a list of the providers that support the checkNameAvailability action, you can use the following PowerShell command:
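One option (a sketch using the Az PowerShell module) is to filter the full provider operations list for the checkNameAvailability operations:
Get-AzProviderOperation * | Where-Object { $_.Operation -like '*checkNameAvailability*' } | Select-Object Operation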
Invoking Azure APIs using PowerShell is simple enough: you just need the bearer token, the URI of the API action and the parameters the action needs. For some of the APIs we also need a subscription ID to work with.
The important and main function is Test-AzNameAvailability:
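As a simplified sketch of the idea (this example handles only storage accounts and assumes the public-cloud ARM endpoint and an example API version; the actual function covers more service types):
function Test-AzNameAvailability {
    param(
        [Parameter(Mandatory)] [string] $Name,
        [Parameter(Mandatory)] [string] $ServiceType,
        [Parameter(Mandatory)] [string] $AuthorizationToken,
        [Parameter(Mandatory)] [string] $SubscriptionId
    )
    # Example for a single service type; other types use different providers/URIs and body formats
    $uri  = "https://management.azure.com/subscriptions/$SubscriptionId/providers/Microsoft.Storage/checkNameAvailability?api-version=2019-06-01"
    $body = @{ name = $Name; type = 'Microsoft.Storage/storageAccounts' } | ConvertTo-Json
    $response = Invoke-RestMethod -Method Post -Uri $uri -Body $body -ContentType 'application/json' `
        -Headers @{ Authorization = "Bearer $AuthorizationToken" }
    [pscustomobject]@{
        Name      = $Name
        Type      = $ServiceType
        Available = $response.nameAvailable
        reason    = $response.reason
        message   = $response.message
    }
}
# A bearer token for the signed-in user can be obtained with the Az module, for example:
# (Get-AzAccessToken -ResourceUrl 'https://management.azure.com/').Token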
To use it, you first have to get a bearer token, for either the current logged on user or for a service principal using one of the two functions Get-AccesTokenFromServicePrincipal or Get-AccesTokenFromCurrentUser:
Name : martin
Type : ApiManagement
Available : False
reason : AlreadyExists
message : martin is already in use. Please select a different name.
Name : kv
Type : KeyVault
Available : False
reason : Invalid
message : Vault name must be between 3-24 alphanumeric characters. The name must begin with a letter, end with a letter or digit, and not contain consecutive hyphens.
Name : root
Type : ManagementGroup
Available : False
reason : AlreadyExists
message : The group with the specified name already exists
Name : martin
Type : Sql
Available : False
reason : AlreadyExists
message : Specified server name is already used.
Name : storage
Type : StorageAccount
Available : False
reason : AlreadyExists
message : The storage account named storage is already taken.
Name : www
Type : WebApp
Available : False
reason : AlreadyExists
message : Hostname 'www' already exists. Please select a different name.
So in a full script, you could use something like:
$params = @{
Name = 'myCoolWebSite'
ServiceType = 'WebApp'
AuthorizationToken = Get-AccesTokenFromCurrentUser
SubscriptionId = $subscriptionId
}
if((Test-AzNameAvailability @params).Available) {
# Continue with the deployment
}
Closing notes
The checkNameAvailability API is available in several Azure providers, but because of time constraints I implemented the test only for a few of them (ApiManagement, KeyVault, ManagementGroup, Sql, StorageAccount and WebApp), so you are more than welcome to improve it.
Your technical skills are honed to a fine edge. You’re a ninja when it comes to Active Directory, SQL, or Exchange. Server crash? You got this! PowerShell scripting? It’s your superpower! Speaking in front of an audience? Handling an upset customer? Answering the unanticipated question? Your palms sweat, your stomach hurts, your head spins. “Someone, anyone, please help!” is all you can think.
We have all heard people are more afraid of public speaking than they are of death. Have you had to speak with a customer or in front of an audience? While a certain level of anxiety is normal, you can learn how to master the art of communication whether it’s one-on-one, in meetings, or in front of an audience. Read on to learn how to teach your butterflies to fly in formation!
Yes, you can!
You may be saying to yourself, no way. Not me. Not possible. I truly would rather die than give a presentation or talk to someone I don’t know. Let me tell you a short story to encourage you that it is possible to overcome your fears.
Many years ago, I met a man; we’ll call him Jim. Jim was so terrified of talking to people he could not even say hello when he was introduced to someone. At his wife’s urging, they joined a group called Toastmasters International. Jim’s first goal was to stand up in front of the group for 30 seconds. Easy, you say. Not for Jim. His anxiety was so high, it took him several months just to hit the 30-second mark.
Jim then set a second goal. He wanted to be able to say “hello” to his audience. Again, it took several months before Jim was able to confidently utter the words, “Good evening fellow Toastmasters.”
Fast forward several years. I was working a local event for the Chamber of Commerce. And who did I see at the event? Jim! Not only was Jim at the event, he came up to me and said hello. He let me know that because he learned to overcome his fear of public speaking, he now had his own business selling eyeglass frames and was doing well with it!
If Jim was able to overcome his fears, I know you can too! In this series of blogs, I am going to teach you the basics of public speaking, provide resources to assist you, and help you build the confidence you desire to take control of the butterflies!
Interpersonal Communication
Before we dive into public speaking, AKA delivering MIP to an audience, let’s look at interpersonal communication.
Interpersonal communication is simply a conversation between two people. It can be positive or negative. Positive conversations might include talking with a co-worker about your weekend, a casual conversation with a customer, or meeting someone new. While these can be stressful situations, it is typically the difficult conversations that cause us high levels of anxiety. These conversations might include disputing a charge on a bill, a discussion with your auto mechanic about what is really wrong with your car versus what he tells you is wrong (and how much it will cost to repair it!), being interviewed for a new position, or dealing with an unhappy customer.
Each of these conversations can be challenging and stressful. However, if you have the necessary skills, handling uncomfortable conversations will no longer cause you to sweat. And, once you master these, you will be able to master the art of public speaking, aka MIP delivery, with ease!
Toastmasters’ Levels of Conversation
Every relationship starts with conversation. Toastmasters International (TI) defines four levels of conversation. Level One is small talk – talking about the weather, maybe a concert or play, current events, etc. At this level, the conversation remains neutral and does not typically delve into personal topics or opinions.
Level Two is the fact-finding and disclosure level. Here we are starting to build enough trust to disclose a few personal facts about ourselves. We may discuss our occupations, whether we are married or single, our kids, or our hobbies. At this level, we are looking for common ground to see if we wish to continue to invest in a relationship with the other person.
Level Three raises the stakes. We are feeling comfortable and positive with the other person and our conversations. This may occur at the initial meeting or at later subsequent meetings. You begin to express personal opinions on different topics and may discuss different viewpoints. You are opening yourself up to the other person.
Finally, you reach Level Four. The relationship is deepening and there is a strong comfort level with this person. You share similar views and find you have enough in common to want to continue the relationship. Several encounters are usually needed to reach this level. Topics are now of a more personal nature. You may disclose an issue you are having with your spouse, kids, or at work and seek advice, discuss concerns you both have, or other topics you would not disclose or discuss with a stranger.
Not all conversations/relationships will make it to Level
Four. Nor should they. In the business environment, you will most likely only
speak with your customer at Level One or Level Two. If you move through the
levels too quickly, you could overwhelm the other person, causing them to shut down
to whatever it is you are sharing or the message you wish to communicate. Your
customer probably doesn’t care about the argument you had with your child, or
your neighbor who keeps letting their dog destroy your yard. Getting too
personal in the workplace can diminish your professionalism and detract from your
credibility with your customer.
What we’ve learned
In this post, we learned you are not alone in your fear of
public speaking. We also learned you can overcome this fear! We learned about Toastmasters’
four levels of interpersonal communication. This will allow us to tailor our
conversations to the environment in which we find ourselves, as well as giving
us guidelines on how fast to move when desiring to build a relationship or
rapport with another person. It also reminds us that we should not strive to engage at every level with everyone we happen to be in conversation with.
Next time…
In my next post, I will introduce you to tips and tricks for dealing with those difficult conversations we all must have at one time or another including “does it really help to picture the audience in their underwear?”
If you have guest access to multiple directories, then switching is fairly easy. You simply click your username, click Switch Directory, and then choose your directory. Below is a simple example. But what happens when you try to switch to these directories in other portals, like the Desktop Analytics portal (devicemanagement.portal.azure.com)? In my experience, it reverted me back to my default directory with no option to change directories.
Example 1: Switching Directories in the Azure Portal
Click Switch Directory. Select the directory. Easy, right?
Example 2: Switching Directories in the Device Management Portal
Navigate to the Device Management portal. As you can see, the directory has been changed back to my default, which is not what I wanted.
The Resolution
As you can see, the portal keeps reverting to the default directory, but I need access to my other directory.
1. Get the domain of the directory you would like to navigate to.
2. Add this directory domain to the URL, as per the example below.
3. You are now logged in with the correct directory.
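For reference, here is a minimal sketch of what the modified URL looks like, assuming a hypothetical tenant domain of contoso.onmicrosoft.com (substitute the domain of the directory you actually need):

https://devicemanagement.portal.azure.com/contoso.onmicrosoft.com

This is the same tenant-in-the-URL pattern that works for the main Azure portal (for example, https://portal.azure.com/contoso.onmicrosoft.com), so you land directly in the directory you want instead of your default.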
As always, I hope this has been informative, and feel free to correct me on any of the steps.
Windows Server 2019 has a lot of additional capabilities that can be added. Those features are easily added with the Add-WindowsCapability PowerShell cmdlet. When adding a capability, it pulls from either the Internet or a WSUS server. Sometimes the capability needs to be added in an offline environment where there is no Internet and the WSUS server is non-existent or does not have the package. In that case, the Windows Server 2019 Features On Demand (FOD) ISO is needed, and the -Source parameter can then be used to add the capability. The Features On Demand ISO can be downloaded from MSDN or my.visualstudio.com.
While there is a Windows Server 2019 Features On Demand ISO, it does not contain all the capabilities, such as the OpenSSH server. That capability is on the Windows 10 Features On Demand ISO. However, the Windows 10 Features On Demand ISO cannot be used directly on a Windows Server 2019 OS. There is a little workaround, though.
For this workaround you will need both the Windows Server 2019 Features On Demand disc and the Windows 10 Features On Demand disc. Once you have both discs/ISOs downloaded, follow these simple steps.
Extract the entire Windows Server 2019 Features On Demand ISO to a local directory on the server (e.g. C:\FOD).
Open up the Windows 10 Features On Demand ISO and copy the following cab files to the directory with the extracted Windows Server 2019 Features On Demand files.
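With both sets of files staged in the same local directory, the capability can then be installed by pointing the -Source parameter at that folder. Here is a minimal sketch, assuming the C:\FOD path from step 1 and the OpenSSH server capability; confirm the exact capability name on your system first:

# Confirm the exact name of the capability as it appears on this OS
Get-WindowsCapability -Online | Where-Object Name -like "OpenSSH*"

# Add the capability from the local FOD directory only (no Internet or WSUS lookup)
Add-WindowsCapability -Online -Name "OpenSSH.Server~~~~0.0.1.0" -Source C:\FOD -LimitAccess

The -LimitAccess switch keeps the servicing stack from trying to reach Windows Update, which is exactly what you want in an offline environment.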
This is a continuation of a series on Azure AD Connect. I recently covered the domain/OU and group filtering options that are available in Azure AD Connect to help control which objects are synchronized to Azure AD. I also took a closer look at group filtering, which is not recommended for use in production. Another filtering mechanism I would like to cover before moving on to another topic is attribute-based filtering. This is, however, not something we achieve through the Azure AD Connect wizard that we have been using throughout the series, but through the Synchronization Rules Editor. A full list of related blog posts is provided in the summary section below.
Attribute-based filtering
We now know that filtering using a security group is not recommended, as pointed out in the previous blog post. What other options do we have if we wanted, say, to filter out (exclude) some of the user objects residing in an OU selected for synchronization? Attribute-based filtering! The Azure AD Connect sync: Configure filtering document has finer details on attribute-based filtering. I'll just go through an example to see how this feature could be leveraged to filter objects based on attribute values.
Environment setup
To simplify demonstration of this feature, I focus on only one of the domains I have in my test AD forest – idrockstar.co.za. The VIP OU in that domain is already selected for synchronization as shown below.
I created two user accounts in the VIP OU:
First VIP – should be synchronized to Azure AD
Second VIP – should NOT be synchronized to Azure AD (cloud filtered)
I further updated Second VIP's extensionAttribute15 attribute to have a value of NoSync. The idea is to apply negative filtering based on this attribute, but more on this is covered in the next section.
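For reference, here is a quick sketch of setting that attribute with the ActiveDirectory PowerShell module, assuming a hypothetical sAMAccountName of secondvip for the Second VIP account:

# Stamp the filtering value on the account that should be excluded from synchronization
Set-ADUser -Identity secondvip -Add @{extensionAttribute15='NoSync'}

# Verify the value was written
Get-ADUser -Identity secondvip -Properties extensionAttribute15 | Select-Object Name, extensionAttribute15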
Applying attribute-based filtering
The tool for this job is the Synchronization Rules Editor. This tool can be used to view, edit and/or create new synchronization rules that control attribute flows.
Once the tool is open, new rules can be added with the add new rule button. Note that the inbound direction is already selected by default; I highlight this because there is also an option for outbound rules, which I don't cover in this post.
Clicking the add new rule button opens the create new inbound synchronization rule wizard that is needed to apply the negative filter (do not synchronize objects that meet the criteria). I provide the following information on the description page and click next to proceed:
Name: this should describe the purpose of the rule (visible in the default view of Synchronization Rules Editor)
Description: more details on what the rule aims to achieve (optionally used to provide more information)
Connected System: this is the on-premises directory – idrockstar.co.za in my case
Connected System Object Type: target object type is user in this example
Metaverse Object Type: user objects are presented as person type in the metaverse
Link Type: join is selected by default – I leave this unchanged
Precedence: defines which rule wins in case of a conflict when more than one rule contributes to the same attribute. The rule with the lower precedence number (higher priority) wins.
The rest of the fields are not necessary for this exercise.
On the scoping filter page, I click add group, followed by the add clause button, and specify the value of NoSync for extensionAttribute15.
I click next, and next again to skip the join rules as they are not required for our task. On the transformations page, I click the add transformation button and complete the form as follows:
FlowType – Constant
Target Attribute – cloudFiltered
Source – True
I leave everything else default.
To finish off, I click add at the bottom of the page (not shown in the screenshot). A warning message stating that a full (initial) synchronization will be run on the directory during the next synchronization cycle is displayed. Be prepared for this when you apply this feature in your environment. I click OK to dismiss the dialog box.
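If you would rather not wait for the scheduler to pick this up, the cycle can also be triggered manually from the ADSync PowerShell module on the Azure AD Connect server; a quick sketch:

# Kick off the full (initial) synchronization cycle immediately
Start-ADSyncSyncCycle -PolicyType Initial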
Looking back at the main Synchronization Rules Editor window, we can confirm that the new rule was added.
The effect of attribute-based filtering
Looking at the Troubleshooter that we covered here, we see that:
the Second VIP user object is found in the AD Connector Space
the Second VIP user object is found in the Metaverse, but
the Second VIP user object is not found in the Azure AD Connector space
The Connector Space Object Properties window in the Azure AD Connect Synchronization Service shows that Second VIP has been deleted (it had initially been exported).
The Metaverse Object Properties window confirms that the cloudFiltered attribute was indeed set to the value of true by the rule we created. (The connectors tab would also reveal that the object is only present in the on-prem AD connector and not in the Azure AD connector.)
Finally, looking at Azure AD confirms that Second VIP was filtered out and is not available in the Azure AD user list. Only First VIP is showing.
Summary
This was the third blog post on filtering, covering attribute-based filtering in Azure AD Connect. This feature provides a way to filter objects based on attribute values. Below is a list of references that provide a lot more detail if required. I have also provided a list of all previous Azure AD Connect-related blog posts below.
This is an intro to a multi-part series on building portable labs.
Boldly Going
One thing I have found invaluable throughout my career has been the ability to maintain a decent lab environment, something that has been an ongoing struggle over the years. Early on it was all about the hardware. One of my earliest labs grew to roughly 15+ Frankenstein mini-tower systems I had cobbled together into a platform for learning to work with Novell/Windows NT, Banyan Vines, TCP/IP (replacing IPX/SPX at the time), plus a whole bunch of old-timer stuff I won't bore you with here.
Over time, technology changed and my lab with it. KVMs let me remove several monitors and reduced hot-swapping problems. Combined server roles like Small Business Server (SBS) let me reduce the number of lab-production machines to basically one box. Virtualization with VMware got me down to four boxes total, and when Hyper-V finally arrived with Windows Server 2008, I was able to run my lab on two Microsoft Hyper-V boxes with beefed-up RAM and HDDs. This lasted a few years until the hardware finally succumbed to its age and died on me. I decided to try rack-mount systems (I went with used because I am WAY too cheap for new) and am still engineering that particular solution. If my free time allows for it, I may try documenting that in another series.
Setting a Course
What this series will focus on is the portability of lab environments and how to work with them. While travelling for work, I've observed this as an issue for myself and many of the people I work with. Like many in the industry, I spend a lot of time on the road and do not always have access to my permanent lab environment. I needed a local solution I could easily keep with me that was self-contained, quickly configurable, and easily shared with team members in a pinch. I also wanted it lightweight, since my bag was heavy enough already.
Basic Hardware Setup
I write this with the understanding that our work PCs (in my case, a Surface Book Pro) are fixed in terms of RAM and drive space, and the options to change them are limited at best. But as long as your RAM is at least 16GB, the right external drive solution will fix the drive-space limits (usually 256/512GB) inflicted on our machines.
NOTE: 8GB of RAM would work, but you would be limited to one or two VMs running at most.
After much research and testing of various drives on my machine, I settled on the SanDisk 2TB Extreme Portable External SSD (USB-C, USB 3.1).
With a capacity of 2TB, this SSD has more than enough space to hold my lab VMs as well as any ISOs I may need to build a new lab. It comes in a rugged case and is amazingly light. It has a USB-C connection, and the cable comes with a USB 3.1 adapter. I found its performance on both ports to be exemplary. I have had many VMs running concurrently, and the combined SSD/USB 3 interface has never been an issue on my machine. My main limiting factor has always been the RAM. I've used this drive on my 32GB laptop with no discernible performance degradation.
For secondary storage I added an SD card (micro in this case) to each of my machines to house lab configs, scripts, extra ISOs, or other files I might need in a pinch if I happened to be caught without my LabDrive. I went with the SanDisk Ultra 128GB microSDHC UHS-I card (it came with an SD adapter):
It was the largest available at the time and is designed for photography, so it is one of the faster SDs out there (98MB/s).
Setting the Environment (5 Years?!?)
I hope you find the hardware recommendations useful in your lab endeavors. I do NOT recommend running an external lab drive on any USB port slower than USB 3; the performance hit is simply too crippling. A USB 3 HDD can be used, but for better, more consistent performance I would stick with SSDs.
In an upcoming blog, I will cover tips for setting up the environment and tweaking configurations, including enabling Hyper-V and some default settings you need to be wary of when working with a lab drive. I will also demonstrate PowerShell vs. GUI configurations, as automation is the key to rapidly deploying a functioning lab.
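As a small preview of the PowerShell route, here is a minimal sketch of enabling Hyper-V on a Windows 10 machine and pointing the default VM paths at the external lab drive; the D: drive letter and folder names are hypothetical, so adjust them to wherever your SSD mounts:

# Enable the Hyper-V feature and management tools (a reboot is required)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# Point the Hyper-V defaults at the external lab drive
Set-VMHost -VirtualHardDiskPath 'D:\Lab\VHDs' -VirtualMachinePath 'D:\Lab\VMs'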
There is an excellent tutorial here by Jaromir Kaspar that goes into rapidly deploying labs on Windows Server 2016 and Windows 10. I highly recommend it, especially if you're craving better automation.