Secure Infrastructure Blog

System Center Configuration Manager – Powershell Query .MIF .SID and .SIC files in inboxes


The Issue

Is there a script that can read through the Configuration Manager inboxes (\Microsoft Configuration Manager\inboxes\auth\sinv.box\BADSinv) and output/return a list of computer names which failed their software inventory?

There is a similar script that does this for hardware inventory by querying *.MIF files:

$ConfigMgrBoxPath = "C:\Program Files\Microsoft Configuration Manager\inboxes\auth\dataldr.box\BADMIFS"
Get-ChildItem -Path $ConfigMgrBoxPath -Include *.MIF -Recurse -Force -ErrorAction SilentlyContinue | ForEach-Object {
    $File = $_.FullName
    try {
        (
            Get-Content -ReadCount 1 -TotalCount 6 -Path $_.FullName -ErrorAction Stop |
            Select-String -Pattern "//KeyAttribute<NetBIOS\sName><(?<ComputerName>.*)>" -ErrorAction Stop
        ).Matches.Groups[-1].Value
    } catch {
        Write-Warning -Message "Failed for $File"
    }
} | Out-File -FilePath "c:\test\output.txt"

To read more about it, see https://blogs.technet.microsoft.com/scotts-it-blog/2015/04/29/identifying-and-counting-computers-sending-badmif-files/

The Investigation

The issue with the above script is that Hardware Inventory (*.MIF) files are much better structured than Software Inventory files (*.SID, *.SIC)

(Screenshots omitted: a structured MIF file compared to an SID/SIC file.)

So I modified the original script to try to query *.SID files, but it failed, even after trying to learn string patterns and regular expressions (https://regexr.com/).

The closest I got was a regex-based attempt (screenshot omitted), but it was still not good enough: when there is more than one set of dashes it does not return the correct computer name.

The Solution

The final solution was simplifying the PowerShell script to the below

Removing the '#' will run it and create a text file called output.txt (make sure to specify the path).

$ConfigMgrBoxPath = "C:\Program Files\Microsoft Configuration Manager\inboxes\sinv.box\BADSinv"
Get-ChildItem -Path $ConfigMgrBoxPath -Include *.SID,*.SIC -Recurse -Force -ErrorAction SilentlyContinue | ForEach-Object { (Get-Content $_).Split("0")[12] } | Out-File -FilePath "c:\test\output.txt"
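
If the same machine fails inventory repeatedly, output.txt will contain duplicate names. A minimal follow-up sketch to de-duplicate and count the results, assuming the output file above exists (paths are placeholders):

# Read the output file, drop empty lines, and keep one entry per computer
$names = Get-Content -Path "c:\test\output.txt" | Where-Object { $_ } | Sort-Object -Unique
"{0} unique computers with bad software inventory" -f $names.Count
$names | Out-File -FilePath "c:\test\output-unique.txt"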



Importance of the Microsoft Product lifecycle dashboard – Keeping your environment Supported


The following post was contributed by Meriem Jlassi, a PFE working for Microsoft

Introduction

As a Premier Field Engineer (PFE) at Microsoft, I get asked by a lot of customers whether products are still supported, whether they are close to end of life, and when upgrades need to be planned. Before the release of the Configuration Manager product lifecycle dashboard, this was more of a manual task of checking the Microsoft Lifecycle Policy site and confirming the end of mainstream or extended support dates, and even that did not give you a list of systems in your environment that are reaching end of support.

And maybe you just never knew you still had this version of software in your environment.

Solution

Beginning with version 1806, you can use the Configuration Manager product lifecycle dashboard to view the Microsoft Lifecycle Policy. The dashboard shows the state of the Microsoft Lifecycle Policy for Microsoft products installed on devices managed with Configuration Manager.

You can now start proactively planning for product upgrades, because the dashboard displays what needs to be replaced within the next 18 months.

Prerequisites

To see data in the product lifecycle dashboard, the following components are required:

  • Internet Explorer 9 or later must be installed on the computer running the Configuration Manager console.
  • A reporting services point is required for hyperlink functionality in the dashboard.
  • The asset intelligence synchronization point must be configured and synchronized. The dashboard uses the asset intelligence catalog as metadata for product titles. The metadata is compared against inventory data in your hierarchy. For more information, see Configure asset intelligence in Configuration Manager.

Configuration Manager Product Lifecycle Dashboard

Screenshot of the product lifecycle dashboard in the console

How can I tell which computers are running these older versions of SCCM, Windows or SQL Server? I can drill through to another report by simply clicking the hyperlinks found in the Number in environment column. Doing this brings me to the Lifecycle 01A – Computers with a specific software product report.

There are also additional reports that can be utilized to allow customers to export the data out of SCCM:

  • Lifecycle 02A – List of machines with expired products in the organization: View computers that have expired products on them. You can filter this report by product name.
  • Lifecycle 03A – List of expired products found in the organization: View details for products in your environment that have expired lifecycle dates.
  • Lifecycle 04A – General Product Lifecycle overview: View a list of product lifecycles. Filter the list by product name and days to expiration.
  • Lifecycle 05A – Product lifecycle dashboard: Starting in version 1810, this report includes similar information as the in-console dashboard. Select a category to view the count of products in your environment, and the days of support remaining.

So, what's new since its release?

Added in the latest version, SCCM 1902, is information for installed versions of Office 2003 through Office 2016. Data shows up after the site runs the lifecycle summarization task, which runs every 24 hours.

Configuration Manager Product Lifecycle Dashboard – SCCM 1902

Product LifeCycle - Office

 

Some might ask: but what if I don't have Configuration Manager?

That's where Azure Monitor logs (formerly named Azure Log Analytics) can be used to provide a dashboard to help with managing the supportability of your environment.

Prerequisites:

  • Azure Tenant
  • Azure Subscription
  • Log Analytics Workspace
  • Monitoring Contributor role (at least)
  • Update Management Solution Enabled (no need for Deployment schedule)
  • Microsoft Monitoring Agent:
    • Direct Agent or
    • Log Analytics Integrated with SCOM or
    • Log Analytics Gateway

This will allow you to start using the Kusto query language to find products which are end of support, based on the Microsoft Lifecycle Policy site information, and to create your dashboard based on specific software.

Example query: Update | where Product contains "Windows Server 2008 R2" | distinct Computer
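
The same query can also be run from PowerShell against the workspace, which is handy if you want to feed the results into a report or a ticketing system. A minimal sketch, assuming the Az.OperationalInsights module is installed, you are already signed in with Connect-AzAccount, and the workspace ID below is a placeholder:

# Run the Update query against the Log Analytics workspace (workspace ID is a placeholder)
$workspaceId = "00000000-0000-0000-0000-000000000000"
$kql = 'Update | where Product contains "Windows Server 2008 R2" | distinct Computer'
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql
$result.Results | Select-Object Computer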

End of Life Support

Conclusion

The new product lifecycle dashboard will give you an indication of products that are past their end of life, products that are nearing end of life, and also general information about the products that have been inventoried, to help you manage the environment in a more proactive way and plan for upgrades.

I would have to say it is probably an underutilised capability that will help customers maintain an optimal environment.

So if you have Configuration Manager then it's all ready to go, but if you are looking at the Azure Log Analytics option then you can start here to get going – https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/log-query-overview.

If you are a Microsoft Premier customer, you can reach out to your TAM for the delivery options available.

How to associate an account to SCOM unit monitor


In this blog post, I'll run through an example of how to associate a Run As account with a script monitor.

In SCOM, the way to delegate permissions is to create a Run As profile with a Run As account linked to it. We will create the profile in the management pack, attach the profile to the monitoring workflow, and then configure the account in the profile.

The Run As account holds the special permissions needed, for example, to query a database.

1. Write the PS/VB script you wish to use for monitoring, e.g. monitoring a SQL Server database query [an example is in the Management Pack attached to this article].
2. Debug the custom script on the target server (debug VB scripts with the cscript command-line tool) with an account that has permissions, and make sure the result is correct.
3. Add a new unit monitor, then add the script and its property expressions.
4. Create a new Run As profile in this monitor's management pack.
5. Export the management pack containing the script and the new Run As profile.
6. Open the MP with your preferred editor.
7. Copy the RunAsProfile ID from the SecureReference section:

<SecureReference ID="RunAsProfile_1905759fda4f4af2b2a8346fa2d7610a">

8. Add RunAs parameter to unit monitor line:

Unit monitor without RunAs parameter:

<UnitMonitor ID="Unit.Monitor" Accessibility="Internal" Enabled="true" Target="Windows!Microsoft.Windows.Computer" ParentMonitorID="Health!System.Health.AvailabilityState" Remotable="true" Priority="Normal" TypeID="Custom.MyPSTransactionMonitorType.UnitMonitorType" ConfirmDelivery="false">

Unit Monitor with RunAs parameter:

<UnitMonitor ID="Unit.Monitor" Accessibility="Internal" Enabled="true" Target="Windows!Microsoft.Windows.Computer" ParentMonitorID="Health!System.Health.AvailabilityState" Remotable="true" Priority="Normal" TypeID="Custom.MyPSTransactionMonitorType.UnitMonitorType" ConfirmDelivery="false" RunAs="RunAsProfile_1905759fda4f4af2b2a8346fa2d7610a">

9. Save and import the updated Management Pack.

10. Add ‘Run as Account’ to this ‘Run as Profile’.

——————————————————————————————————–

To ensure that the process is run with the defined account:

  • Add a "write to log" function that writes the account name running this script to the agent's Operations Manager event log:

Add “Log script event” to VB Script monitor:

Set objNet = CreateObject("WScript.Network")
' MOM.ScriptAPI provides the LogScriptEvent method used to write to the Operations Manager log
Set objAPI = CreateObject("MOM.ScriptAPI")
Call objAPI.LogScriptEvent("Script_Monitor.vbs", 5555, 2, objNet.UserName)

Add “Write event log” function to Powershell script monitor:

Write-EventLog -LogName "Operations Manager" -Source "Health Service Script" -EventId 5555 -Message "Script running under account - $(whoami)"

  • Open Task Manager on the target server and verify that the MonitoringHost process is running under this user account (or check from PowerShell, as in the sketch below).
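
A minimal PowerShell sketch for that last check, run elevated on the target server (MonitoringHost is the standard SCOM agent process name, but verify it matches your agent version):

# List MonitoringHost processes together with the account they run under (requires elevation)
Get-Process -Name MonitoringHost -IncludeUserName |
    Select-Object Id, ProcessName, UserName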

How to create a new SCOM class and subclass


A SCOM administrator needs to know the basic structure of management packs and have knowledge about classes and objects: what the differences between the classes are, and what the implications of choosing a class are.

Management packs provided by product vendors, such as Microsoft's for Active Directory and Exchange, do the work for us by providing the classes and discoveries for the monitoring targets. To adopt the same good method for custom management packs, we have to write classes of our own.

When we need to create new custom monitors, we must select a target. A target is a class that hosts objects, and the monitoring we create will apply to all objects of that class. For example, Windows Computer contains all the computer objects; you can then override on a group or an individual object.

When we decide which target to use, and we need to enable the monitor on only part of the objects, it is wrong to think that all monitors can simply be targeted at the Windows Computer class or any other existing class. The main effect is on system performance: the disabled (unmonitored) monitors on Windows Computer objects will slow the system down over time.

An example of the side effects arises when we try to manage and display the state of services: we could not select the Windows Computer object in the dashboards, because not all monitors necessarily belong to this service, yet they would affect its state.

Therefore, there is a need to create classes, then identify the servers on which the services are based and on which we will define the monitoring.

Some of the tools for creating classes are MP Author and Visual Studio.

Kevin Holman has written a library that contains numerous examples of using VSAE to create classes: https://gallery.technet.microsoft.com/SCOM-Management-Pack-VSAE-2c506737

However, the discovery classes in the library are based on Local Application, which represents only one base class type: the main role on a server, not the components of that role.

——————————————————————————————————

Local Application and Application Component Base Classes types and their differences:

Windows LocalApplication  / Unix LocalApplication

Application often installed with others on the local computer

  • Hosted by Windows Computer or Unix Computer
  • Automatic Health Rollup to Computer

Windows ApplicationComponent / Unix ApplicationComponent

Component of a local application or computer role

  • Unhosted; Create your own relationship

Local Application classes represent a defined role installed on the server and are hosted by default under the Windows/Linux Computer class. The computer therefore automatically inherits the state: when a monitor on the child goes to Critical, the parent is also coloured and changes its state.

When we need to monitor components of this role and want to represent them under it, we need to create sub-classes based on Application Component. Since that class is unhosted by definition, we are required to define a relationship between the parent class and the sub-class we are about to create.

Example of Parent and child class:

In the MP attached, there is a class based on Local Application that will be the main class for this example (I used Kevin Holman's fragment Class.And.Discovery.Script.PowerShell.mpx).

Now we will create an Application Component sub-class that will be linked to the parent Local Application class we want to associate it with.

It is important to understand that the child's state will not affect the parent unless we want it to [see the end of this post].

Steps:

  1. Add new Sub-class base on Windows!Microsoft.Windows.ApplicationComponent:

<ClassType ID="DEMO.ApplicationComponent.Class" Abstract="false" Accessibility="Public" Base="Windows!Microsoft.Windows.ApplicationComponent" Hosted="true" Singleton="false">
  <Property ID="<PropertyA>" Key="false" Type="string"/>
  <Property ID="<PropertyB>" Key="false" Type="string"/>
</ClassType>

2. Create the relationship to the Local Application main class:

<RelationshipType ID="LocalApplicationHostsApplicationComponent" Base="System!System.Hosting" Accessibility="Public">
  <Source ID="LocalApplication" Type="DEMO.LocalApplication.Class"/>
  <Target ID="ApplicationComponent" Type="DEMO.ApplicationComponent.Class"/>
</RelationshipType>

3. Add a discovery targeted at the Windows Server Operating System class to discover the components [this is a script discovery; you can use any discovery mechanism appropriate to the application's settings]:

<Discovery ID="DEMO.ApplicationComponent.Class.Discovery" Target="Windows!Microsoft.Windows.Server.OperatingSystem" Enabled="true" ConfirmDelivery="false" Remotable="true" Priority="Normal">
  <Category>Discovery</Category>
  <DiscoveryTypes>
    <DiscoveryClass TypeID="DEMO.ApplicationComponent.Class">
      <Property PropertyID="PropertyA"/>
      <Property PropertyID="PropertyB"/>
    </DiscoveryClass>
  </DiscoveryTypes>
  <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.TimedPowerShell.DiscoveryProvider">
    <IntervalSeconds>86400</IntervalSeconds>
    <SyncTime />
    <ScriptName>DEMO.ApplicationComponent.Class.Discovery.ps1</ScriptName>
    <ScriptBody>
      <!-- discovery script body goes here -->
    </ScriptBody>
    <Parameters>
      <Parameter>
        <Name>SourceID</Name>
        <Value>$MPElement$</Value>
      </Parameter>
      <Parameter>
        <Name>ManagedEntityID</Name>
        <Value>$Target/Id$</Value>
      </Parameter>
      <Parameter>
        <Name>ComputerName</Name>
        <Value>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Value>
      </Parameter>
    </Parameters>
    <TimeoutSeconds>120</TimeoutSeconds>
  </DataSource>
</Discovery>

4. Import the Management Pack – the Local Application class and the sub-class based on Application Component are then discovered according to the discovery conditions.

NOTE – when a child object goes to Unhealthy, the parent by default remains Healthy.

To add that dependency, you need to add a Dependency Monitor and select Object (Hosting).

Now when the child object goes to Unhealthy, the parent also changes to Unhealthy.

Manage SCOM Alerts Using REST API


In this blog post, I will walk through how to get alerts from SCOM using REST API.

The REST API is available from version 1801 and supports a set of HTTP operations. In this guide I'll explain how to filter the alerts so that you get only the scope you need.

The examples in the following article (https://docs.microsoft.com/en-us/rest/operationsmanager/) only demonstrate how to make calls for a "custom widget" in the new HTML web console. In this guide I'll explain how to get the alerts via the REST API with a PowerShell script, for example so that they can be forwarded to other systems.

All the available operations you can call are listed here – https://docs.microsoft.com/en-us/rest/api/operationsmanager/data

Powershell Script – output only new critical alerts:

# Set the header and the body
$scomHeaders = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$scomHeaders.Add('Content-Type', 'application/json; charset=utf-8')

$bodyraw = "Windows"
$Bytes = [System.Text.Encoding]::UTF8.GetBytes($bodyraw)
$EncodedText = [Convert]::ToBase64String($Bytes)
$jsonbody = $EncodedText | ConvertTo-Json

# Authenticate
$uriBase = 'http://<Your SCOM MS>/OperationsManager/authenticate'
$auth = Invoke-RestMethod -Method POST -Uri $uriBase -Headers $scomHeaders -Body $jsonbody -UseDefaultCredentials -SessionVariable websession

# Criteria – specify the criteria (such as severity, priority, resolution state, etc.)
# Display columns – specify the columns which need to be displayed
$query = @(@{
    "classId" = ""
    # Criteria: output the new critical alerts
    "criteria" = "((Severity = '2') AND (ResolutionState = '0'))"
    "displayColumns" = "severity","monitoringobjectdisplayname","name","age","repeatcount","lastModified"
})
$jsonquery = $query | ConvertTo-Json

$Response = Invoke-RestMethod -Uri "http://<Your SCOM MS>/OperationsManager/data/alert" -Method Post -Body $jsonquery -ContentType "application/json" -UseDefaultCredentials -WebSession $websession
$alerts = $Response.Rows
$alerts
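
From here the alerts can be handed off to another system. As a minimal sketch (the path is a placeholder), you could simply export them to CSV for something else to pick up:

# Export the retrieved alerts for consumption by another system (path is a placeholder)
$alerts | Export-Csv -Path "C:\test\scom-alerts.csv" -NoTypeInformation -Encoding UTF8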


# Using the PowerShell script above with a query that has no criteria will retrieve all alerts

$query = @(@{
    "classId" = ""
    # Get all alerts
    "displayColumns" = "severity","monitoringobjectdisplayname","name","age","repeatcount","lastModified"
})


# In the "displayColumns" value you can add any alert property, for example the alert description:

$query = @(@{
    "classId" = ""
    "criteria" = "((Severity = '2') AND (ResolutionState = '0'))"
    "displayColumns" = "id","name","description"
})

#Id, Name, and Description:

Field Notes: Azure Active Directory Connect – Custom Installation with Pass-Through Authentication & a remote SQL Server


Integrating your on-premises directories with Azure Active Directory makes your users more productive by providing a common identity for accessing both cloud and on-premises resources.  Azure Active Directory Connect is the Microsoft tool designed to meet and accomplish your hybrid identity goals.  It provides features such as password hash synchronization, pass-through authentication, federation integration, and health monitoring.

I covered the express installation option in my previous post (Field Notes: Azure Active Directory Connect – Express Installation).  In this follow-up post, I cover the custom installation option.

Custom Installation

Why would you want to customize the installation of Azure AD Connect, when it is easier to go with the express option and just provide a couple of credentials through the installation and configuration wizard?  In a word: options.  One requirement could be environments with multiple forests that are synchronized to a single Azure AD tenant.  Another could be having to make choices that are not covered by the express option.  In this example, I use a remote SQL Server installation and enable pass-through authentication.

Getting started

I already obtained the latest installer at https://aka.ms/aadconnect.  The Azure Active Directory team at Microsoft regularly updates Azure AD Connect with new features and functionality.  Launching the installer presents the Welcome To Azure AD Connect screen.  Following that is where a decision whether to go express or custom is made.  I select the customize option.

Customizing the Installer

Install required components

The first set of customization options includes the use of an existing SQL Server.  You may need to use a full-blown SQL Server, as the SQL Server Express instance that is installed by default on the local server has some limitations: SQL Server Express has a 10 GB size limit, which enables you to manage approximately 100,000 objects.  Microsoft Azure SQL Database is currently not supported as the database.  I am also using a managed service account (rather than a normal domain account) in my environment, as I am connecting to a remote SQL Server.

Specify an existing SQL Server and MSA

A group managed service account is recommended if you use a remote SQL server.

I am not specifying a custom installation path here as I am happy with the default “C:\Program Files\Microsoft Azure AD Sync”.  I also stick to the local groups that are created by default on the Azure AD Connect server.

Local Sync Groups

User sign in

Here's another reason you may want to customize the installation – user sign-in options.  Password hash synchronization is enabled by default with the express option, but a different sign-in method may be required.  In this setup, I go with pass-through authentication and also enable single sign-on for corporate desktop (intranet) users.  With pass-through authentication, credentials are validated by on-premises domain controllers.

User Sign-in Options

It is recommended that you have a cloud only company administrator account so that you are able to manage pass-through authentication in the event of an on-premises failure.

Connect to Azure AD

The Connect to Azure AD screen is the same as what we saw in the express installation.  I supply credentials of a global administrator Azure AD account.

Connect to Azure AD

I'm prompted to authenticate using my mobile device, as I have multi-factor authentication (MFA) enabled on the Global Administrator account I am using.

MFA Challenge

Connect your directories

An on-premises Active Directory account with sufficient permissions is required for periodic synchronization.  It is recommended that you let the wizard create a new account, and credentials of an account belonging to the Enterprise Admins group are required for this.  Otherwise, an existing account with sufficient permissions can be used.

On-Premise AD Credentials

Once credentials have been supplied, the directories need to be added as shown below.  I have added one forest with two domains, which we will see in the domain and OU filtering section.

Connect On-Premise Directories

Azure AD sign-in configuration

To sign in to Azure AD with the same credentials as we have on-premises, a matching Azure AD domain is required.  I have already verified my domain in Azure AD and set up the UPN suffix in my forest.  I leave the default and recommended userPrincipalName as the attribute to use for signing in to Azure AD.

Active Directory UPN Suffixes

Users will not be able to sign-in to Azure AD with on-premises credentials if the UPN suffix does not match a verified domain.  In this environment, I cannot sign-in to Azure AD with user@east.idrockstar.co.za as an example.

Domain and OU filtering

My on-premises environment consists of two domains in a single forest.  The customized installation path offers the granularity to select which domains and/or containers you want to include in the synchronization scope.  Some containers are essential for the functionality and should not be unselected.  One example is the ForeignSecurityPrincipals in a multi-forest environment with trusts.  Another, which is not depicted below, is the RegisteredDevices organizational unit if the device write back feature is enabled.

Domain and OU Filtering

Uniquely identifying users

We are next asked to select how users should be identified in the on-premises directories.  What I have highlighted below is a common scenario (users are represented only once across all directories).  The options presented here also cater for multiple forests, where you could, for example, have linked mailboxes in another forest.  I have a single forest, so I proceed with the default.  I also let Azure manage the source anchor.  The sourceAnchor attribute is defined as an attribute that is immutable during the lifetime of an object.  It uniquely identifies an object as being the same object on-premises and in Azure AD.  The attribute is also called immutableId, and the two names are used interchangeably.

Identifying Users

Filter users and devices

The option to synchronize all users and devices is recommended for production environments.  Note that group filtering is intended for pilot deployments.  Nested groups are not supported – objects you wish to synchronize must be direct members of the group.

Group Filtering

Optional features

This page provides optional functionality that may be required by some organizations.  Examples include password hash synchronization and password writeback, which I select.  By the way, we opted to use pass-through authentication earlier on the user sign-in page; it is recommended to also enable password hash synchronization here, especially since features such as Azure AD Identity Protection require it.

Select Optional Features

Enable single sign-on

I don't have to re-enter a domain administrator account to configure the on-premises forest for use with single sign-on, as the account I am using already has the necessary permissions to create the required computer object in the on-premises Active Directory.

Enable Single Sign-On

Ready to configure

Almost there!  Everything is ready – proceeding with the installation:

  • configures the synchronization services on the local computer
  • installs the Microsoft Azure AD Connect Authentication Agent for pass-through authentication
  • enables pass-through authentication and single sign-on
Ready to Configure

Clicking install completes the installation.

Summary

A quick look in the portal confirms that the installation succeeded, with both password hash sync and pass-through authentication enabled.  It is recommended to have three or more pass-through authentication agents installed for high availability.
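
A quick way to confirm the authentication agent is present and running on a given server is to check its Windows service. A minimal sketch; the display-name wildcard is an assumption and may need adjusting for your agent version:

# Look for the Azure AD Connect Authentication Agent service on this server
Get-Service -DisplayName "*Authentication Agent*" |
    Select-Object Name, DisplayName, Status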

AAD Sync Status

The express install option is used in most deployments, but there may be a requirement to use custom installation.  In this blog post, I covered a scenario connecting to a remote SQL Server, as well as using pass-through authentication as a sign-in option.


Till next time…

Create Azure monitor Alert based on Custom metrics


Azure Diagnostics Extension provides the monitoring and diagnostics capabilities on a Windows-based Azure virtual machine.

WAD (Windows Azure Diagnostics) enables monitoring inside the Azure guest VM, with the ability to use built-in metrics and to add new custom metrics that are not collected by default.

This is done by enabling Diagnostics settings on the Azure virtual machine and selecting Enable guest level monitoring; you can use the basic collection or choose Custom to configure a custom performance counter metric.

For example, add new custom metric: free space of C drive: \LogicalDisk(C:)\% Free Space

NOTE: by deselecting the listed performance counters, you can remove the default counters.

In Metrics blade you can see the “C drive free space” metric on selected VM:

Create an Alert based on VM custom metrics

Metric alerts in Azure Monitor provide a way to get notified when one of your metrics crosses a threshold. Click Select target and, in the context pane that loads, select the target resource you want to alert on. Use the Subscription and Resource type drop-downs to find the resource you want to monitor; you can also use the search bar.

If the selected resource has metrics you can create alerts on, Available signals on the bottom right will include metrics.

Once you have selected a target resource, click on Add condition.

You will see a list of signals supported for the resource, select the metric you want to create an alert on.

However, the WAD (guest) metrics are not available under Condition when the alert rule targets a virtual machine resource:

The list of metrics enabled by default in Azure Monitor is available here – https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-supported#microsoftcomputevirtualmachines

Because all the information is stored in Log Analytics, there are two options for using these metrics in the condition section: typing the query directly into the condition, or saving the query and selecting it in the condition:

  • 1. Create an alert rule with the workspace as the source, set the condition type to "Custom log search", and type the query that returns the metric result:

In this example I added the logical disk C drive metric; the query based on this metric checks the free space against the threshold in the alert rule condition:

LA Query to return the metric result:

Perf
| where ObjectName == "LogicalDisk" and CounterName == "% Free Space" and InstanceName == "C:"
| summarize arg_max(TimeGenerated, *) by InstanceName

  • 2. Save the query and select the SIGNAL in the condition.
    • Save the query in Logs pane.

Note:

Running a query in Azure Monitor Logs with a VM scope does not allow the query to be saved:

You must run this query from Monitor blade > Logs, and now you can click on Save!

The query is now available for use in alerts and future searches.

Query Explorer, Saved queries.
  • Create an Alert on a workspace resource, and open the condition with saved search:

select the custom metric saved query


Understanding Volume Activation Services – Part 3 (Microsoft Office Activation and Troubleshooting)


Parts 1 and 2 of the series dealt with KMS, MAK, and ADBA.
In part 3 we will focus on how to activate Microsoft Office using each of the three activation methods, and explain the limitations and best practices for each of them. This post also covers activation diagnostics and troubleshooting.


Microsoft Office Activation

General information regarding Office activation

For Microsoft Office activation, unlike Windows activation where the newest KMS host key can also activate older versions, you should configure and install a dedicated KMS host key for each Microsoft Office version you use.
If, for example, there is a combination of Microsoft Office 2016 and Microsoft Office 2013 in your organization, you have to make sure there are two KMS host keys installed and available: one for Office 2013 and another for Office 2016.

Another thing you should know about is the Microsoft Office Volume License Pack.
Microsoft Office Volume License Pack is an executable file which extracts and installs KMS host license files required for the KMS host service to recognize KMS host keys for Microsoft Office, including Visio and Project.
You should install the relevant Microsoft Office Volume License Pack according to the Microsoft Office version you would like to activate. The installation is performed on the server where you are running the Volume Activation Tools.
Microsoft Office Volume License Pack is required for both KMS and ADBA activation.
In the case of ADBA, there is no need to install this on your domain controllers.

You can download Microsoft Office Volume License Pack in the following links:

Microsoft Office 2013 Volume License Pack – https://www.microsoft.com/en-us/download/details.aspx?id=35584
Microsoft Office 2016 Volume License Pack – https://www.microsoft.com/en-us/download/details.aspx?id=49164
Microsoft Office 2019 Volume License Pack – https://www.microsoft.com/en-us/download/details.aspx?id=57342

Microsoft Office GVLKs (Generic Volume License Keys), which enable Office to automatically discover and activate against a KMS server or ADBA, can be found at the following link: https://docs.microsoft.com/en-us/deployoffice/vlactivation/gvlks.
Note that volume licensed versions of Office 2019 and Office 2016 are preinstalled with a GVLK, so no further action is required in this situation.

Using ADBA for Office Activation

As with Windows, Office can use the MAK or KMS channels for activation.
While KMS supports any Office version, Active Directory-Based Activation supports only Office 2013 and above running on a supported Windows operating system (which is, as you know by now, Windows 8.1/Windows Server 2012 and above).

So if, for example, you are running Office 2016 on a Windows 10 machine – you are good to go and can use Active Directory-Based Activation to activate Office.
If you are running Office 2013 on a Windows 7 client – KMS and MAK are your only options. ADBA is NOT supported in this case.

This is the place to say that KMS, ADBA, and MAK have nothing to do with Office 365 activation. Office 365 is activated automatically by cloud-based services associated with Office 365.

As with Windows, it is recommended to use Active Directory-Based Activation if possible. If you have older operating systems or Office products, such as Windows 7 and Office 2010, which don't support ADBA, use KMS alongside Active Directory-Based Activation. If Active Directory can't be contacted, Microsoft Office will try to activate by using a KMS, so you shouldn't worry about this.
MAK should only be used as a last resort where the KMS is not reachable or when the KMS activation thresholds cannot be met.

Reviewing Microsoft Office activation settings and status

Within the Microsoft Office installation folder (e.g. C:\Program Files\Microsoft Office\Office16) you will find ospp.vbs, a utility that can help you configure and test your volume licensed versions of Office, including Project and Visio.
You can think of ospp.vbs as the slmgr.vbs of Office products.

Here are the most common commands you will want to know (an example of running them follows the list):

  1. ospp.vbs /inpkey:XQNVK-8JYDB-WJ9W3-YJ8YR-WFG99
    This command changes the product key used by Office. If a product key is already configured, /inpkey replaces it with the provided key. This is relevant when you have to move from the MAK to the KMS channel and vice versa.
  2. ospp.vbs /act
    This command activates your Office product using the current configuration. After moving from MAK to KMS, or after changing your activation configuration, run ospp.vbs /act to immediately try to activate your Office products.
  3. ospp.vbs /sethst:KMSServerName.contoso.com
    This command manually configures the KMS host name that Office will try to activate against. It comes in handy when there is no _VLMCS record for auto-discovering the KMS server, or when DNS resolution is not available for some reason.
  4. ospp.vbs /dstatus
    This command displays the license information for installed product keys. Use /dstatus to understand whether and how your Office products are activated.
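
These commands are run with the cscript host from the Office installation folder, for example (Office16 corresponds to Office 2016; adjust the folder name for your version):

cd "C:\Program Files\Microsoft Office\Office16"
cscript ospp.vbs /dstatus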

Here is a short video describing the use of ospp.vbs for displaying activation status and activate Microsoft Office with ADBA:

Troubleshoot activation issues for Windows and Office

Volume-Activation troubleshooting is quite simple if you understand how activation works and what you are doing.

There are a few common issues I would like to talk about in this post:

  1. The computer is not running a volume-licensed edition of Windows Server or Office, making it irrelevant for KMS/ADBA activation.
  2. KMS or Active Directory-Based Activation is missing the required KMS key for the specific Windows or Office version.
  3. There is an issue with your Active-Directory-Based Activation or KMS deployment (unavailable for certain reason or misconfigured).

To understand what the issue is, I suggest starting with slmgr.vbs /dlv (for Windows) or ospp.vbs /dstatus (for Microsoft Office). This will help you understand which activation channel is configured on the client and what the current activation status is. If your product is using the MAK channel, it will look similar to the screenshot below:

To resolve this, use "slmgr.vbs /ipk <Relevant Product Key Here>" for Windows or "ospp.vbs /inpkey:<Relevant Product Key Here>" for Microsoft Office to change your product key and move from the MAK channel to the KMS channel.
Then run "slmgr.vbs /ato" or "ospp.vbs /act" to try to activate your product against KMS/Active Directory-Based Activation.

Remember that by default, volume licensed versions of both Windows and Office are installed with a Generic Volume License Key (GVLK), so if you are installing the right versions, you shouldn’t encounter this issue.

If things still don't work at this point, check the following:

  • Make sure that the client knows which machine the KMS server is (usually discovered through DNS using the _VLMCS SRV record). Use nslookup to perform the test (see the sketch after this list). If DNS resolution is not available for the specific client for some reason, you can use "slmgr.vbs /skms <KMSHostIP>" or "ospp.vbs /sethst:<KMSHostIP>" to manually set the KMS server the client will use.
  • Use Telnet to check whether the client can contact the KMS server on port 1688. It is quite common to see clients behind a firewall that blocks traffic on the KMS port.
  • If your client can't reach the KMS server and you're sure no firewall restrictions are in place, check that your KMS server is available and running using "slmgr.vbs <KMSHostname> /dlv". If your KMS server does not respond as expected, try restarting the Software Protection service.
  • Remember that Active Directory-Based Activation uses the standard LDAP and domain services ports for activation and does not require a dedicated port like KMS does.
  • When using Active Directory-Based Activation, validate that the client is a member of a domain in a forest where ADBA was configured.
  • Make sure that your KMS or ADBA contains a relevant KMS key for the client's OS. If, for example, the client is running Windows Server 2019 Datacenter edition and your Active Directory-Based Activation has a KMS key for Windows Server 2019 Standard only, the activation process will fail.
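
The first two checks can also be done from PowerShell on the client. A minimal sketch; the domain name and KMS host name below are placeholders for your environment:

# Find the KMS SRV record published in DNS (replace contoso.com with your AD DNS domain)
Resolve-DnsName -Name "_vlmcs._tcp.contoso.com" -Type SRV

# Check that the KMS port is reachable from this client (replace kms01.contoso.com)
Test-NetConnection -ComputerName "kms01.contoso.com" -Port 1688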

Final Thoughts

Well, this is the last blog post in the "Understanding Volume Activation Services" series.
I hope that, after reading these posts, you have a better understanding of how volume activation works and how you should use ADBA, KMS, and MAK to activate Windows and Office in your organization.


Azure MFA over NPS MFA Extension


https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-nps-extension

The MFA extension for NPS is the new way to integrate Azure MFA if you don't want to host the MFA server and its self-service portal on-premises.

NPS is a Windows component that acts as a RADIUS server for integration with third-party applications and appliances.

I have just finished integrating this with an F5 VPN/portal, which has not been tested by the F5 team (at the time of writing), but it works much like the Citrix, Cisco, Juniper, and similar integrations.

The trick to making it work smoothly is to point the third-party device (the F5 in my case) directly at the back-end NPS server where the MFA extension is installed.

If you use an NPS proxy that then forwards the request to the back-end NPS, it will prompt for authentication three times!

And keep in mind that you just need to add RADIUS authentication after the login page.

Here is how the F5 side is configured: https://devcentral.f5.com/s/articles/heres-how-i-did-it-integrating-azure-mfa-with-the-big-ip-19634

For end user experience : https://www.youtube.com/watch?v=QbDxoLivJWQ

Querying Azure Resource Graph


In this blog post, we'll discuss the purpose and usage of Azure Resource Graph. This is the first part in a series of posts, so it will be updated with links to the following posts in the series as soon as they are published.

Azure Resource Graph is designed to extend Azure Resource Management by providing an efficient and performant resource exploration so that you can effectively govern your environment. It currently supports queries over basic resource fields, specifically – Resource name, ID, Type, Resource Group, Subscription, and Location. Resource Manager also provides facilities for calling individual resource providers for detailed properties one resource at a time.

It's important to understand that Azure Resource Graph's query language is based on the Kusto query language. It supports a number of operators and functions, each of which works and operates as it does in Azure Data Explorer.

To query Azure Resource Graph, you'll need at least read access to the resources you want to query. You can then use the Azure CLI (with the resource-graph extension), the SDKs with REST API calls, PowerShell (with the Az.ResourceGraph module), or the Azure Resource Graph Explorer in the Azure portal, which is currently in preview. In this post, we will mostly explore the PowerShell method.

To install the Az.ResourceGraph PowerShell module into your computer, or to your CloudShell persistent clouddrive, use the following command:

Install-Module -Name Az.ResourceGraph

As a simple example, we will query for the number of Azure resources that exist in the subscriptions you have access to. It's also a good query to validate that you have the required permissions and that your shell of choice has the appropriate components installed:

Search-AzGraph -Query "summarize count()"

count_
------
2294

To expand on this, we'll add a where operator to our query in order to filter and get only the number of virtual machines we have. The type we are interested in is "Microsoft.Compute/virtualMachines":

Search-AzGraph -Query "where type =~ 'Microsoft.Compute/virtualMachines' | summarize count()"

count_
------
136

Note that the above command only includes ARM virtual machines, and you might still have some classic VMs. To cover those, we can add the or operator to our query:

Search-AzGraph -Query "where type == 'microsoft.compute/virtualmachines' or type == 'microsoft.classiccompute/virtualmachines' | summarize count()"

count_
------
138

In another example, we can use the tostring function, to group the results by a property string:

Search-AzGraph -Query "where type =~ 'Microsoft.Compute/virtualMachines' | summarize count() by tostring(properties.storageProfile.osDisk.osType)"

properties_storageProfile_osDisk_osType count_
--------------------------------------- ------
Windows                                 79
Linux                                   57

Using the project operator, we can include, rename, or drop columns, or insert new computed columns. For example, adding a SKU column for the vmSize:

Search-AzGraph -Query "where type =~ 'Microsoft.Compute/virtualMachines' | project SKU = tostring(properties.hardwareProfile.vmSize)| summarize count() by SKU"

SKU              count_
---              ------
Standard_B1ms    12
Standard_DS2_v2  25
Standard_DS3_v2  2
Standard_D8s_v3  13
...

Some other examples for useful queries include:

# Count resources by types per subscription
Search-AzGraph -Query "summarize count() by type, subscriptionId | order by type, subscriptionId asc"


# List VMs that match a regex pattern:
Search-AzGraph -Query "where type =~ 'microsoft.compute/virtualmachines' and name matches regex @'^Contoso(.*)[0-9]+$' | project name | order by name asc"


# List all VMs not using managed disks:
Search-AzGraph -Query "where type =~ 'Microsoft.Compute/virtualMachines' | where isnull(properties.storageProfile.osDisk.managedDisk) | project name, resourceGroup, subscriptionId"


# List all the Public IP Addresses:
Search-AzGraph -Query "where type contains 'publicIPAddresses' and isnotempty(properties.ipAddress) | project properties.ipAddress"


# List WebApps:
Search-AzGraph -Query "where type=='microsoft.web/sites' | project name, subscriptionId, type | order by type, subscriptionId"


# List Storage accounts:
Search-AzGraph -Query "where type=='microsoft.storage/storageaccounts' | project name, resourceGroup, subscriptionId"


# List Storage accounts that don't have encryption enabled:
Search-AzGraph -Query "where type =~ 'microsoft.storage/storageaccounts' | where aliases['Microsoft.Storage/storageAccounts/enableBlobEncryption'] =='false'| project name"
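
Note that Search-AzGraph returns a limited number of rows per call, so for larger result sets you can page through the results with the -First and -Skip parameters. A minimal sketch:

# First page: up to 1000 rows
Search-AzGraph -Query "where type =~ 'Microsoft.Compute/virtualMachines' | project name, resourceGroup" -First 1000

# Next page: skip the rows already retrieved
Search-AzGraph -Query "where type =~ 'Microsoft.Compute/virtualMachines' | project name, resourceGroup" -First 1000 -Skip 1000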

If you have any feedback on Azure Resource Graph, or want to upvote other’s suggestions, see https://feedback.azure.com/forums/915958-azure-governance/category/345061-azure-resource-graph

HTH,

Martin.

Monitoring Database Query Result


To monitor a result from a database, you must use a script monitor (PowerShell or VBScript): the script queries the database, analyses the information, and alerts according to the condition detection.

It is common to monitor multiple servers with different queries, which normally requires additional monitors because each one has different settings.

In this article I will explain how this can be done by creating a single generic monitor, with the rest of the data defined in parameters.

The monitor will receive the server name, port, and query as parameters. This allows you to create the monitor once and then use overrides to cover all requirements.

You can perform the task in two ways:

  • One way is to use the simple script unit monitor wizard and its parameters: a PS or VB script that runs with arguments, where the wizard lets you pass the SQL server name, database, and query as parameters.
    • Pros – easy to create and edit
    • Cons – overrides are hard to maintain; all override fields are edited in a single argument line
  • The second way is to create your own custom module type and monitor type, with your own override parameters.
    • Pros – separate override fields
    • Cons – requires working with custom modules

Implementation:

Option A:

A script monitor works with a property bag; this is how the script passes the query data onto the workflow's data bus. To learn how to create a simple script monitor, you can use this guide written by Tyson Paul – https://blogs.msdn.microsoft.com/tysonpaul/2018/08/30/how-to-analyze-a-scom-property-bag/

In the script you can add arguments, and we can use these arguments to pass the needed parameters. In the example shown below I added three parameters; with an override I can now change them and target this monitor at another database server (a sketch of such a script follows).
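
A minimal sketch of what such a parameterised script monitor could look like, assuming Windows authentication against the SQL Server. The parameter names, query handling, and property names here are illustrative only, not the exact script from the attached MP:

# Illustrative script monitor: runs a query and returns the result in a property bag
param($SqlServer, $Database, $Query)

$api = New-Object -ComObject "MOM.ScriptAPI"
$bag = $api.CreatePropertyBag()

$connection = New-Object System.Data.SqlClient.SqlConnection("Server=$SqlServer;Database=$Database;Integrated Security=SSPI")
try {
    $connection.Open()
    $command = $connection.CreateCommand()
    $command.CommandText = $Query
    $result = $command.ExecuteScalar()
    $bag.AddValue("Result", [string]$result)
} catch {
    $bag.AddValue("Error", $_.Exception.Message)
} finally {
    $connection.Close()
}

# Return the property bag to the workflow
$bag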

Option B:

Add new custom modules: a data source module type and a monitor type based on that module. The result looks like this:


On the Configuration tab you can edit all the default fields directly.

Override editing is more comfortable

Custom Modules:

The data source module type includes the engine that runs the script and captures the properties –

The new monitor type is based on the data source just created, with your custom override parameters, mapping the configuration from the data source onto those parameters:

The monitor collects all this information into a single unit monitor with these parameters:

The MP with both options is attached to this post; you can import the MP as is.

NOTE:

The attached MP has a Run As profile; add your Run As account to it, and the query will run as that user. This is a common requirement in the field, and each account can be scoped to specific agents. More on how this association works can be found here – https://secureinfra.blog/2019/06/16/add-run-as-profile-to-scom-unit-monitor/

Field Notes: Azure Active Directory Connect – Federation with AD FS


I started off this Azure AD Connect series by going through the express installation path, where the password hash synchronization sign-in option is selected by default. This was followed by the custom installation path using pass-through authentication and a remote SQL installation. See:

Today we cover federation using Active Directory Federation Services (AD FS).

Federation with AD FS

In the previous posts on Azure AD Connect, I went through the entire installation process. The difference here is that I modify an existing installation and change the user sign-in option to AD FS, as we have already seen the installer launched from scratch twice. Selecting AD FS as a sign-in option is also exposed when the custom installation path is selected, if you were to install from scratch.

Welcome to Azure AD Connect

Launching Microsoft Azure AD Connect presents the following Welcome to Azure AD Connect screen instead of the express versus custom screen we saw in the previous posts. Select configure to see available options.

The synchronization service scheduler is suspended until this setup is closed. We will cover details of the scheduler in one of the upcoming posts.

Additional tasks

We already have the latest version of Azure AD Connect installed and configured with pass-through authentication, so we’ll just select change user sign-in.

Connect to Azure AD

The Connect to Azure AD screen is also the same as what we saw in the previous two blog posts. Supply credentials of a global administrator account.

Only Azure AD accounts and user accounts synchronized from on-premises directories are supported for administration. Also note that it is not possible to federate an Azure AD domain while signed in to Azure AD as a user in that same domain.

User sign-in options

This is one of the reasons we are here today – user sign-in! There are a few options available for user sign in:

  • Password Hash Synchronization, which I covered in the first part of the series
  • Pass-through authentication, which is in the second part
  • Federation with AD FS is what I am covering in this post
  • Other options are federation with PingFederate and not configuring any of the above

We select Federation with AD FS and click next to proceed. Details on requirements are in the references subsection below.

Domain Administrator credentials

Azure AD Connect requires domain administrator credentials for the domain in which AD FS will be deployed or configured. Enter a domain credential that is a local administrator on the AD FS servers.

This credential is not stored and is used only during the setup process.

AD FS Farm

This is where the installation wizard is told whether to install and configure a new AD FS farm or use an existing one. I select the existing AD FS farm that I have pre-configured in my environment.

Opting to configure a new farm would require us to provide a password-protected PFX file containing the SSL certificate that will be used to secure the communication between AD FS and clients. Azure AD Connect would store the PFX file locally, and we would need to ensure that a strong password has been used to protect the certificate. A short video is included below, which goes through this process.

Azure AD Domain

Select the Azure AD domain that the wizard will enable for federated sign-on.

Azure AD Trust

As we are using an existing AD FS farm, Azure AD Connect will back up the existing Azure AD relying party trust and then update it with the latest recommended claim rules and settings. Changes that will be made to the Azure AD trust are listed here.

Ready to configure

Once we proceed, the wizard will:

  • Backup any existing Azure AD relying party trusts
  • Update the Azure AD relying party trust
  • Configure Azure AD trust for the directory
  • Disable (seamless) single sign-on that was enabled as part of pass-through authentication

Configuration is complete!

Configuration has successfully been applied. The next step is going to be verifying federation settings. I have shared links specified here as references in the summary section.

Verify federation connectivity

Almost there!  The next screen asks for confirmation that the intranet and extranet DNS records that allow clients to resolve the federation service have been created. In my case this is sts.idrockstar.co.za, both internally and externally. Once we verify, we should see results similar to the image below.
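
A quick sanity check from a client on each network is to resolve the federation service name before clicking verify. A minimal sketch using the host name from this environment:

# Confirm the federation service name resolves (run from both an intranet and an extranet client)
Resolve-DnsName -Name "sts.idrockstar.co.za"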

Option: Configuring a new AD FS farm (video)

What if we selected “configure a new AD FS farm” instead of “use an existing AD FS farm”? Below is a short video that takes us through the process.

Summary

This time around we took a different approach, modifying an existing installation instead of installing Azure AD Connect from scratch.  We changed the user sign-in option from pass-through authentication to AD FS.  This is just a demonstration, and I decided to change to AD FS as we had not covered it before.  Different options are exposed depending on whether we are configuring a new AD FS farm or using an existing one; the former is summarized in the one-and-a-half-minute video.


Step by Step: Enforce Require LDAP Signing on domain controllers. Part 1


Introduction:

One of the security settings that Microsoft recommends applying on domain controllers is Require LDAP signing. Requiring LDAP signing is a policy setting that can be applied in a few seconds using Group Policy, but what is the impact of applying this setting in your production environment? In most customer environments I have visited, Require LDAP signing is not enforced because customers are scared of what might happen.  In this post I will explain the setting and lay out a clear action plan to safely apply the recommendation.

LAB Scenario:

domain name: lab.dz.

Server Name IP Address Role
DC01 10.0.0.4 Domain Controller
DC02 10.0.0.5 Domain Controller
MEM01 10.0.0.10 Member Server
MEM02 10.0.0.20 Member Server

Our mission is to enforce the setting Require LDAP Signing on the domain lab.dz. 😉


Understanding the policy setting.

Domain Controller: LDAP Server signing requirements.

To understand how this setting affects domain controllers, we first need to understand LDAP bind operations.

LDAP bind operations are used to authenticate clients to the directory server (clients could be users, or applications acting on behalf of users). LDAP bind requests provide the ability to use either simple authentication or SASL authentication.

Simple bind: authentication happens using a user name and password; the password is transmitted in clear text.

SASL bind: the SASL extensible framework makes it possible to plug almost any kind of authentication into LDAP (Negotiate, Kerberos, NTLM, and Digest). For more information about LDAP bind operations, please refer to this link.


Let’s see the explanation on the policy setting.





The following summarizes how the domain controller will act in different scenarios when Require LDAP signing is enforced.

  • Client attempts a simple bind in clear text (no SSL/TLS encryption): the domain controller rejects the simple bind.
  • Client attempts a simple bind over SSL/TLS: the domain controller accepts the simple bind.
  • Client attempts a SASL bind and does not request signing: the domain controller rejects the SASL bind.
  • Client attempts a SASL bind and requests signing: the domain controller accepts the SASL bind.



Now that we understand the setting, the next step is how to proceed to enforce Require LDAP signing in a production environment.


Confirming that our domain controllers are not configured to Require LDAP signing.

By looking for event 2886 on the Directory Service log we confirm that DC01 and DC02 are not configured to require LDAP signing. We found the event on both DCs.

Below the message on event 2886.


From event 2886 we can extract two important pieces of information:

  • The DCs are not configured with the Microsoft recommendation (yellow paragraph).
  • If unsigned SASL binds or clear-text simple binds occur on a DC, the DC will log an event every 24 hours indicating how many such binds occurred (blue paragraph).

That recurring event is 2887; by checking DC01 and DC02 we can see 2887 events on both DCs.

As you can see in event 2887 from DC01, 10 simple binds were performed without SSL/TLS.

At this moment we know that if we enforce Require LDAP Signing setting, we will break some applications ☹. Our next challenge is to find where these binds are coming from.


Finding Servers that are using insecure binds.

We need to increase LDAP Interface logging to be able to find from which servers these binds are coming. On both domain controllers we run the command below:

New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics' -Name "16 LDAP Interface Events" -Value 2 -PropertyType DWORD -Force
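
A note before moving on: level 2 is noisier than the default, so once the investigation is over you can revert it. A minimal sketch, assuming the default value of 0 for this diagnostics entry:

# Revert LDAP interface event logging to the default level once you are done
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics' -Name "16 LDAP Interface Events" -Value 0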



After increasing LDAP interface logging, domain controllers will log event 2889 every time a client performs a SASL bind without requesting signing or a simple bind in clear text.


As you can see in the screenshot above, this bind comes from 10.0.0.10, which is MEM01. We need to look for event 2889 on both DCs to find the servers hosting the applications that are performing insecure binds.

In production you may have more than 20 domain controllers, and you need to look for event 2889 on all of them; don't panic, you will not do that manually 😊.

On GitHub we can download a script that does the job for us: it queries event 2889 from a specific DC and produces a CSV with the information we need. Download the script Query-InsecureLDAPBinds.ps1.
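
If you prefer to roll your own, the minimal sketch below (not the GitHub script; the property order is an assumption based on the event layout in this lab) pulls event 2889 from a DC and exports the client address, the account used and the bind type to a CSV:

$events = Get-WinEvent -ComputerName 'DC01' -FilterHashtable @{ LogName = 'Directory Service'; Id = 2889 }

$events | ForEach-Object {
    [pscustomobject]@{
        TimeCreated  = $_.TimeCreated
        ClientIPPort = $_.Properties[0].Value   # client IP:port (assumed field order)
        Account      = $_.Properties[1].Value   # account that performed the bind
        BindType     = $_.Properties[2].Value   # assumed: 0 = unsigned SASL bind, 1 = clear-text simple bind
    }
} | Export-Csv -Path 'C:\InsecureLDAPBinds-DC01.csv' -NoTypeInformation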


In my scenario I will run the script to find insecure binds on DC01 and DC02.

The two screenshots below show execution and output from DC01.




As you can see, we found insecure LDAP binds coming from 10.0.0.10 and 10.0.0.20 which are MEM01 and MEM02.

OK, so if I fix the application settings on these servers, then I can enforce Require LDAP Signing on my DCs.


Fixing the LDAP Application on MEM01

By checking applications, we found an LDAP tool which is configured to use Simple Bind.



The LDAP tool offers SASL bind, so I can fix this easily by changing the settings.



Sure, we need to check that the new settings are working just fine.



I checked both DCs again for new 2889 events coming from MEM01 but could not find any; the SASL bind is now signed.

Cool MEM01 is OK now, let’s see MEM02.

☹ Unfortunately, after checking MEM02, we found an old LDAP tool that supports only simple bind (no SASL bind that requests signing).

What’s the solution for MEM02? The answer is easy: simple bind over SSL/TLS. Part two of this post will show how to install a certificate on a domain controller so we can configure simple bind over SSL.

Part 2 : https://secureinfra.blog/2019/08/04/step-by-step-enforce-require-ldap-signing-on-domain-controllers-part-2/

Step by Step: Enforce Require LDAP Signing on domain controllers. Part 2

$
0
0

Introduction

In Part 2 of this post, I will show how to request a certificate for a domain controller so it can use LDAPS. We will also see why we should never use simple bind in clear text.

This post is intended to give you an action plan for enforcing Require LDAP Signing in production; please start by reading Part 1. https://secureinfra.blog/2019/08/03/step-by-step-enforce-require-ldap-signing-on-domain-controllers-part-1/

Simple LDAP Bind in action

Before configuring LDAPS on the DCs, let's see why simple bind should always go over SSL/TLS.
On MEM02, the LDAP Admin tool is configured to use simple bind in clear text. Using Network Monitor, we will inspect the traffic between MEM02 and DC01 when the connection happens.

As you can see in the screenshot below, simple bind using clear text is configured in the LDAP Admin tool. I’m using the user Eric 😉.


Let’s see the traffic on Network Monitor.

As you can see on the screenshot, by sniffing the network traffic I can see the username and password in clear text.

Never use Simple bind on clear text.

Configuring LDAPS.

To configure LDAPS in the domain lab.dz, we need to install a certificate on the domain controllers. Below is a simple example of how to request and install the certificate on DC01.

Create an .inf file on DC01 with the content below:

; ----------------- DC01Request.inf -----------------
[Version]
Signature="$Windows NT$"
[NewRequest]
Subject = "CN=dc01.lab.dz" ; replace with the FQDN of your DC
KeySpec = 1
KeyLength = 2048
Exportable = TRUE
MachineKeySet = TRUE
SMIME = False
PrivateKeyArchive = FALSE
UserProtected = FALSE
UseExistingKeySet = FALSE
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
RequestType = PKCS10
KeyUsage = 0xa0
[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication
[RequestAttributes]
CertificateTemplate = KerberosAuthentication
SAN="dns=dc01.lab.dz"
;-----------------------------------------------

Create a certificate request.

certreq -new DC01Request.inf DC01Request.req

DC01Request.req is generated.

Submit your request to your enterprise CA or third-party CA.

In my scenario I’m using an Active Directory Certificate Services CA installed on MEM01.

Certreq -submit -config "mem01.lab.dz\lab-mem01-ca" DC01Request.req DC01Request.cer

Install the certificate.

Certreq -accept DC01Request.cer

Certificate is deployed and LDAPS is available.

Let’s try a simple bind over SSL.




Connection Successful
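
If you want to verify LDAPS from PowerShell instead of the LDAP tool, here is a minimal sketch (it assumes the client trusts the issuing CA; the account and password are placeholders):

Add-Type -AssemblyName System.DirectoryServices.Protocols

# Connect to DC01 on the LDAPS port and force SSL
$identifier = New-Object System.DirectoryServices.Protocols.LdapDirectoryIdentifier('dc01.lab.dz', 636)
$connection = New-Object System.DirectoryServices.Protocols.LdapConnection($identifier)
$connection.SessionOptions.SecureSocketLayer = $true
$connection.AuthType = [System.DirectoryServices.Protocols.AuthType]::Basic

# Simple bind over SSL with a test account (placeholder credentials)
$credential = New-Object System.Net.NetworkCredential('eric@lab.dz', 'P@ssw0rd!')
try {
    $connection.Bind($credential)
    Write-Output 'LDAPS simple bind succeeded'
}
catch {
    Write-Warning "LDAPS bind failed: $($_.Exception.Message)"
}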

After fixing the applications on MEM01 and MEM02 we can safely enforce Require LDAP Signing on domain controllers 😊.

Enforce Require LDAP Signing

Right-click the Default Domain Controllers Policy and configure the setting Domain Controller: LDAP server signing requirements.



After enforcing the setting, the LDAP Admin tool is unable to access the directory server using an insecure LDAP bind.

The screenshot below shows the error message when I try a Simple Bind on clear text.


Conclusion:

Enforcing Require LDAP Signing protects passwords from being transmitted in clear text.

Before enforcing Require LDAP Signing, start with an audit to detect all applications that are performing insecure binds. Once you have found them, configure each application to use one of the following:

  • A SASL bind that requests signing.
  • Simple bind over SSL/TLS.

You are ready to go 😉

Thanks for reading and Good Luck.


Field Notes: Azure Active Directory Connect – Verifying Federated Login

$
0
0

I started off this Azure AD Connect series by going through the express installation path, where the password hash synchronization sign-in option is selected by default. This was followed by the custom installation path using pass-through authentication and a remote SQL installation. The latest post in the series covers federation with Active Directory Federation Services (AD FS). Refer to links below for parts 1 through 3:

Here, we look at how to use Azure AD Connect to verify federated login. We also explore other options – idp-initiated sign on and accessing the My Apps portal.

Federation verification

Federating a domain through Azure AD Connect involves verifying connectivity. During this process, we are advised by the wizard to use the verify federated login additional task to verify that a federated user can successfully log in.

Getting started

To get to these options, launch Azure AD Connect and click configure. There will be an option to manage federation on the next screen. Use this task to expose available options for managing the federation service.

AAD Connect Additional Tasks

Manage federation

Look at what we have here – all the options that are available to manage a federation service! These are for:

  • Managing the Azure AD trust
  • Federating an Azure AD domain
  • Updating the AD FS SSL certificate
  • Deploying an AD FS server
  • Deploying a Web Application Proxy server
  • Verifying federated login
Manage Federation

We will cover some of these in future blog posts. AD FS Help: https://aka.ms/adfshelp

Verifying federated login (video)

Verifying federated login is a pretty straightforward process. All we need to do is connect to Azure AD by providing global administrator credentials, followed by entering credentials of a user account we are using for verification. The following quick video takes us through this process.

Other options

Let’s cover two of the other methods we could use to verify that federation works. The first one is Idp-initiated sign on, and the other is accessing the My Apps portal.

Idp-initiated sign on

The AD FS sign-on page can be used to verify federated login. This feature is not turned on by default in Windows Server 2016, which is what I am using in my environment. Log in to the AD FS server and turn it on by using PowerShell. The command is:

Set-AdfsProperties -EnableIdPInitiatedSignonPage $true
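
To quickly confirm the change before testing in a browser, you can read the property back (a simple check using the AD FS cmdlets):

Get-AdfsProperties | Select-Object EnableIdPInitiatedSignonPage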

Once this is turned on, open a browser and navigate to https://sts.idrockstar.co.za/adfs/ls/idpinitiatedsignon.htm (replace the federation service FQDN as necessary) and sign in using a federated account.

My Apps portal

The other option is to use the My Apps portal to check if you are able to successfully sign in. Open a browser and go to https://aka.ms/myapps, which will redirect to Access Panel Applications (https://account.activedirectory.windowsazure.com/r#/applications) after successful login. Pay attention to the address bar to see the redirection to the AD FS service for authentication.

Summary

Federating a domain through Azure AD Connect involves verifying connectivity. Additionally, federated login should be verified to ensure that everything works as expected. We covered verification using Azure AD Connect, as well as using IdP-initiated sign-on and accessing the My Apps portal.

References

Till next time…

Disable SCOM management pack on a group of agents

$
0
0

One of the advantages of SCOM is the accompanying management packs. These packs make it easy to monitor all the system components automatically, but by default monitoring applies to all agents. What do you do when you want to disable monitoring for a group of servers or agents, for example test servers?

The way to disable monitoring is to prevent discovery of the base (seed) class in the management pack; this seed class is normally targeted at Windows Server.

The list below covers some of the common management packs [in the future I will add more MPs] and shows, for each one, the class name on which we set Override = Disable for the test server group (for example, a group with Windows Server objects). With one override setting we disable all discoveries for that group; a PowerShell sketch at the end of this post shows one way to apply such an override.

  • Windows Operating System MP >
    • Class name: “Windows Server 20XX Computer / Windows Server 20XX Operating System”
    • Discovery Name:  “Discover Windows 20XX Servers” Windows OS MP
    • Target: “Windows Server”
  • Exchange 20XX MP >
    • Class name: “Exchange 20XX <each resource name>”
    • Discovery Name:  “Exchange 20XX: Discover Microsoft Exchange Organization and Server objects”
    • Target: “Windows Server”
  • Windows Cluster MP >
    • Class name: “Windows Cluster Service”
    • Discovery Name:  “Windows Cluster Service Discovery”
    • Target: “health Service”
  • IIS MP >
    • Class name: “IIS <Version 7/8/10> Server Role”
    • Discovery Name: “IIS <Version 7/8/10> Role Discovery”
    • Target: “Windows Server 20XX Computer”
  • SQL Server MPs >   
    • Class name: “SQL Server 20XX Installation seed”
    • Discovery Name: “MSSQL 20XX: Discover SQL Server 20XX DB Installation source (seed)”
    • Target: “Windows Server”
  • Active Directory MP >   
    • Class name: “Windows Domain Controller”
    • Discovery Name:  “ Discover Windows Domain Controller”
    • Target: “Windows Computer”
  • Biz Talk Server All versions >
    • Class name: “BizTalk Installation”
    • Discovery Name:  “BizTalk Installation Discovery”
    • Target: “Windows Computer”
  • Dynamic CRM >
    • 2015:
    • Class name: “Microsoft Dynamic CRM Server 2011”
    • Discovery Name:  “Dynamics CRM Servers Seed Discovery”
    • Target: “Windows Server”
    • 2011:
    • Class name: “Microsoft Dynamic CRM Server”
    • Discovery Name:  “Dynamic CRM Server Seed Discovery”
    • Target: “Windows Server”
    • 4.0:
    • Class name: “Microsoft Dynamic CRM 4.0 <Role>”
    • Discovery Name: “Microsoft Dynamic CRM 4.0 Server”
    • Target: “Windows Computer”
  • Scheduler Task >
    • Class name: “Microsoft Dynamic CRM Server”
    • Discovery Name:  “Dynamic CRM Server Seed Discovery”
    • Target: “Windows Server”
  • APM –   #Discovery .NET APM Agent >
    • Class Name: “.NET Application Monitoring Agent”
    • Discovery Name: “Discover of .NET APM Agent”
    • Target: “Windows Server”

  • Override APM application discovery to add extensions {Rule discovery} >
    • Class Name: “IIS X Web Server”
    • Discovery Name: “IIS X Web Application Discovery”
    • Target: “IIS X Web Server”
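
For reference, here is a minimal PowerShell sketch of applying such an override from the Operations Manager shell. The management server name, the group name, the unsealed override MP name and the discovery display name are assumptions; replace them with the values from your environment:

Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName 'scom-ms01.lab.dz'

# Unsealed management pack that will store the override (assumed name)
$overrideMp = Get-SCOMManagementPack -DisplayName 'Custom Overrides - Test Servers'

# Group whose members should be excluded from discovery (assumed name)
$group = Get-SCOMGroup -DisplayName 'Test Servers Group'

# Disable one of the seed discoveries listed above for that group
$discovery = Get-SCOMDiscovery -DisplayName 'IIS 10 Role Discovery'
Disable-SCOMDiscovery -Discovery $discovery -Group $group -ManagementPack $overrideMp -Enforce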

SCOM DB Fragmentation Issue

$
0
0

Sometimes SCOM environments slowness is occurring because of SQL fragmented indexes.

Fragmentation happens when the logical order of pages in an index does not match the physical order in the data file. Because fragmentation can affect the performance of some queries, you need to monitor the fragmentation level of your indexes and, if required, perform re-organize or rebuild operations on them.

When handling the SCOM DB, we are interested only in fragmented indexes with more than 30% fragmentation and a page count greater than 1000.

Here is a query that will list indexes on every table in the database, ordered by percentage of index fragmentation.

SELECT dbschemas.[name] as 'Schema',
dbtables.[name] as 'Table',
dbindexes.[name] as 'Index',
indexstats.alloc_unit_type_desc,
indexstats.avg_fragmentation_in_percent,
indexstats.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS indexstats
INNER JOIN sys.tables dbtables on dbtables.[object_id] = indexstats.[object_id]
INNER JOIN sys.schemas dbschemas on dbtables.[schema_id] = dbschemas.[schema_id]
INNER JOIN sys.indexes AS dbindexes ON dbindexes.[object_id] = indexstats.[object_id]
AND indexstats.index_id = dbindexes.index_id
WHERE indexstats.avg_fragmentation_in_percent > 30 and page_count > 1000
ORDER BY indexstats.avg_fragmentation_in_percent desc

The output will be:

You need to run this query for each table:

Alter index "INDEX NAME"
ON "TABLE NAME"
REBUILD;
GO

In my case from the example above:

Alter Index idx_PerformanceData_14_PerformanceSourceInternalId
 ON PerformanceData_14
 REBUILD;
 GO
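
If many indexes are fragmented, rebuilding them one by one gets tedious. The sketch below is one way to automate it from PowerShell; it assumes the SqlServer module is installed and that the instance and database names are replaced with yours, and it should be run in a maintenance window since REBUILD takes locks:

Import-Module SqlServer

$rebuildQuery = @'
DECLARE @sql nvarchar(max) = N'';
SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(i.name)
    + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N' REBUILD;' + CHAR(13)
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS ips
INNER JOIN sys.tables t ON t.object_id = ips.object_id
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count > 1000
  AND i.name IS NOT NULL;  -- skip heaps
EXEC sp_executesql @sql;
'@

# Instance and database names are placeholders - point them at your OperationsManager DB
Invoke-Sqlcmd -ServerInstance 'SQL01' -Database 'OperationsManager' -Query $rebuildQuery -QueryTimeout 600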

After running a rebuild for all of the indexes and tables, run the first query again to check the fragmentation status.

Once there are no fragmented indexes left in your environment, DB performance should improve, and you should see fewer delays in console responsiveness and report generation.

Audit Access to C$

$
0
0

Hi guys, a customer asked me for visibility into who is accessing C$ in his environment; users were complaining about admins using Domain Admins privileges to access C$ on client computers. What the customer asked for is a daily report about who is accessing C$. Using event forwarding and PowerShell, we were able to produce a daily email with the information we need. If you are interested, follow the steps 😉.

  • Enable auditing on client computers.
  • Configure event forwarding to centralize logs on a server.
  • A script to process events on the WEF server and send a daily CSV file about who is accessing C$ on which computer.

I- Enable Audit on client computers

We will enable auditing for the client computers scope using a GPO. Let's do it.

Create and link a GPO on your target OU, LabComputers OU in my scenario.


Edit the GPO and configure the policy setting Computer Configuration –> Windows Settings –> Security Settings –> Advanced Audit Policy Configuration –> Audit Policies –> Object Access –> Audit File Share.


We configured the setting to audit successful access to shares. From this point on, we will get event 5140 each time a user accesses a share on a client computer; in our situation we are tracking C$ access.
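
To confirm that the audit policy has applied on a client, you can check it from an elevated prompt (the subcategory name below assumes an English OS):

auditpol /get /subcategory:"File Share"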



II- Configure Event forwarding to centralize logs on a server.

1. We need to add Network Service to the Event Log Readers built-in group on the client computers. Let's do it using Group Policy.

Create and link a GPO on your target OU, LabComputers in my scenario.


We edit the GPO: Computer Configuration –> Windows Settings –> Security Settings –> Restricted Groups. Add the group Event Log Readers and add Network Service as a member of this group.


2. Create a subscription on the Windows Event forwarding Server. (MEM01 in my scenario)

Open Event Viewer

Click on Subscription and then Click Yes.


Right click on Subscription and select Create Subscription…

Enter a friendly name.

Select Source computer initiated and click on Select Computer Groups.


Click on Add Domain Computers.


Type Domain Computers

Click OK Twice.

Click on Select Events…


Select XML tab


Select Edit Query Manually and click Yes.


Paste the below XML filter and click OK.

<QueryList>
<Query Id="0" Path="Security">
<Select Path="Security">
*[System[band(Keywords,9007199254740992) and (EventID=5140)]]
and
*[EventData[Data[@Name="ShareName"] and (Data="\\*\C$")]]
</Select>
</Query>
</QueryList>

 

Click Advanced…


Select Minimize Latency.

Click OK twice.

Subscription created.



3. Configure Event Forwarding Group Policy.

Create a GPO and configure the policy setting: Configure target Subscription Manager



Enable the policy and click on Show…



Enter the URI of the event forwarder server. In my scenario MEM01.
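
As an example, the subscription manager value typically has the form below (the collector FQDN and the refresh interval of 60 seconds are assumptions for this lab):

Server=http://mem01.lab.dz:5985/wsman/SubscriptionManager/WEC,Refresh=60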


After the GPO is applied on the client computers (a restart is needed), events related to C$ access will be forwarded to MEM01.



III- Script to treat events on the WEF server and send a daily email

On C:\ we create a script named script1.ps1 with the content below.

######## Begin #########################################

$FileName = Get-Date
$FileName = $FileName.ToShortDateString().Replace('/','-')
$FileName = "ShareAccess-" + $FileName + ".csv"

$Date = (Get-Date).AddDays(-1)
$Events = Get-WinEvent -FilterHashtable @{ LogName='ForwardedEvents'; StartTime=$Date; Id='5140' }

If ($Events -ne $null)
{
    Add-Content -Value "ClientComputer,TimeCreated,SubjectUserName,SubjectDomainName,IpAddress,ShareName" -Path C:\$FileName

    ForEach ($Event in $Events) {

        $eventXML = [xml]$Event.ToXml()
        $clientComputer = $Event.MachineName
        $TimeCreated = $Event.TimeCreated
        $SubjectUserName = $eventXML.Event.EventData.Data[1].'#text'
        $SubjectDomainName = $eventXML.Event.EventData.Data[2].'#text'
        $IpAddress = $eventXML.Event.EventData.Data[5].'#text'
        $ShareName = $eventXML.Event.EventData.Data[7].'#text'

        Add-Content -Value "$clientComputer,$TimeCreated,$SubjectUserName,$SubjectDomainName,$IpAddress,$ShareName" -Path C:\$FileName
    }

    Send-MailMessage -Attachments C:\$FileName -From "Audit@lab.dz" -To "ITSecurity@lab.dz" -Body "Attention! Please find attached a CSV file with Share Users Access" -Subject "Share Audit Access" -SmtpServer exch.lab.dz -Port 25
}

########## END #######################################


Then we create a scheduled task to run the previous script daily

Scheduled task triggered every day at 9:00 AM as an example.


The action on the scheduled task to run the PowerShell script
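
If you prefer to create the task from PowerShell rather than the Task Scheduler GUI, a minimal sketch could look like this (the task name, the time and the SYSTEM account are assumptions; adjust to your standards):

$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\script1.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At '9:00AM'
Register-ScheduledTask -TaskName 'Daily C$ Access Report' -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest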

If a user accessed C$ on a computer in scope during the last 24 hours, the IT security team will receive an email with a CSV attached. CSV template below.


Reading the first line of the CSV file, we can see that the user tahri accessed C$ on MEM03 from a computer with IP address 10.0.0.10 😉

Thanks for reading 😊.

Use Azure Automation to install and configure the Log Analytics extension

$
0
0

In this post, I’ll show how you can use an Azure Automation runbook to deploy and configure the Log Analytics extension to a group of virtual machines.

In case you’re not familiar with creating a runbook, you can get started using the instructions here.

Before we get started with the code portion, there are a few things to note.

  1. This script requires the virtual machines to be powered on
  2. The virtual machine guest agent must be in a good state. Check this by looking at the Agent Status under the Properties blade of the Virtual Machine
  3. The list of VMs will be serially processed. I’ll update this post and provide the script to process the VMs in parallel at a later time.

Now, on to the fun part: The code.

Below are the required parameters for the runbook:

  • azureSubscriptionId – The unique identifier of the subscription you want to use
  • azureEnvironment – The Azure cloud environment to use. e.g, AzureCloud, AzureUSGovernment
  • WorkspaceName – The name of the Log Analytics workspace to connect the virtual machines to
  • LAResourceGroup – The resource group that contains the Log Analytics workspace

The following parameters are not required. If specified, they limit the scope of the runbook to only the resource groups or virtual machines that you want to configure.

  • ResourceGroupNames – If this parameter is specified the Log Analytics extension will be deployed to all virtual machines in the resource group. The list of resource groups should be specified in JSON format – [‘rg1′,’rg2’]
  • VMNames – If this is specified, the Log Analytics extension will only be deployed to the provided virtual machines. This variable should be provided in JSON format – [‘vm1′,’vm2’]

To save yourself from copy-pasting each section, you can visit my github repo to download the full script.

<#
    .SYNOPSIS
        Installs the OMS Agent to Azure VMs with the Guest Agent

    .DESCRIPTION
        Traverses an entire subscription, resource group, or list of VMs to
        install and configure the Log Analytics extension. If no resource groups or virtual machines are provided, all VMs will have the extension installed.  

    .PARAMETER azureSubscriptionID
        Unique identifier of the Azure subscription to use

    .PARAMETER azureEnvironment
        The Azure Cloud environment to use, i.e. AzureCloud, AzureUSGovernment

    .PARAMETER WorkspaceName
        Name of the Log Analytics workspace

    .PARAMETER LAResourceGroup
        Resource Group of Log Analytics workspace

    .PARAMETER ResourceGroupNames
        List of Resource Groups. VMs within these RGs will have the extension installed
        Should be specified in format ['rg1','rg2']

    .PARAMETER VMNames
        List of VMs to install OMS extension to
        Specified in the format ['vmname1','vmname2']

    .NOTES
        Version:        1.0
        Author:         Chris Wallen
        Creation Date:  09/10/2019        
#>

#Define the parameters
Param
(
    [parameter(mandatory)]
    [string]
    $azureSubscriptionID,

    [parameter(mandatory)]
    [string]
    $azureEnvironment,

    [parameter(mandatory)]
    [string]
    $WorkspaceName,

    [parameter(mandatory)]
    [string]
    $LAResourceGroup,

    [string[]]
    $ResourceGroupNames,

    [string[]]
    $VMNames
)

In the next section, we need to configure our runbook to use our AzureRunAsAccount. This will use the credentials that are automatically created when you first create an automation account.

 $connectionName = "AzureRunAsConnection"
    try
    {
        # Get the connection "AzureRunAsConnection "
        $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName

        "Logging in to Azure..."
        Add-AzureRmAccount `
            -ServicePrincipal `
            -TenantId $servicePrincipalConnection.TenantId `
            -ApplicationId $servicePrincipalConnection.ApplicationId `
            -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint `
            -EnvironmentName $azureEnvironment
    }
    catch
    {
        if (!$servicePrincipalConnection)
        {
            $ErrorMessage = "Connection $connectionName not found."
            throw $ErrorMessage
        } 
        else
        {
            throw $_.Exception
        }
    }    
    Set-AzureRmContext -SubscriptionId $azureSubscriptionID

Next, we’ll build the list of resource groups or virtual machines to which we want to deploy the extension.

#Define an array to hold the list of virtual machines
$vms = @()

#If no resource groups or VMs are specified. Grab all VMs in #subscription
if (-not $ResourceGroupNames -and -not $VMNames)
{
    Write-Output "No resource groups or VMs specified. Collecting all VMs"
    $vms = Get-AzureRMVM
}
#If an RG is specified but no VMs, grab all VMs in RG
elseif ($ResourceGroupNames -and -not $VMNames)
{
    foreach ($rg in $ResourceGroupNames)
    {
        Write-Output "Collecting VM facts from resource group $rg"
        $vms += Get-AzureRmVM -ResourceGroupName $rg
    }
}
#Finally, if a list of VMs is specified, grab only that list
else
{
    foreach ($VMName in $VMNames)
    {
        Write-Output "Collecting facts for VM $VMName"
        $azureResource = Get-AzureRmResource -Name $VMName
        $vms += Get-AzureRMVM -Name $VMName -ResourceGroupName $azureResource.ResourceGroupName
    }
}

Finally, we’ll add the code that deploys and configures the extension

#Configure the workspace information
$workspace = Get-AzureRmOperationalInsightsWorkspace -Name $WorkspaceName -ResourceGroupName $LAResourceGroup -ErrorAction Stop
$key = (Get-AzureRmOperationalInsightsWorkspaceSharedKeys -ResourceGroupName $LAResourceGroup -Name $WorkspaceName).PrimarySharedKey

$PublicSettings = @{"workspaceId" = $workspace.CustomerId }
$ProtectedSettings = @{"workspaceKey" = $key }

#Loop through each VM and deploy either the Linux or the Windows extension.
foreach ($vm in $vms)
{
    $vmStatus = (Get-AzureRmVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Status).Statuses |
        Where-Object { $_.Code -like 'PowerState/*' } |
        Select-Object -ExpandProperty DisplayStatus

    Write-Output "Processing VM: $($vm.Name)"

    if ($vmStatus -ne 'VM running')
    {
        Write-Warning -Message "Skipping VM as it is not currently powered on"
        continue
    }

     #Check to see if Linux or Windows
    if (-not $vm.OsProfile.LinuxConfiguration)
    {
        $extensions = Get-AzureRmVMExtension -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name -Name 'Microsoft.EnterpriseCloud.Monitoring' -ErrorAction SilentlyContinue

        #Make sure the extension is not already installed before attempting to install it
        if (-not $extensions)
        {
            Write-Output "Adding extension to VM: $($vm.Name)"
            $result = Set-AzureRmVMExtension -ExtensionName "Microsoft.EnterpriseCloud.Monitoring" `
                -ResourceGroupName $vm.ResourceGroupName `
                -VMName $vm.Name `
                -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
                -ExtensionType "MicrosoftMonitoringAgent" `
                -TypeHandlerVersion '1.0' `
                -Settings $PublicSettings `
                -ProtectedSettings $ProtectedSettings `
                -Location $vm.Location
        }
        else
        {
            Write-Output "Skipping VM - Extension already installed"
        }
    }
    else
    {
        $extensions = Get-AzureRmVMExtension -ResourceGroupName $vm.ResourceGroupName -VMName $vm.Name -Name 'OmsAgentForLinux' -ErrorAction SilentlyContinue

        #Make sure the extension is not already installed before attempting to install it
        if (-not $extensions)
        {
            Write-Output "Adding extension to VM: $($vm.Name)"
            $result = Set-AzureRmVMExtension -ExtensionName "OmsAgentForLinux" `
                -ResourceGroupName $vm.ResourceGroupName `
                -VMName $vm.Name `
                -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
                -ExtensionType "OmsAgentForLinux" `
                -TypeHandlerVersion '1.0' `
                -Settings $PublicSettings `
                -ProtectedSettings $ProtectedSettings `
                -Location $vm.Location
        }
        else
        {
            Write-Output "Skipping VM - Extension already installed"
        }
    }
}

Now that you have the runbook created, I recommend running a few tests to ensure you’re seeing the right behavior. To get started with running a test, see this article

Once you’ve tested and verified the runbook, the only things left to do are to publish it and set a recurring schedule.
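
If you prefer to publish and schedule from PowerShell rather than the portal, a rough sketch with the AzureRM Automation cmdlets could look like this (the automation account, resource group, runbook and schedule names are placeholders):

# Publish the runbook
Publish-AzureRmAutomationRunbook -AutomationAccountName 'MyAutomationAccount' -ResourceGroupName 'MyAutomationRG' -Name 'Install-LogAnalyticsExtension'

# Create a daily schedule and link the runbook to it, passing the required parameters
New-AzureRmAutomationSchedule -AutomationAccountName 'MyAutomationAccount' -ResourceGroupName 'MyAutomationRG' -Name 'DailyLAExtensionCheck' -StartTime (Get-Date).AddHours(1) -DayInterval 1

Register-AzureRmAutomationScheduledRunbook -AutomationAccountName 'MyAutomationAccount' -ResourceGroupName 'MyAutomationRG' -RunbookName 'Install-LogAnalyticsExtension' -ScheduleName 'DailyLAExtensionCheck' -Parameters @{
    azureSubscriptionID = '<subscription id>'
    azureEnvironment    = 'AzureCloud'
    WorkspaceName       = '<workspace name>'
    LAResourceGroup     = '<workspace resource group>'
}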

I hope you all find this useful!
