Secure Infrastructure Blog

SCOM Effective Monitoring Configuration – Decoded


Since the advent of SCOM, one question has lingered among admins and end consumers: what is the effective monitoring configuration for a server, an object, or an alert? Microsoft answered with the Export-SCOMEffectiveMonitoringConfiguration PowerShell cmdlet, which you can use to get the effective configuration for an instance (object) and export it to a CSV file.

Stefan Roth has used this and done it his (PowerShell) way here. His approach has the advantage of retrieving the effective configuration for all objects contained in a computer, and it displays the data in a GUI while also dumping it to CSV.

But I thought I would write up this post to expose the logic behind Export-SCOMEffectiveMonitoringConfiguration, so that you can use it and build your own logic around it.

Effective monitoring configuration is evaluated from the object, the monitor/rule, and the effective overrides for the object (if any). In this blog post, I will decode the logic with an example of the most common requirement among end consumers – getting the effective monitoring configuration for each SCOM alert.

The first step is to get the monitored object, and there are multiple ways to do this using the Get-SCOMClassInstance PowerShell cmdlet. For our scenario, we will get the monitored object from a SCOM alert.

From SCOM Alert:

$SCOMAlerts = Get-SCOMAlert -Name $AlertName -ResolutionState $ResolutionStateNumber

Foreach ($SCOMAlert in $SCOMAlerts) {

        $MonitoredObject = Get-SCOMClassInstance -Id $SCOMAlert.MonitoringObjectId

}

The second step is to get the associated monitor/rule for which you need the effective configuration. Get-SCOMMonitor/Get-SCOMRule offer several ways to get the desired monitor/rule; for our scenario, we will get the monitor/rule for each SCOM alert we are interested in.

If ($SCOMAlert.IsMonitorAlert -eq $true) {

    $Workflow = Get-SCOMMonitor -Id $SCOMAlert.MonitoringRuleId   #For Monitor Based Alert

}

Else {

    $Workflow = Get-SCOMRule -Id $SCOMAlert.MonitoringRuleId  #For Rule Based Alert

}

The third step is to get the configuration for the monitor or rule obtained above.

If ($Workflow.Configuration){

    #Get Workflow Default Config for a Monitor Based Alert (cast the fragment to XML so ChildNodes can be read)

    [xml]$Config = "<configuration>" + $Workflow.Configuration + "</configuration>"

    $WorkflowConfig = @{}

    $Config.Configuration.ChildNodes | Foreach {$WorkflowConfig[$_.Name] = $_.'#text'}

}

Elseif ($Workflow.DataSourceCollection.Configuration){

    #Get Workflow Default Config for a Rule (cast the fragment to XML so ChildNodes can be read)

    [xml]$Config = "<configuration>" + $Workflow.DataSourceCollection.Configuration + "</configuration>"

    $WorkflowConfig = @{}

    $Config.Configuration.ChildNodes | Foreach {$WorkflowConfig[$_.Name] = $_.'#text'}

    If ($WorkflowConfig.ContainsKey("Expression")) {

        If ($WorkflowConfig['Expression'] -eq $null) {$WorkflowConfig['Expression'] = $Config.Configuration.Expression.InnerText}

    }

}

The fourth step is to get the resultant overrides for the monitored object – monitor/rule pair.

#Get Resultant Overrides for the Object-Monitor Pair

$Overrides = ($MonitoredObject.GetResultantOverrides($Workflow)).ResultantConfigurationOverrides

$n = $Overrides.Count

The final step is to iterate over each override and get the property and its value. Once the overridden property and its value are obtained, we replace the original configuration value for that property with the overridden value.

If no overrides are returned, the effective configuration is the same as the monitor/rule configuration; otherwise, we start from a copy of it so the overrides can be applied on top.

If ($n -eq 0) {

    $EffectiveConfig = $WorkflowConfig

}

Else {

    $EffectiveConfig = $WorkflowConfig.Clone()

}

If there are overrides, apply each overridden value on top of the copied configuration:

If ($n -eq 1) {

    $Key = $Overrides.Keys.Name

    $Value = $Overrides.Values.EffectiveValue

    $change = $EffectiveConfig.GetEnumerator() | ? {$_.Key -eq $Key}

    $change | % { $EffectiveConfig[$_.Key] = $Value }

}

If ($n -gt 1) {

    for ($i=0; $i -lt $n; $i++) {

        $Key = $Overrides.Keys.Name[$i]

        $Value = $Overrides.Values.EffectiveValue[$i]

        $change = $EffectiveConfig.GetEnumerator() | ? {$_.Key -eq $Key}

        $change | % { $EffectiveConfig[$_.Key] = $Value }

    }

}

The effective configuration is now stored in the variable $EffectiveConfig. You can display it or use it further in whatever logic you are building.

$EffectiveConfig

One of the most effective ways of using this is in a connector framework, for example when integrating SCOM with a ticketing tool through Orchestrator. You can fetch the effective configuration for each alert and update the alert description, so that it informs the consumers of the alert – whether the Operations team or the Engineering team – and helps them address the issue effectively.
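As a hedged illustration of that idea – a minimal sketch, assuming you want the configuration stamped into Custom Field 1 ($SCOMAlert and $EffectiveConfig come from the earlier steps in this post):

#Flatten the effective configuration into a single string
$configText = ($EffectiveConfig.GetEnumerator() | Foreach {"$($_.Key) = $($_.Value)"}) -join '; '

#Set-SCOMAlert exposes CustomField1..CustomField10 for this kind of enrichment
Set-SCOMAlert -Alert $SCOMAlert -CustomField1 $configText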

The PowerShell script can be downloaded here.

Happy Scripting!



Installing SCOM 2019 fails with “Error: :PopulateUserRoles: failed”


Background

I recently came across a scenario where installing SCOM 2019 fails shortly after the Operational database configuration step, specifically during the Populating User Roles sequence.

The installation account is a member of the sysadmin SQL role and the SQL Server is configured to run with native security, so this behavior was not expected.

Investigating the OpsMgrSetupWizard.log


[12:48:41]: Error: :PopulateUserRoles: failed : Threw Exception.Type: System.ArgumentException, Exception Error Code: 0x80070057, Exception.Message: Value does not fall within the expected range.
[12:48:41]: Error: :StackTrace: at Microsoft.Mom.Sdk.UserRoleSetup.SetupProgram.populateUserRoles(String adminRoleGroup, String sdkAccount, InstallTypes installType, String installDirectory, Boolean overwriteExistingUsers)
at Microsoft.EnterpriseManagement.OperationsManager.Setup.ServerConfiguration.PopulateUserRoles(String adminRoleGroup, String sdkAccount, String installDirPath)
[12:48:41]: Error: :FATAL ACTION: PopulateUserRoles
[12:48:41]: Error: :FATAL ACTION: DatabaseActions

My initial thought was that TLS 1.2 was being enforced in the environment, but the customer confirmed this was not the case. The registry on the Management Server and the SQL Server did not provide any evidence of the older protocols being disabled.

Investigating the Windows System Event Log

However, the Windows System event log was flooded with Event ID 36871.

Resolution

Investigating this event eventually pointed me to the permissions set on the MachineKeys folder. When comparing the security permissions on C:\ProgramData\Microsoft\Crypto\RSA to a clean, working Management Server installation, the ACL in my customer's environment included “Network Services”, which does not appear to be a default entry.
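If you want to compare the ACL quickly against a known-good server, a minimal PowerShell sketch (the path below is the default MachineKeys parent folder) could look like this:

#Dump the ACL of the MachineKeys folder so it can be compared with a healthy Management Server
Get-Acl -Path 'C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys' |
    Select-Object -ExpandProperty Access |
    Format-Table IdentityReference, FileSystemRights, AccessControlType -AutoSize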

After changing the Security to align with the working Management Server, SCOM installed successfully.

Azure Application Insight – Create Dashboard from several resources


Introduction:

Azure Application Insights is a solution that keeps track of application performance and failures, user behavior, and more.

For an introduction, here are a few words about the service:

https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview

Once the service is configured and monitoring is turned on, telemetry is sent using the ‘Instrumentation Key’, so the data collected is routed to a unique Azure Application Insights (AI) resource.

Views in the Application Insights resource are already built for us by default; at the click of a button you can open a full data screen covering all aspects of the monitored service – the Application Dashboard.

Application Insight resource > Application dashboard


This dashboard shows all the relevant information, including an Alerts link with the state of the current resource, a Smart Detection link, and metrics on response times, failures, and the number of users in near real time, and more.

It is recommended to separate monitored services into different Azure AI resources, so that the data presented to each SME is easy to understand and the system is easy to maintain.

However, when we need to display information from several resources together – for administrators, for example, or for specific metrics we want on a single screen – the default dashboard is of no use to us, since, as mentioned, it only contains telemetry data sent to the current AI resource.

Challenge:

What do we do when we need a dashboard that displays metrics and alerts from different resources?

Examples of information we want to see in this dashboard: alerts with their state, request and availability metrics, number of failures in one tile, etc.

Solution:

  • Create new Azure Dashboard
  • Add metrics
  • Configure alerts
  • Add links to Dashboard

New Azure Dashboard

In Azure Portal, Dashboard blade, New Dashboard.

Add metrics

Most of the data presented in the Application Dashboard is metrics:

Go to the Azure portal, open an Application Insights resource, select the Metrics tab, choose the value you want to show for that resource, and add the metric.

Now we can add another metric, but in the resource pane you can select a different AI resource:

Before pinning, you can edit the title so that it clearly explains what is displayed on the screen.

To add this metrics view to the dashboard, press Pin to Dashboard.

Built-in screens can't have their titles changed, and some can't combine multiple resources; the Application Map link, for example, can't join several resources into one screen, but you can of course add two links.

A filter can be added to each screen, giving additional filtering ability in the information view.

For example, in this case I added a filter on requests by city; after the change, pin it again:

Configure Alerts

Now we need to pin the Alerts screen to the dashboard.

However, for the filter to be saved, you can only do this with classic alerts.

Go to the Azure portal, Monitor blade, Alerts, and press “Here” in “Classic alerts can be accessed from HERE”.

On this screen, leave the Resource field unfiltered and pin to the dashboard:

After pinning, you will see the following tile in your dashboard.

You can add links to AI Search and other views, but each can only be set to a specific resource.

In the Azure dashboard's Tile Gallery, search for ‘Application Insights’ to add links to the dashboard.

However, each link can only be configured for one resource.

Security – Transport Layer Security (TLS) 1.2 Calculation

  1. Enabling TLS and SSL on Windows machines requires you to set registry keys. https://support.microsoft.com/en-us/help/3140245/update-to-enable-tls-1-1-and-tls-1-2-as-default-secure-protocols-in-wi


2. If you want to enable more than one protocol (in case you are worried that not using TLS 1.0 or 1.1 will break your websites), add up the hex values from the KB article using Calculator in Programmer mode with HEX selected, e.g. TLS 1.2 + TLS 1.1 + TLS 1.0 = 800 + 200 + 80 = A80.

3. Now create the DefaultSecureProtocols DWORD with that value under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp\ (as sketched below).
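As a hedged PowerShell sketch (the value 0xA80 assumes you want TLS 1.0, 1.1 and 1.2 enabled; per the linked KB, 64-bit systems may also need the same value under the corresponding Wow6432Node key):

#Create the DefaultSecureProtocols DWORD for WinHTTP (0xA80 = TLS 1.0 + TLS 1.1 + TLS 1.2)
$path = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp'
New-ItemProperty -Path $path -Name 'DefaultSecureProtocols' -PropertyType DWord -Value 0xA80 -Force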

Now you can go ahead and deploy these settings via System Center Configuration Manager or any other technology you normally use, such as PowerShell scripts, logon scripts, and more.

VIDEO

https://msit.microsoftstream.com/video/28074c3c-fa6b-4866-89a8-df8526d96216

If you have anything to add or would like to correct me in any of the steps please reach out and I will be happy to discuss.

Using SCCM DCM Feature to monitor GPO application in the environment


The Issue

A common issue that keeps being experienced across customer sites is the application of Group Policies on machines.

By default, when a GPO is created and linked, it should apply to all the machines it was linked to, and in most cases this works perfectly. However, how do you know when this is not happening?

There may be multiple reasons why GPOs are not applying, whether it is a corrupt policy cache or a timeout while downloading the policy over slower links. In most larger environments you do not have an easy way to check the application of policies across your entire estate, and manually going to machines and running GPResult /h is not feasible.

This is where we can start using the DCM feature of SCCM to help you.

Desired Configuration Management in SCCM is a mechanism where we deploy configuration baselines that are made up of configuration items (CIs).

You can read more about the topic here

Configuration items define a discrete unit of configuration to assess for compliance. They can contain one or more elements and their validation criteria, and they typically define a unit of configuration that you want to monitor at the level of independent change.

Configuration baselines contain one or more configuration items with associated rules, and they are assigned to computers through collections, together with a compliance evaluation schedule.

Now how does this apply to our topic above? Well, that is essentially what a GPO is as well: a list of defined settings that are bundled together and deployed out to be enforced on machines.

So let us start putting all of this together below:

We need a combination of 3 different tools here.

  1. System Center Configuration Manager
  2. Backup of your GPO that you want to measure\remediate against
  3. Microsoft Security Compliance Manager (For converting GPO backups\baselines to DCM.cab files) can be downloaded from here

Process flow will be as follows:

  1. Either import a GPO backup or a security baseline into Security Compliance Manager
  2. Modify the baseline to include the settings you want, then export the GPO\baseline to SCCM DCM 2007 format
  3. Import the CAB file into SCCM
  4. Deploy the baseline (with or without remediation settings, as per below)

Pre-export step of modified Baseline\GPO

The baseline has now been exported; select the name, select Save, and choose a location.

Open SCCM – navigate to Assets and Compliance – Compliance Settings and start the import steps

Select the Import Configuration Data
Select the Baseline, select Open
Select to Accept the baseline that is being Imported
Review the CI’s being created as part of Baseline

The baseline is now successfully imported; I can now proceed to review the individual CIs

I can choose to simply deploy the Baseline (as per below)

Deploying without remediating (for simply seeing what machines are NOT compliant, reporting on what the non-compliant issues are)

We are not remediating at this point, but do want an alert if this falls below 90% successful, and want it to run every 1 hour

The Baseline has run and reported a failure, the report below has been filtered to show one setting in particular for ease of demo

By default, this is the point up to which the built-in DCM functionality will report on non-compliance for you.

You can either use this to see where you have configuration drift in your organization and target those machines for manual intervention, or, if you are feeling up to it, you can edit the individual CIs and add a remediation script for each setting (PowerShell, VBScript, etc.).

Just a note though: DCM can remediate registry entries for you automatically, so if you have a CI that is checking for a specific setting – “RDP is enabled = 0”, for instance – you will see the option on the CI to “Remediate noncompliant rules when supported”.
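If you do go the script route, here is a minimal hedged sketch of a remediation script for that RDP example (the registry path is the standard Terminal Server setting; adapt it to whatever your CI actually validates):

#Remediation sketch: ensure RDP is enabled (fDenyTSConnections = 0)
$path = 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server'
$current = (Get-ItemProperty -Path $path -Name fDenyTSConnections -ErrorAction SilentlyContinue).fDenyTSConnections
If ($current -ne 0) {
    Set-ItemProperty -Path $path -Name fDenyTSConnections -Value 0
}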

I would highly recommend that this is set up and used for your critical GPOs (members of local admin groups, or your different security policies) to make sure you are getting the coverage of the GPOs that you want.

Field Notes: Azure Active Directory Connect – Troubleshooting Task Overview


This is a continuation of a series on Azure AD Connect. Previous parts have mostly been focusing on the installation and configuring different user sign-in options for Azure AD. Links to these are provided in the summary section below.

Now that we have covered the common setup options for Azure AD Connect, I would like to switch gears a little and discuss troubleshooting. In this post, I cover the troubleshooting task available in Azure AD Connect version 1.1.614.0 and newer.


Azure AD Connect Troubleshooting

The Azure AD troubleshooting task is triggered by selecting troubleshoot under additional tasks as depicted below.

Selecting the ‘troubleshoot’ task and clicking next presents the Welcome to Azure AD Connect Troubleshooting screen, which provides the ability to launch the troubleshooter. Click Launch to proceed.

This opens up a PowerShell window with the following options:

  • [1] Troubleshoot Object Synchronization
  • [2] Troubleshoot Password Synchronization
  • [3] Collect General Diagnostics, and
  • [Q] Quit

You may need to set the PowerShell execution policy to remote signed or unrestricted.
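A hedged one-liner for this, scoped to the current session only so the machine-wide policy is left untouched:

#Relax the execution policy for this PowerShell session only
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force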


Let’s explore each option.


Troubleshooting Object Synchronization

Selecting the first option allows us to troubleshoot object synchronization. For this demonstration, we will focus on diagnosing object synchronization issues by pressing number 1 and hitting the enter key.

The troubleshooter enumerates a list of connectors and prompts for a distinguished name of the object of interest. This is followed by a request for the Azure AD tenant’s global administrator credentials. Next, it attempts to connect to the Azure AD tenant, and checks both the domain & OU filtering configuration.

An HTML report is generated and exported to the C:\ProgramData\AADConnect\ADSyncObjectDiagnostics folder. Below is a sample that shows object details for the on-premises directory and the Azure AD Connect database.

In the example depicted above, I reproduced a synchronization issue by using a duplicate attribute for the test account I am using. On the flip side, with an account that is successfully synchronized, we see that object details for Azure AD are also provided, with information such as last directory synchronization time, immutableId, and UPN, as shown below:

Do we have other options for this scenario? Yes – IdFix, Azure AD Connect Health and the Synchronization Service Manager. Let’s briefly go through each.

IDFix

IdFix identifies errors such as duplicates and formatting problems in on-premises directories before an attempt to synchronize objects to the Azure AD tenant.

In this example, we can see we have two objects with the same attribute value.

Azure AD Connect Health

Azure AD Connect Health provides robust monitoring of your on-premises identity infrastructure.

In this example, we see that contactus@idrockstar.co.za is a duplicate attribute value (Error Type:AttributeValueMustBeUnique).

Synchronization Service Manager

The Synchronization Service Manager UI is used to configure more advanced aspects of the sync engine and to see the operational aspects of the service.

Unable to update this object because the following attributes associated with this object have values that may already be associated with another object in your local directory services: [ProxyAddresses SMTP:contactus@idrockstar.co.za;]. Correct or remove the duplicate values in your local directory. Please refer to http://support.microsoft.com/kb/2647098 for more information on identifying objects with duplicate attribute values.


Troubleshooting Password Hash Synchronization

Troubleshoot Password Hash Synchronization is the second option on the main menu, which is invoked by pressing 2 and hitting the enter key. For the purpose of this demonstration, we select option 3 from the sub-menu (synchronize password hash for a specific user account). Other options are:

  • Password hash synchronization does not work at all
  • Password hash synchronization does not work for a specific user account
  • Going back to the main menu, and quitting the program

The single object password hash synchronization utility attempts to synchronize the current password hash stored in the on-premises directory for a user account. A distinguished name of an object is required as input. Let’s see two scenarios in action:

  • An attempt to synchronize a password of an object that has not yet been exported
  • Synchronizing a password of an object that has already been exported

Account not exported

I am using the account that reported errors in the troubleshooting object synchronization section above to demonstrate this. After providing the distinguished name, we see a message confirming that password hash synchronization is enabled for the connector. This is followed by a message stating that password hash synchronization has failed. This is obviously because the object has not yet been exported.


Account is exported

Now what happens if an account has already been exported? The password hash is synchronized successfully.


Collecting General Diagnostics Information

Let’s explore the last option – collect general diagnostics. With this option, the troubleshooter collects diagnostics information. The output report contains useful information such as Azure AD tenant settings, Azure AD Connect settings, sync scheduler and more:

There is also a lot of useful troubleshooting information stored in the C:\ProgramData\AADConnect\<date>-111422_ADSyncDiagnosticsReport folder.



Summary

Previous parts of this blog post series have mostly been focusing on installation and configuring different user sign-in options for Azure AD. Here’s a list for reference:

This post was an introduction to troubleshooting, covering the troubleshooting task available in Azure AD Connect.

References

Till next time…

Azure AD Best Practice: Requiring users to periodically re-confirm their authentication information


Disabling the authentication methods re-confirmation prevents users from updating potentially outdated information, such as an email address or phone number, and can decrease the effectiveness of Self-Service Password Reset (SSPR). It may also result in password reset information being sent to an unintended recipient. The default setting in Azure AD requires users to re-confirm their authentication information every 180 days, and it is recommended to maintain this configuration unless a defined business need requires otherwise.

However, this re-confirmation can seem annoying, so some organizations cave to complaints and disable it. As a best practice, keep it enabled and set it to a more comfortable re-confirmation schedule to help secure the user identity and keep it current.

To enable it or alter the default number of days:

  1. Login to https://portal.azure.com
  2. Click the Azure Active Directory blade in the console.
  3. Click Users
  4. Click Password reset
  5. Click Registration
  6. Change the number of days to a value other than 0 (default is 180 days).

Re-confirm authentication information


System Center Configuration Manager –“Error Deploying Windows 10 In Place Upgrades with McAfee DLP Endpoint”


The Issue

Trying to do an In Place Windows 10 Upgrade with McAfee DLP Endpoint fails. As soon as the Operating System is applied the machine restarts and simply starts up to the “Repair” screen.

The Investigation

In this case the In Place Upgrade was being performed by System Center Configuration Manager using an In Place Upgrade Task Sequence. This means we have some logs to go through.

After digging into smsts.log (%windir%\ccm\logs\smsts.log), we could see there was an extra switch added to the Windows 10 Setup.exe command line.

This parameter is what is required by McAfee for you to complete your upgrade and can be viewed on their website – https://kc.mcafee.com/corporate/index?page=content&id=KB89000.

There is also a great article by a Microsoft MVP – https://www.anoopcnair.com/in-place-os-upgrade-on-mcafee-encrypted-machines-using-sccm-ts/

But even after confirming the settings were correct, the upgrade was still failing. So it was time to look a little deeper, in the Panther folder.

If you do not know what Panther is, it is basically a folder that contains some helpful files for troubleshooting Windows upgrades. The location of this folder can differ, so have a look at this link – https://support.microsoft.com/en-us/help/927521/windows-vista-windows-7-windows-server-2008-r2-windows-8-1-and-windows

The Panther directory usually looks something like the below:

To me, the two most helpful files in the Panther directory are always

1. ActionableReport.html

and

2. CompatData_xxxxxxxxxxxx.XML

The Solution

As we could see in the Actionable Report and the XML file, it was clearly still DLP Endpoint causing a “Hard” block and an “UpgradeBlock”, so we pointed our efforts in that direction. After some review of the McAfee article, we figured out the DLP Endpoint version was not compatible with an upgrade to Windows 10 1809. Refer to the table and link below.

After updating to a supported version the upgrade went through successfully.

If you have anything to add or would like to correct me in any of the steps please reach out and I will be happy to discuss.

Active Directory security Best Practices : Part 1


Active Directory is identified as one of the most business-critical applications – any outage can cause downtime for users and services – so it needs special care and high attention in terms of security, backup, and health. Every day as I visit customers, there is a frequent question that I keep receiving:

How can I secure my AD infrastructure?

The truth is that AD itself is not the actual target of the attacker; it is the path that enables him to reach his target, whether that is to steal confidential data, cause an outage, gain reputation, bargain for money, etc.

Something else I like to mention: some customers still think that as long as they have a firewall and security mitigations at the network level they are already protected. Believe me, you are not! Modern attacks can cross this line, so you need to follow the defense-in-depth concept – all your layers need to be secured: network, servers, and applications. Especially now, with the cloud and integrations between companies and services, your users and data will always be in motion and you need to maintain their security.

So through this series I'm going to answer this question, and I will try to simplify it as much as I can. Having a secure AD infrastructure is a long way to go, but at least we need to maintain the basics of security and keep going step by step until we can say: okay, my AD infrastructure is secured!

First, let me give you a quick introduction to why we need to secure our AD. The answer is simple: because it is the repository of all identities. For the attacker to gain access to his target, he needs to compromise a domain account, and there are a lot of ways to do that now – you must have heard about pass-the-hash, pass-the-ticket, and golden ticket attacks, etc. They are all based on the attacker gaining access to a machine inside the network, extracting the hashes in RAM, and moving laterally until he is able to get the hash of a domain admin account, at which point the whole forest is under his control. This is why we need to make this task (obtaining a domain admin account) very difficult for him by securing our identities.

So here I’m going to talk about one of the main Active Directory Security mitigations,

Secure Privileged Accounts:

As we mentioned, for the attacker to gain access to his target he needs an account with privilege, so we need to make this task harder for him. Here is how we can do this:

1. Patch Patch Patch till the end of the world

  • 99% of incidents in 2014 involved vulnerabilities for which patches were released in 2013 or earlier.
  • 90% of incidents in 2013 involved vulnerabilities which were patched in 2007.
  • Patching does not guarantee 100% security, but it is mandatory if you want to maintain the basics of security.

2. Credentials Partitioning

  • Never use the same account for your daily task and administrative tasks.
  • Your admin account should be restricted from connecting to the internet, email, and LOB applications.
  • Maintain the tier model, which is based on dividing your admin accounts into three tiers and blocking access between these tiers to prevent privilege escalation.
  • If you have a small team that manages all tiers, every member of the team will need a dedicated account for every tier, so we can guarantee that even if one of these accounts is compromised, the attacker will be locked into that tier and will not be able to escalate his privilege to the higher tiers.


3. Privileged Access workstation (PAW)

  • Use dedicated hardened workstation for the administrative tasks.
  • Must not connect to the internet, email, or any LOB application.
  • Hardened using app whitelisting, IPsec, firewall, etc.
  • Dedicated PAW per Tier per administrator.
  • Block access between tiers.


4. Least Privilege 

  • Minimize the number of high-privilege groups, as every member increases the attack surface.
  • Maintain a proper delegation model based on the least privilege concept.
  • Use the Privileged Access Management feature available in Windows Server 2016 to give temporary privileges to users; the privilege is revoked automatically after a specific amount of time (see the sketch after this list).
  • Build a workflow of approvals for joining specific groups; this can be done using MIM.
  • Give special attention to service accounts, as they are usually members of high-privileged groups with passwords set to never expire. Make sure they really need this privilege; otherwise give them the least privilege they need to accomplish the task.
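A minimal hedged sketch of that temporary-membership idea (it assumes the Privileged Access Management optional feature is enabled in the forest; the group, account, and domain names are made up):

#Requires the PAM optional feature, e.g.:
#Enable-ADOptionalFeature 'Privileged Access Management Feature' -Scope ForestOrConfigurationSet -Target contoso.com

#Grant Domain Admins membership that expires automatically after two hours
Add-ADGroupMember -Identity 'Domain Admins' -Members 'temp-admin' -MemberTimeToLive (New-TimeSpan -Hours 2)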

That is all for now. Our next blog will be about how we can mitigate the lateral movement of the attacker inside the environment – stay tuned!

Deploy Azure Kubernetes Service (AKS) to a preexisting VNET


I recently ran into an issue where I needed to deploy AKS in an environment with a limited number of available IP addresses. If you’ve ever deployed AKS before, you might have noticed that using the default settings creates a new VNET with a /8 CIDR range (16,777,214 hosts), which was way too large for this environment, as the largest range we could use was a /23 (510 hosts).

Since AKS uses the kubenet plugin by default, the pods will be getting their IPs from a virtual network that resides inside the cluster (separate from the Azure VNET), which eliminates the need to use a large CIDR range in Azure.

The steps below will walk you through the process of deploying your cluster and using not only a preexisting VNET, but one that resides in a resource group that’s separate from your cluster resource group.

Prerequisites

Create a service principal

Most guides that walk through creating a service principal for AKS recommend doing so using the command

 $ az ad sp create-for-rbac --skip-assignment

While this works just fine, it doesn’t provide any rights to the service principal and requires you to configure a role and scope after you’ve created the AKS cluster. Instead of doing this in two steps, I prefer to use this command to handle it all at once.

$ az ad sp create-for-rbac -n AKS_SP --role contributor \
    --scopes /subscriptions/061f5e92-edf2-4389-8357-a16f71a2cbf3/resourceGroups/AKS-DEMO-RG \
            /subscriptions/061f5e92-edf2-4389-8357-a16f71a2cbf3/resourceGroups/AKS-VNET-RG

What I’m doing with the above command is setting the scope of the service principal to have contributor rights on two resources groups. The first resource group (AKS-DEMO-RG) will contain the AKS cluster and the second (AKS-VNET-RG) contains the virtual network and subnet that will be used for the cluster resources. I’m also providing a name for the service principal (AKS_SP) so it’s easy to identify later on down the road. If you use the default name it will be labeled azure-cli-yyyy-mm-dd-hh-mm-ss, which as you can see is not quite as friendly nor identifiable as AKS_SP

When the command completes, you should see the following output:

{
    "appId": "b2abba9c-ef9a-4a0e-8d8b-46d8b53d046b",
    "displayName": "AKS_SP",
    "name": "http://AKS_SP",
    "password": "2a30869c-388e-40cf-8f5f-8d99fea405bf",
    "tenant": "dbbbe410-bc70-4a57-9d46-f1a1ea293b48"
}

Make note of the appId and the password, as they will be required in the next step.

Create the cluster

In this section we’ll create our AKS cluster and configure the required tools to interact with it after deployment.

In the below example, replace the parameters with values that suit your environment. The Service Principal and Client Secret parameters should match the appId and password from the output of the az ad sp create command above.

 az aks create --resource-group AKS-DEMO-RG --name demoAKSCluster \
 --service-principal "b2abba9c-ef9a-4a0e-8d8b-46d8b53d046b" \
 --client-secret "2a30869c-388e-40cf-8f5f-8d99fea405bf" \
 --vnet-subnet-id "/subscriptions/061f5e92-edf2-4389-8357-a16f71a2cbf3/resourceGroups/AKS-VNET-RG/providers/Microsoft.Network/virtualNetworks/AKS-DEMO-VNET/subnets/S-1"

Install kubectl

$ sudo az aks install-cli

Fetch the credentials to use for the connection to the cluster

$ az aks get-credentials --resource-group AKS-DEMO-RG --name demoAKSCluster

You should see the following output

Merged "demoAKSCluster" as current context in /home/azureadmin/.kube/config

Test connectivity to the cluster

$ kubectl get nodes

All of your nodes should appear in a Ready status

Additionally, you should see the NIC for each of your nodes connected to the VNET/subnet you provided during deployment.

And that’s it. You now have an AKS cluster deployed using a preexisting virtual network and subnet.

In my next post, I’ll show you how to configure TLS for Helm and Tiller, and deploy an ingress-controller with SSL termination all with certificates issued by a Windows Certificate Authority.

Quick blog – Importing Updates into WSUS – CVE-2019-1367


A question that was raised this week by quite a few customers is around importing updates into the SCCM environment that are not available on WSUS, but are on Microsoft Update.

The steps below will guide you through getting the updates into the environment quickly.

As per the CVE advisory, there are a couple of updates you will have to manually import into WSUS for now, should you wish to get the updates deployed as soon as possible.

https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2019-1367

The steps are as per below

In SCCM and WSUS verify that the update you want is not listed, in this case I am looking for (4522015)

On WSUS Server, select Updates, right-click – import updates (this will open a webpage to the catalog.update.microsoft.com site)

Select the KB you want and hit search

Now select the applicable ones to your environment – add to basket

View basket – ensure the “import directly into WSUS “ is enabled, then click import

Once it is completed, re-search through wsus for update

Now just sync WSUS (from within SCCM), and once done you can download\deploy the update

Active Directory Security Best Practices: Part 2


Hello again! This is our second blog about AD security best practices. In our first blog we talked about one of the most important security mitigations, securing privileged accounts; you can find it at the following link:

https://secureinfra.blog/2019/09/26/active-directory-security-best-practices-part-1/

Here we will talk about our second mitigation:

Slow Lateral Movement

Let's first explain what lateral movement is, to understand why we need to prevent it. When the attacker succeeds in gaining access to one machine – normally a user workstation – and his target is a domain controller or another high-privileged system, the first thing he will do is extract the hashes in RAM to find a high-privileged account that can take him to his target or an even higher tier. From there he can do the same until he reaches the top tier. So what we need to do is lock the attacker inside the compromised machine so he can't escalate to higher tiers, or even move laterally inside the same tier.

One example: the attacker may succeed in getting the local administrator account, and normally most organizations use the same name and same password for the local administrator everywhere. In that case the attacker will be able to use this account to access all the machines, then start moving laterally between them, extracting hashes, until he gets a domain admin hash – and the whole kingdom will be under his control.

So here is how we can mitigate against lateral movement – this is, of course, side by side with the secured privileged account practices we discussed earlier:

1. Firewall

  • Do you have any business reason to allow communications between workstations? Use the firewall to block traffic between workstations, or allow only the required traffic between workstations and between workstations and applications. For example, if you have SCCM, allow only the ports needed by the SCCM agent installed on the machines (a hedged local-rule sketch follows below).
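A minimal hedged sketch of the idea as a local Windows Firewall rule (in practice you would push this via GPO, and the ports and scope shown – SMB and RDP blocked from the local subnet – are only an assumption to illustrate the concept):

#Block inbound SMB and RDP from peers on the same subnet (typical workstation-to-workstation paths)
New-NetFirewallRule -DisplayName 'Block peer workstation SMB/RDP' `
    -Direction Inbound -Action Block -Protocol TCP -LocalPort 445,3389 `
    -RemoteAddress LocalSubnet -Profile Domain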


2. GPO Based Restrictions

  • Use GPOs to restrict network logon for the local administrator account, so the attacker can't use it to move laterally between workstations.


3. Unique Random Password for local administrator account

  • Use a tool like LAPS to randomize the local administrator password on endpoints, so that if the attacker compromises the local admin account of one machine he can't use it to access the others. In the following link you will find a step-by-step guide for deploying LAPS; it is a free tool and very easy to implement and manage. It creates a unique password for the local admin on every workstation and changes it automatically, every 30 days by default: https://gallery.technet.microsoft.com/step-by-step-deploy-local-7c9ef772
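Once LAPS is deployed, day-to-day management is done with the AdmPwd.PS module that ships with it; a hedged sketch (the OU, group, and computer names are made up):

Import-Module AdmPwd.PS

#Let computers in the OU update their own password attribute
Set-AdmPwdComputerSelfPermission -OrgUnit 'OU=Workstations,DC=contoso,DC=com'

#Delegate password-read rights to a specific group only
Set-AdmPwdReadPasswordPermission -OrgUnit 'OU=Workstations,DC=contoso,DC=com' -AllowedPrincipals 'CONTOSO\Helpdesk-Admins'

#Retrieve the current password for one machine
Get-AdmPwdPassword -ComputerName 'WKS-001' | Select-Object ComputerName, Password, ExpirationTimestamp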


That is all for now. Our upcoming blogs will cover other security best practices such as ESAE, ATA, etc. – stay tuned!

The new way to avoid exposing port 3389 in Azure – Bastion!


Microsoft has released the public preview for Azure Bastion, allowing an additional factor and a separate subnet to be your protection from the hordes of hackers who scan the Internet every day looking for open port 3389 with easy passwords or a vulnerable patch level. And things are simpler for you as well – no more unnecessary PIPs or jump servers to maintain just for desktop access. Of course, many of you are already using PowerShell or Azure Automation, and don't need that desktop, right? Bastion uses the HTTPS connection to Azure to proxy your connectivity through to the specified desktops:

 

The steps are simple, but for more details check out the links at the conclusion. First pick a region where the preview is supported (I used “East US”, otherwise provisioning may fail), set up your VNET, and add both a working subnet and a /27 subnet – the /27 actually has to have the special name “AzureBastionSubnet”:

Let’s also set up your subscription to take advantage of this new preview feature, by entering these in your cloud shell:

Register-AzureRmProviderFeature -FeatureName AllowBastionHost -ProviderNamespace Microsoft.Network
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network
Get-AzureRmProviderFeature -ProviderNamespace Microsoft.Network

Once you see the status “Registered” (it may take a while), then when you create your virtual machine and choose “Azure Bastion” on the Operations blade, it will select everything you need and allow you to create the Bastion, which does use a separate public IP address (PIP):

It will take a few minutes to deploy the resource, so go get a cup of coffee, knowing that you’ve just helped make the world a safer place. When you come back, Azure Bastion will provide you with a web logon form – upon submitting and connecting with your credentials, you’ll see an RDP tab pop open with access to your VM:


In summary, Azure Bastion is a great new way to minimize your threat surface for cloud-hosted IaaS while still providing remote access for manual administrative tasks. To read up more about this preview feature, check out the documentation at https://aka.ms/AboutBastion or https://azure.microsoft.com/en-us/services/azure-bastion/.

And if you need more step-by-step help, here’s a comprehensive guide: https://docs.microsoft.com/en-us/azure/bastion/bastion-create-host-portal

For more advanced users, you can do some special tuning of the NSG’s to provide additional security: https://docs.microsoft.com/en-us/azure/bastion/bastion-nsg

P.S. Just announced, another preview feature (Windows Virtual Desktops) has JUST gone GENERAL AVAILABILITY (GA)!  

Azure AD Best Practice: When to Consider Using a Full SQL Server Instance for Azure AD Connect


By default, Azure AD Connect installs with SQL Express. More specifically, the default is a SQL Server 2012 Express LocalDB (a light version of SQL Server Express).

If you need to manage a higher volume of directory objects, you’ll definitely want to point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the performance of Azure AD Connect. And, if – like a lot of Microsoft customers – the fear of sync failure keeps you up at night, doing this could help you sleep a lot better.

SQL Express has a 10 GB size-limit which also means that there’s very little room to grow above 100,000 objects. If you are even near the 100,000 object limit, make plans to upgrade.

Azure AD Connect supports all versions of Microsoft SQL Server from 2008 R2 (with latest Service Pack) to SQL Server 2019. Microsoft Azure SQL Database, though, is not supported as a database.

Also, keep in mind that you can only have one sync engine per each SQL instance. You can’t use the same SQL Server instance for syncing FIM/MIM, DirSync and Azure AD Sync. Each would need its own SQL Server instance.

Check out how to Move Azure AD Connect database from SQL Server Express to SQL Server.


LAPS Security Concern : Computers joiners are able to see LAPS Password


Here we will discuss a common concern about LAPS: many customers have noticed that people who join computers to the domain can retrieve the LAPS password even though they have not been given permission to do so. Because some organizations allow normal users to join their own machines to the domain, this is considered a security risk. So let's answer two questions here:

Why does this happen?

This happens because, by default, the joiner of the computer gets the Creator Owner privilege, and this privilege grants a set of permissions defined by the defaultSecurityDescriptor on the computer class in the schema. The defaultSecurityDescriptor defines the default security permissions over objects; for more information, see https://docs.microsoft.com/en-us/windows/win32/ad/default-security-descriptor

So how can we check the defaultSecurityDescriptor for the computer class?

1- Open ADSIEdit and connect to the Schema partition.


2- Right-click CN=Computer, choose Properties, then the Attribute Editor tab, and look for defaultSecurityDescriptor.


3- As you can see, it is in Security Descriptor Definition Language (SDDL) format, so to put it into a human-readable form we run the following PowerShell commands:

$defaultSD="D:(A;;RPWPCRCCDCLCLORCWOWDSDDTSW;;;DA)(A;;RPWPCRCCDCLCLORCWOWDSDDTSW;;;AO)(A;;RPWPCRCCDCLCLORCWOWDSDDTSW;;;SY)(A;;RPCRLCLORCSDDT;;;CO)(OA;;WP;4c164200-20c0-11d0-a768-00aa006e0529;;CO)(A;;RPLCLORC;;;AU)(OA;;CR;ab721a53-1e2f-11d0-9819-00aa0040529b;;WD)(A;;CCDC;;;PS)(OA;;CCDC;bf967aa8-0de6-11d0-a285-00aa003049e2;;PO)(OA;;RPWP;bf967a7f-0de6-11d0-a285-00aa003049e2;;CA)(OA;;SW;f3a64788-5306-11d1-a9c5-0000f80367c1;;PS)(OA;;RPWP;77B5B886-944A-11d1-AEBD-0000F80367C1;;PS)(OA;;SW;72e39547-7b18-11d1-adef-00c04fd8d5cd;;PS)(OA;;SW;72e39547-7b18-11d1-adef-00c04fd8d5cd;;CO)(OA;;SW;f3a64788-5306-11d1-a9c5-0000f80367c1;;CO)(OA;;WP;3e0abfd0-126a-11d0-a060-00aa006c33ed;bf967a86-0de6-11d0-a285-00aa003049e2;CO)(OA;;WP;5f202010-79a5-11d0-9020-00c04fc2d4cf;bf967a86-0de6-11d0-a285-00aa003049e2;CO)(OA;;WP;bf967950-0de6-11d0-a285-00aa003049e2;bf967a86-0de6-11d0-a285-00aa003049e2;CO)(OA;;WP;bf967953-0de6-11d0-a285-00aa003049e2;bf967a86-0de6-11d0-a285-00aa003049e2;CO)(OA;;RP;46a9b11d-60ae-405a-b7e8-ff8a58d456d2;;S-1-5-32-560)"
$sec=New-Object System.DirectoryServices.ActiveDirectorySecurity
$sec.SetSecurityDescriptorSddlForm($defaultSD)
$acc=New-Object System.Security.Principal.NTAccount("CREATOR OWNER")
$sec.GetAccessRules($true,$false,[System.Security.Principal.NTAccount]) | Where-Object {$_.IdentityReference -eq $acc}

4- If we check the output, we will see that Creator Owner has the Extended Rights permission, which allows him to read confidential attributes.


This explains why computer joiners can retrieve the LAPS password: by default they have the Creator Owner privilege, which carries the Extended Rights permission that allows them to read confidential attributes of the computer account they joined.

How can we fix this?

Actually, we have two solutions here:

1. First Solution:

Allow only dedicated service accounts, trusted to retrieve the LAPS password, to join computers to the domain, or use a tool like SCCM to deploy the OS and join it to the domain.

Challenge:

Some issues, like a broken secure channel, require the computer to be rejoined to the domain. In such cases it is not practical to do an OSD deployment, as it takes time and the machine of course has user profiles and data. If we dedicate a service account for domain joining we can use it instead, but that may be too much work for the helpdesk, especially if it is a small team.

2. Second Solution:

This is the one I actually prefer, because it has no limitations: we remove the Extended Rights permission from the Creator Owner entry by updating the defaultSecurityDescriptor. The user will still be able to join the computer to the domain, but he will not be able to read the LAPS password. To adjust the defaultSecurityDescriptor and remove the Extended Rights permission from Creator Owner, we simply change (A;;RPCRLCLORCSDDT;;;CO) to (A;;RPLCLORCSDDT;;;CO).
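A hedged PowerShell sketch of what that change could look like (it modifies the schema, so it requires Schema Admins rights and should be tested in a lab first; the string replacement assumes the default Creator Owner ACE shown above is still present):

#Change (A;;RPCRLCLORCSDDT;;;CO) to (A;;RPLCLORCSDDT;;;CO) on the computer class
$schemaNC = (Get-ADRootDSE).schemaNamingContext
$computerClass = "CN=Computer,$schemaNC"
$currentSD = (Get-ADObject -Identity $computerClass -Properties defaultSecurityDescriptor).defaultSecurityDescriptor
$newSD = $currentSD -replace '\(A;;RPCRLCLORCSDDT;;;CO\)','(A;;RPLCLORCSDDT;;;CO)'
Set-ADObject -Identity $computerClass -Replace @{defaultSecurityDescriptor = $newSD}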

As you can see, after updating the defaultSecurityDescriptor and re-running the PowerShell commands, the Extended Rights entry is gone.


Challenge:

We have removed the Extended Rights permission, but the user is still the owner, which by default grants these two permissions:

  • WRITE_DAC permission. This permission gives security principals the ability to change permissions on an object.

  • READ_CONTROL permission. This permission gives security principals the ability to read the permissions that are assigned to an object.

So with the WRITE_DAC permission, the user can change the ACL and elevate his privileges. To address this, starting with Windows Server 2008 we have a security principal called Owner Rights, which can control and adjust the default owner permissions. We can use it to allow the owner to only read the ACL, not write it, by adding the Owner Rights security principal to objects and specifying which permissions are given to the owner of an object.

So how do we do this? I simulated it in my lab: I have a user called DomainJoin that I gave permission to join machines. Now I will try to remove the WRITE_DAC permission and allow him only to read the ACL.

  • Before applying the Owner Rights permission, he had the following privileges; as you can see in the highlighted part, he is able to modify permissions, which is what I need to remove.


  • Now I go to the OU of the joined computers, right-click, Properties, Security tab, then add the Owner Rights principal and give it only Read access.


  • Choose Advanced, then adjust the permissions for Owner Rights as needed, and make sure it applies to “This object and all descendant objects”.



  • Now let's check the DomainJoin user's effective access again – he is no longer able to modify permissions.


So now you have two options to solve this LAPS concern: either assign specific service accounts for domain join, or adjust the defaultSecurityDescriptor and the owner permissions, and you are safe to go.

References:

https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd125370(v=ws.10)?redirectedfrom=MSDN

https://blogs.msdn.microsoft.com/laps/2015/07/17/laps-and-permission-to-join-computer-to-domain/


Azure AD Best Practice: Using Azure AD Connect Standby for Redundancy and Failover


My big focus for Azure at Microsoft is in administration and identity. This includes a lot of heavy Azure AD work. I regularly help customers assess their Azure AD implementations and plans, which puts me in the unique position to hear about customer woes directly.

One of the bigger pain points I hear from customers A LOT – and the thing that keeps them awake at night – is Azure AD Connect and specifically when there’s no discernible plan for backup and failover in the event sync fails or disaster happens.

Obviously, we have some work to do to ensure customers are hearing about Azure AD Connect implementations that supply backup and redundancy, but we do have guidance on this.

As a best practice, consider installing a second Azure AD Connect server, but instead of making it active, install it as a Standby server so that the Azure AD Connect implementation looks like the following:

Standby Server

You put the Azure AD Connect server into Staging Mode during installation as shown in the next screen capture (and use the same process to change a server to standby and back again).

Staging Mode

Installing the Azure AD Connect server in this mode causes it to be active for import and synchronization, but it is prohibited from doing the actual exports that the primary sync server is performing. Essentially, this “backup server” is constantly doing collection of your on-premises Active Directory objects, mirroring what your active sync server is capturing. Doing this, you have a backup copy of your AD objects and should disaster strike, you can take the active sync server offline and quickly enable the backup server to become the master.

Also rest assured, when a server is in standby mode, no exports occur to your on-premises Active Directory, no exports occur to Azure Active Directory, and password synchronization and password write-back are disabled – even if those features are selected during installation. When staging mode is disabled and the backup server becomes the primary, the server immediately starts exporting, enables password sync, and enables password writeback.
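If you ever need to confirm which mode a given server is in, a hedged check using the ADSync module that ships with Azure AD Connect:

#Returns True when the local Azure AD Connect server is in staging (standby) mode
Import-Module ADSync
(Get-ADSyncScheduler).StagingModeEnabled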

Also, keep in mind that if the server is left in staging mode for an extended period of time, it can take a while for the server to synchronize all password changes that had occurred during the time period. 

Additionally, for even better protection and failover, consider putting the Primary and Standby servers in different data centers if that option is available.

For help setting up and configuring a Standby server, see: Azure AD Connect: Staging server and disaster recovery

System Center Service Manager: Working with FIPS and Report Server


When you browse the Report Manager URL, you get an HTTP 500 error or a blank page (if you have disabled friendly HTTP error messages) in the browser window. When you check the Reporting Services log files, you will find the below error being logged:

ERROR: System.Web.HttpException: Error executing child request for Error.aspx. —> System.Web.HttpUnhandledException: Exception of type ‘System.Web.HttpUnhandledException’ was thrown. —> System.InvalidOperationException: This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.

Cause:

This is happening because FIPS is enabled on the Reporting Services server and Report Manager does not support the Local Security Policy “System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing”.

To ascertain that FIPS is enabled you can:

(1)    Check the registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\fipsalgorithmpolicy

If FIPS is enabled, its value will be set to 1 (a PowerShell check is sketched after this list).

(2)    Or else, go to Local Security Policy (Start -> Run -> secpol.msc), then go to “Security Settings -> Local Policies -> Security Options”. On the right side, look for the policy “System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing” and check whether it is Enabled.
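For the registry check in step (1), a hedged PowerShell sketch (depending on the OS version, the setting may live either as a value directly under Lsa or under the FipsAlgorithmPolicy subkey):

#Newer systems: Enabled value under the FipsAlgorithmPolicy subkey
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy' -Name Enabled -ErrorAction SilentlyContinue

#Older systems: fipsalgorithmpolicy value directly under Lsa
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' -Name fipsalgorithmpolicy -ErrorAction SilentlyContinue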

How to resolve the issue?

If you do not need FIPS, go ahead and change the above-mentioned registry value from 1 to 0, or change the local security policy from Enabled to Disabled.

If you cannot disable FIPS, the following is a way to work around it. With reference to https://support.microsoft.com/en-us/kb/911722, to get around this issue you would have to edit Report Manager's web.config file as explained below.

File to be edited:

<system-drive>\Program Files\Microsoft SQL Server\MSRS<version>.<instance>\Reporting Services\ReportManager\Web.config

What to do?

(1)    In the Web.config file, locate the <system.web> section.

(2)    Add the following <machineKey> element inside the <system.web> section:

<machineKey validationKey="AutoGenerate,IsolateApps" decryptionKey="AutoGenerate,IsolateApps" validation="3DES" decryption="3DES"/>

(3)    Save the Web.config file.

Once the file has been changed, you would have to restart Reporting Services service for the change to become effective.

Recommendation: Take a backup of the web.config file prior to making the change.

AGPM: The case of the missing GPT.ini file – a possible workaround


Hey everyone, Theron (aka T-) here, Senior Consultant with Microsoft Consulting Services (MCS) specializing in Active Directory, amongst other technologies, including Advanced Group Policy Manager (AGPM).

Have you ever deployed a GPO via AGPM only to experience either of these two situations?

  • EventID 1058 (GroupPolicy) in a client’s System log

or

  • The following message when using ‘gpupdate’ on a client:

GPUpdate message when gpt.ini is missing – Windows Server 2016

The actual details included in the message returned by ‘gpupdate’ will differ depending on the version of Windows you’re using.

So, what’s the message telling us? Well, it’s pretty self-explanatory…the gpt.ini file located in \\<domain.fqdn>\SysVol\<domain.fqdn>\Policies\{7F2C98CE-3BEE-4CDB-A815-DEF1E2897706}\ is missing. {7F2C98CE-3BEE-4CDB-A815-DEF1E2897706} is the GUID of the GPO in question, so it will obviously differ in each situation.

Now, what happened? Well, that’s the tricky part and I have yet to find an actual cause to the situation. From my research and discussion internally with colleagues at Microsoft, no one else has either. Frustrating, right? Fear not, we may have found a viable work around to prevent it.

In case you didn’t know or think about it, simply re-deploying the policy in question via AGPM usually solves the current GPO’s file issue.

Workaround that may work for you:

I currently support a customer who is dealing with this issue just about every time they deploy a policy. It had gotten to the point that each time a deployment was executed, the person deploying the policy would have to check the GPO folder in SYSVOL to make sure the gpt.ini file was there. Doesn’t sound very efficient, does it? Yeah, I agree.

During some “let’s throw darts at the wall and see what sticks” troubleshooting of this problem, we decided to create an AD DS Site containing one domain controller and put the AGPM server into that site. Basically, we created an AD DS Subnet with one IP address (/32), that of the AGPM server, and assigned it to the newly created site. The thought process was to eliminate the use of any additional domain controller in the original Site the AGPM server was a member of; there were four. The next thing we did was ensure the GPMC being used for deployments during our testing was using that domain controller.

Well, wouldn’t you know it, the issue hasn’t occurred since. Each subsequent deployment of policies has yielded the expected results: the GPO works and there are no issues on the clients! We’re still evaluating and monitoring the situation and yes, SYSVOL is still being checked after each deployment. Once we’re confident the issue is gone, hopefully that won’t have to happen.

The next step on our list of things to do is to move the FSMO roles, more importantly the PDCe to the domain controller for the AGPM Site. Since GPMC defaults to the PDCe, unless changed, by moving it to the AGPM Site, each time a policy is deployed, the domain controller in the AGPM Site will be used. For those of you that don’t know, the AGPM server will randomly pick a domain controller in its AD DS Site when you’re managing policies vs. using the domain controller your GPMC is using. Weird, huh?

Well, that’s all for now. If we have any further development with our testing, positive or negative, I’ll make sure to provide an update.

Roll Tide!

T-

AD: Discover what you’ve got


Hey everyone, Theron (aka T-) here, Senior Consultant with Microsoft Consulting Services (MCS) specializing in Active Directory.

I wrote a really basic script that will scour your domain and return some valuable information regarding its configuration. There are probably several things in the script that could be done differently and if I was to go through it again, I’d probably change them, but this was quickly thrown together over a year ago for me to fulfill a customer’s request.

The script is written in PowerShell and located here.

It performs the following (a hedged sketch of a few of these queries appears after the list):

    – Writes outputs to the console.
        – Also creates a transcript output in your Documents folder.
    – Gets forest and domain information.
    – Gets forest and domain functional levels.
    – Gets domain creation date.
    – Gets FSMO role holders.
    – Gets AD schema version.
    – Gets tombstone lifetime.
    – Gets domain password policy.
    – Gets AD backup information.
    – Checks to see if AD Recycle Bin is enabled.
    – Gets AD Sites and Subnets.
    – Gets AD Site replication links.
    – Gets AD trust information.
    – Gets users and groups information.
        – Number of users
        – Number of groups
        – Inactive accounts based on 30, 60, 90 days.
    – Lists OUs with blocked inheritance.
    – Lists unlinked GPOs.
    – Lists duplicate SPNs.

Enjoy.

Roll Tide!

T-
