
MD5 Signature Hash Deprecation and Your Infrastructure


Hi everyone, David here with a quick announcement.

Yesterday, MSRC announced a timeframe for deprecation of built-in support for certificates that use the MD5 signature hash. You can find more information here:

http://blogs.technet.com/b/srd/archive/2013/08/13/cryptographic-improvements-in-microsoft-windows.aspx

Along with this announcement, we've released a framework which allows enterprises to test their environment for certificates that might be blocked as part of the upcoming changes (Microsoft Security Advisory 2862966). This framework also allows future deprecation of other weak cryptographic algorithms to be streamlined and managed via registry updates (pushed via Windows Update).

 

Some Technical Specifics:

This change affects certificates that are used for the following:

  • server authentication
  • code signing
  • time stamping

Other certificate usages that use the MD5 signature hash algorithm will NOT be blocked.

For code signing certificates, we will allow signed binaries that were signed before March 2009 to continue to work, even if the signing cert used MD5 signature hash algorithm.

Note:  Only certificates issued under a root CA in the Microsoft Root Certificate program are affected by this change.  Enterprise issued certificates are not affected (but should still be updated).

 

What this means for you:

1) If you're using certificates that have an MD5 signature hash (for example, if you have older web server certificates that used this hashing algorithm), you will need to update those certificates as soon as possible.  The update is planned to release in February 2014; make sure anything you have that is internet facing has been updated by then.

You can find out what signature hash was used on a certificate by pulling up that certificate's details on any Windows 8 or Windows Server 2012 machine and looking for the signature hash algorithm that was used. (The certificate in my screenshot uses sha1, but you will see md5 listed on certificates that use it.)

If you are on Server Core or have an older OS, you can see the signature hash algorithm by using certutil -v against the certificate.
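If you'd rather script the check, a quick PowerShell sketch is below; it lists the signature algorithm for every certificate in the local machine Personal store (adjust the store path to suit your situation):

Get-ChildItem Cert:\LocalMachine\My |
    Select-Object Subject, @{Name='SignatureAlgorithm';Expression={$_.SignatureAlgorithm.FriendlyName}}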

2) Start double-checking your internal applications and certificates to ensure that you don't have something older that's using an MD5 hash.  If you find one, update it (or contact the vendor to have it updated).

3) Deploy KB 2862966 in your test and QA environments and use it to test for weaker hashes (You are using test and QA environments for your major applications, right?).  The update allows you to implement logging to see what would be affected by restricting a hash.  It's designed to allow you to get ahead of the curve and find the potential weak spots in your environment.

Sometimes security announcements like this can seem a little like overkill, but remember that your certificates are only as strong as the hashing algorithm used to sign them.  As computing power increases, older hashing algorithms become easier for attackers to crack, allowing them to more easily fool computers and applications into granting access or executing code.  We don't release updates like this lightly, so take the time to inspect your environments and fix the weak links before some attacker out there tries to use them against you.

--David "Security is everyone's business" Beach


Important Announcement: AD FS 2.0 and MS13-066


Hi everyone, Adam and JR here with an important announcement.

We’re tracking an important issue in support where some customers who have installed security update MS13-066 on their AD FS 2.0 servers are experiencing authentication outages.  This is due to a dependency within the security update on certain versions of the AD FS 2.0 binaries.  Customers who were already running AD FS 2.0 Update Rollup 3 before installing the update should not experience any issues.

We have temporarily suspended further downloads of this security update until we have resolved this issue for all AD FS 2.0 customers.

Our Security and AD FS product teams are working together to resolve this with the highest priority.  We’ll have more news for you soon in a follow-up post.  In the meantime, here is what we can tell you right now.

 

What to Watch For

If you have installed KB 2843638 or KB 2843639 on your AD FS server, you may notice the following symptoms:

  1. Federated sign-in fails for clients.
  2. Event ID 111 in the AD FS 2.0/Admin event log:

The Federation Service encountered an error while processing the WS-Trust request. 

Request type: http://schemas.xmlsoap.org/ws/2005/02/trust/RST/Issue 

Additional Data 

Exception details: 

System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.TypeLoadException: Could not load
type ‘Microsoft.IdentityModel.Protocols.XmlSignature.AsymmetricSignatureOperatorsDelegate' from assembly 'Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.


   at Microsoft.IdentityServer.Service.SecurityTokenService.MSISSecurityTokenService..ctor(SecurityTokenServiceConfiguration securityTokenServiceConfiguration)

   --- End of inner exception stack trace ---

   at System.RuntimeMethodHandle._InvokeConstructor(Object[] args, SignatureStruct& signature, IntPtr declaringType)

   at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)

   at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture, Object[] activationAttributes)

   at Microsoft.IdentityModel.Configuration.SecurityTokenServiceConfiguration.CreateSecurityTokenService()

   at Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract.CreateSTS()

   at Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract.CreateDispatchContext(Message requestMessage, String requestAction, String responseAction, String
trustNamespace, WSTrustRequestSerializer requestSerializer, WSTrustResponseSerializer responseSerializer, WSTrustSerializationContext serializationContext)

   at Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract.BeginProcessCore(Message requestMessage, WSTrustRequestSerializer requestSerializer, WSTrustResponseSerializer responseSerializer, String requestAction, String responseAction, String trustNamespace, AsyncCallback callback, Object state)

System.TypeLoadException: Could not load type 'Microsoft.IdentityModel.Protocols.XmlSignature.AsymmetricSignatureOperatorsDelegate' from assembly 'Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

   at Microsoft.IdentityServer.Service.SecurityTokenService.MSISSecurityTokenService..ctor(SecurityTokenServiceConfiguration securityTokenServiceConfiguration)

 

What to do if the problem occurs:

  1. Uninstall the hotfixes from your AD FS servers (see the example command below).
  2. Reboot any system where the hotfixes were removed.
  3. Check back here for further updates.
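If you prefer the command line, a hedged example of removing one of the updates on Windows Server 2008 R2 (substitute whichever KB you installed) is below; you can also remove them from Programs and Features > View installed updates:

wusa /uninstall /kb:2843638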

We’ll update this blog post with more information as it becomes available, including links to any followup posts about this problem.

Locked or not? Demystifying the UI behavior for account lockouts


Hello Everyone,

This is Shijo from our team in Bangalore once again.  Today I’d like to briefly discuss account lockouts, and some UI behaviors that can trip admins up when dealing with account lockouts.

If you’ve ever had to troubleshoot an account lockout issue, you might have noticed that sometimes accounts appear to be locked on some domain controllers, but not on others.  This can be very confusing since you
typically know that the account has been locked out, but when you inspect individual DCs, they don’t reflect that status.  This inconsistency happens because of some minor differences in the behavior of the UI between Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012.
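If you want to see how each DC currently views the account, a quick PowerShell sketch (assuming the ActiveDirectory module and a user named test) might look like this:

foreach ($dc in Get-ADDomainController -Filter *) {
    $user = Get-ADUser -Identity test -Server $dc.HostName -Properties LockedOut
    "{0}: LockedOut = {1}" -f $dc.HostName, $user.LockedOut
}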

Windows Server 2003

In Windows Server 2003 the "Account is locked out" checkbox can be cleared ONLY if the account is locked out on the domain controller you are connected to. This means that if an account has been locked out, but the local DC has not yet replicated that information, you CANNOT unlock the account on the local DC.

Windows 2003 account properties for an unlocked account.  Note that the checkbox is grayed out.

 

Windows Server 2008 and Windows Server 2008 R2

In Windows Server 2008/2008 R2 the "Unlock account" checkbox will always be available (regardless of the status of the account). You can tell whether the local DC knows if the account is locked out by looking at the label on the checkbox as shown in the screenshots below:

Windows 2008 account properties showing the “Unlock Account” checkbox.  Notice that the checkbox is available regardless of the status of the account on the local DC.

 

Windows 2008 (and higher) Account Properties dialog box showing locked account on this domain controller

 

If the label on the checkbox is just "Unlock account", then the domain controller you are connected to recognizes the account as unlocked. This does NOT mean that the account is not locked on other DCs, just that the specific DC we're working with has not replicated a lockout status yet.  However, unlike Windows Server 2003, if the local DC doesn’t realize that the account is locked, you DO have the ability to unlock it from this interface by checking the checkbox and applying the change.

We changed the UI behavior in Windows Server 2008 to help administrators in large environments unlock accounts faster when required, instead of having to wait for replication to occur, then unlock the account, and then wait for replication to occur again.

 

Windows Server 2012

You can also unlock accounts using the Active Directory Administrative Center (available in Windows Server 2008 R2 and later).  In Windows Server 2012, this console is the preferred method of managing accounts in Active Directory. The screenshots below show how to do that.

You can see from the account screenshot that the account is locked, which is denoted by the padlock symbol. To unlock the account, click "Unlock account" and you will see the symbol change, as shown below.

 

You can also unlock the account using the PowerShell command shown in the screenshot below.

 

In this example, I have unlocked the user account named test by simply specifying the DN of the account to unlock. You can modify your PowerShell command to incorporate many more switches, the details of which are covered in the following article.

http://technet.microsoft.com/en-us/library/ee617234.aspx
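A sketch of that command, assuming a user named test in the Users container of contoso.com, might look like this:

Unlock-ADAccount -Identity "CN=test,CN=Users,DC=contoso,DC=com"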

Hopefully this helps explain why the older operating systems behave slightly differently from the newer ones, and will help you the next time you have to deal with an account that is locked out in your environment! 

 

If you’re looking for more information on Account Lockouts, check out the following links:

Troubleshooting Account Lockout

Account Lockout Policy

Account Lockout Management Tools

 

Shijo “UNLOCK IT” Joy

Windows Server 2012 Shell game


Here's the scenario: you just downloaded the RTM ISO for Windows Server 2012 using your handy, dandy, "wondermus" Microsoft TechNet subscription. Using Hyper-V, you create a new virtual machine, mount the ISO, and breeze through the setup screens until you are mesmerized by the Newton's cradle-like experience of the circular progress indicator.


Click…click…click…click-- installation complete; the computer reboots.

You provide Windows Server with a new administrator password. Bam: done! Windows Server 2012 presents the credential provider screen and you logon using the newly created administrator account, and then…

Holy Shell, Batman! I don't have a desktop!


Hey everyone, Mike here again to bestow some Windows Server 2012 lovin'. The previously described scenario is not hypothetical-- many have experienced it when they installed the pre-release versions of Windows Server 2012. And it is likely to resurface as we move past Windows Server 2012 general availability on September 4. If you are new to Windows Server 2012, then you're likely one of those people staring at a command prompt window on your fresh installation. The reason you are staring at a command prompt is that Windows Server 2012's installation defaults to Server Core, and in your haste to try out our latest bits, you breezed right past the option to change it.

This may be old news for some of you, but it is likely that one or more of your colleagues is going to perform the very actions that I describe here. This is actually a fortunate circumstance as it enables me to introduce a new Windows Server 2012 feature.


There were two server installation types prior to Windows Server 2012: full and core. Core servers provide a low attack surface by removing the Windows Shell and Internet Explorer completely. However, they presented quite a challenge for many Windows administrators, as Windows PowerShell and command-line utilities were the only ways to manage the server and its roles locally (you could use most management consoles remotely).

Those same two server installation types return in Windows Server 2012; however, we have added a third installation type: Minimal Server Interface. Minimal Server Interface enables most local graphical user interface management tasks without requiring you to install the server's user interface or Internet Explorer. Minimal Server Interface is a full installation of Windows that excludes:

  • Internet Explorer
  • The Desktop
  • Windows Explorer
  • Windows 8-style application support
  • Multimedia support
  • Desktop Experience

Minimal Server Interface gives Windows administrators who are not comfortable using Windows PowerShell as their only option the benefits of a reduced attack surface and fewer reboots (on Patch Tuesday, for example), while retaining GUI management as they ramp up their Windows PowerShell skills.


"Okay, Minimal Server Interface seems cool Mike, but I'm stuck at the command prompt and I want graphical tools. Now what?" If you were running an earlier version of Windows Server, my answer would be reinstall. However, you're running Windows Server 2012; therefore, my answer is "Install the Server Graphical Shell or Install Minimal Server Interface."

Windows Server 2012 enables you to change the shell installation option after you've completed the installation. This solves the problem if you are staring at a command prompt. However, it also solves the problem if you want to keep your attack surface low but are simply a Windows PowerShell guru in waiting. You can choose Minimal Server Interface, or you can decide to add the Server Graphical Shell for a specific task and then remove it when you have completed that management task (understand, however, that switching the shell on or off requires you to restart the server).

Another scenario solved by the ability to add the Server Graphical Shell is that not all server-based applications work correctly on Server Core, or you cannot manage them on Server Core. Windows Server 2012 enables you to try the application on Minimal Server Interface, and if that does not work, you can change the server installation to include the Graphical Shell, which is the equivalent of the Server GUI installation option during setup (the one you breezed by initially).

Removing the Server Graphical Shell and Graphical Management Tools and Infrastructure

Removing the Server shell from a GUI installation of Windows is amazingly easy. Start Server Manager, click Manage, and click Remove Roles and Features. Select the target server and then click Features. Expand User Interfaces and Infrastructure.

To reduce a Windows Server 2012 GUI installation to a Minimal Server Interface installation, clear the Server Graphical Shell checkbox and complete the wizard. To reduce a Windows Server GUI installation to a Server Core installation, clear the Server Graphical Shell and Graphical Management Tools and Infrastructure check boxes and complete the wizard.


Alternatively, you can perform these same actions using the Server Manager module for Windows PowerShell, and it is probably a good idea to learn how to do this. I'll give you two reasons why: It's wicked fast to install and remove features and roles using Windows PowerShell and you need to learn it in order to add the Server Shell on a Windows Core or Minimal Server Interface installation.

Use the following command to view a list of the Server GUI components


Get-WindowsFeature server-gui*

Give your attention to the Name column. You use this value with the Remove-WindowsFeature and Install-WindowsFeature PowerShell cmdlets.

To remove the server graphical shell, which reduces the GUI server installation to a Minimal Server Interface installation, run:

Remove-WindowsFeature Server-Gui-Shell

To remove the Graphical Management Tools and Infrastructure, which further reduces a Minimal Server Interface installation to a Server Core installation, run:

Remove-WindowsFeature Server-Gui-Mgmt-Infra

To remove the Graphical Management Tools and Infrastructure and the Server Graphical Shell, run:

Remove-WindowsFeature Server-Gui-Shell,Server-Gui-Mgmt-Infra

Adding Server Graphical Shell and Graphical Management Tools and Infrastructure

Adding Server Shell components to a Windows Server 2012 Core installation is a tad more involved than removing them. The first thing to understand with a Server Core installation is that the actual binaries for the Server Shell do not reside on the computer. This is how a Server Core installation achieves a smaller footprint. You can determine whether the binaries are present by using the Get-WindowsFeature cmdlet and viewing the Install State column. The Removed value indicates the binaries that represent the feature do not reside on the hard drive. Therefore, you need to add the binaries to the installation before you can install the feature. Another indicator that the binaries do not exist in the installation is the error you receive when you try to install a feature that is removed. The Install-WindowsFeature cmdlet will proceed along as if it is working and then spend a lot of time around 63-68 percent before returning an error stating that it could not add the feature.


To stage Server Shell features to a Windows Core Installation

You need to get out your handy, dandy media (or ISO) to stage the binaries into the installation. Windows installation files are stored in WIM files that are located in the \sources folder of your media. There are two .WIM files on the media. The WIM you want to use for this process is INSTALL.WIM.


You use DISM.EXE to display the installation images and their indexes that are included in the WIM file. There are four images in the INSTALL.WIM file. Images with indexes 1 and 3 are Server Core installation images for Standard and Datacenter, respectively. Images with indexes 2 and 4 are GUI installations of Standard and Datacenter, respectively. Two of these images contain the GUI binaries and two do not. To stage the binaries to the current installation, you need to use index 2 or 4 because these images contain the Server GUI binaries. An attempt to stage the binaries using index 1 or 3 will fail.
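Assuming your installation media is mounted as drive D:, listing the images and their indexes might look like this:

Dism /Get-WimInfo /WimFile:D:\sources\install.wim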

You still use the Install-WindowsFeature cmdlet to stage the binaries to the computer; however, we are going to use the -Source argument to tell Install-WindowsFeature which image and index it should use to stage the Server Shell binaries. To do this, we use a special path syntax that indicates the binaries reside in a WIM file. The Windows PowerShell command should look like this:

Install-WindowsFeature server-gui-mgmt-infra,server-gui-shell -source:wim:d:\sources\install.wim:4

Pay particular attention to the path supplied to the -Source argument. You need to prefix the path to your installation media's install.wim file with the keyword wim: and suffix the path with :4, which represents the image index to use for the installation. You must always use an index of 2 or 4 to install the Server Shell components. The command should exhibit the same behavior as the previous one and proceed up to about 68 percent, at which point it will stay at 68 percent for quite a bit (if it is working). Typically, if there is a problem with the syntax or the command, it will error out within two minutes of spinning at 68 percent. This process stages all the graphical user interface binaries that were not installed during the initial setup, so give it a bit of time. When the command completes successfully, it should instruct you to restart the server. You can do this from Windows PowerShell with the Restart-Computer cmdlet.


Give the next reboot more time. It is actually updating the current Windows installation, making all the other components aware the GUI is available. The server should reboot and inform you that it is configuring Windows features and is likely to spend some time at 15 percent. Be patient and give it time to complete. Windows should reach about 30 percent and then will restart.


It should return to the Configuring Windows feature screen with the progress around 45 to 50 percent (these are estimates). The process should continue until 100 percent and then should show you the Press Ctrl+Alt+Delete to sign in screen


Done

That's it. Consider yourself informed. The next time one of your colleagues gazes at their accidental Windows Server 2012 Server Core installation with that deer-in-the-headlights look, you can whip out your mad Windows PowerShell skills and turn that Server Core installation into a Minimal Server Interface or Server GUI installation in no time.

Mike

"Voilà! In view, a humble vaudevillian veteran, cast vicariously as both victim and villain by the vicissitudes of Fate. This visage, no mere veneer of vanity, is a vestige of the vox populi, now vacant, vanished. However, this valorous visitation of a by-gone vexation, stands vivified and has vowed to vanquish these venal and virulent vermin van-guarding vice and vouchsafing the violently vicious and voracious violation of volition. The only verdict is vengeance; a vendetta, held as a votive, not in vain, for the value and veracity of such shall one day vindicate the vigilant and the virtuous. Verily, this vichyssoise of verbiage veers most verbose, so let me simply add that it's my very good honor to meet you and you may call me V."

Stephens

AD FS 2.0 RelayState


Hi guys, Joji Oshima here again with some great news! AD FS 2.0 Rollup 2 adds the capability to send RelayState when using IDP initiated sign on. I imagine some people are ecstatic to hear this while others are asking “What is this and why should I care?”

What is RelayState and why should I care?

There are two protocol standards for federation (SAML and WS-Federation). RelayState is a parameter of the SAML protocol that is used to identify the specific resource the user will access after they are signed in and directed to the relying party’s federation server.
Note:

If the relying party is the application itself, you can use the loginToRp parameter instead.
Example:
https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx?loginToRp=rpidentifier

Without the use of any parameters, a user would need to go to the IDP initiated sign on page, log in to the server, choose the relying party, and then be directed to the application. Using RelayState can automate this process by generating a single URL for the user to click and be logged in to the target application without any intervention. It should be noted that when using RelayState, any parameters outside of it will be dropped.

When can I use RelayState?

We can pass RelayState when working with a relying party that has a SAML endpoint. It does not work when the direct relying party is using WS-Federation.

The following IDP initiated flows are supported when using Rollup 2 for AD FS 2.0:

  • Identity provider security token server (STS) -> relying party STS (configured as a SAML-P endpoint) -> SAML relying party App
  • Identity provider STS -> relying party STS (configured as a SAML-P endpoint) -> WIF (WS-Fed) relying party App
  • Identity provider STS -> SAML relying party App

The following initiated flow is not supported:

  • Identity provider STS -> WIF (WS-Fed) relying party App

Manually Generating the RelayState URL

There are two pieces of information you need to generate the RelayState URL. The first is the relying party’s identifier. This can be found in the AD FS 2.0 Management Console. View the Identifiers tab on the relying party’s property page.


The second part is the actual RelayState value that you wish to send to the Relying Party. It could be the identifier of the application, but the administrator for the Relying Party should have this information. In this example, we will use the Relying Party identifier of https://sso.adatum.com and the RelayState of https://webapp.adatum.com

Starting values:
RPID: https://sso.adatum.com
RelayState: https://webapp.adatum.com

Step 1: The first step is to URL Encode each value.

RPID: https%3a%2f%2fsso.adatum.com
RelayState: https%3a%2f%2fwebapp.adatum.com

Step 2: The second step is to take these URL encoded values, merge them into the string below, and URL encode the resulting string.

String:
RPID=<URL encoded RPID>&RelayState=<URL encoded RelayState>

String with values:
RPID=https%3a%2f%2fsso.adatum.com&RelayState=https%3a%2f%2fwebapp.adatum.com

URL Encoded string:
RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

Step 3: The third step is to take the URL Encoded string and add it to the end of the string below.

String:
?RelayState=

String with value:
?RelayState=RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

Step 4: The final step is to take the final string and append it to the IDP initiated sign on URL.

IDP initiated sign on URL:
https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx

Final URL:
https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx?RelayState=RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

The result is an IDP initiated sign on URL that tells AD FS which relying party STS the login is for, and also gives that relying party information that it can use to direct the user to the correct application.
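If you'd rather script these steps than walk through them by hand, a small Windows PowerShell sketch using the same example values might look like this (this is just an illustration, not the generator tool mentioned below):

Add-Type -AssemblyName System.Web

$signOnUrl  = 'https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx'
$rpId       = 'https://sso.adatum.com'
$relayState = 'https://webapp.adatum.com'

# Step 1: URL encode each value
$encodedRpId  = [System.Web.HttpUtility]::UrlEncode($rpId)
$encodedRelay = [System.Web.HttpUtility]::UrlEncode($relayState)

# Step 2: merge them into the RPID/RelayState string and URL encode the whole thing again
$encodedPayload = [System.Web.HttpUtility]::UrlEncode("RPID=$encodedRpId&RelayState=$encodedRelay")

# Steps 3 and 4: append the result to the IDP initiated sign on URL
"{0}?RelayState={1}" -f $signOnUrl, $encodedPayload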


Is there an easier way?

The multi-step process and manual manipulation of the strings are prone to human error which can cause confusion and frustration. Using a simple HTML file, we can fill out the starting information into a form and click the Generate URL button.


The code sample for this HTML file has been posted to CodePlex.

Conclusion and Links

I hope this post has helped demystify RelayState and will have everyone up and running quickly.

AD FS 2.0 RelayState Generator
http://social.technet.microsoft.com/wiki/contents/articles/13172.ad-fs-2-0-relaystate-generator.aspx
HTML Download
https://adfsrelaystate.codeplex.com/

AD FS 2.0 Rollup 2
http://support.microsoft.com/kb/2681584

Supporting Identity Provider Initiated RelayState
http://technet.microsoft.com/en-us/library/jj127245(WS.10).aspx

Joji "Halt! Who goes there!" Oshima

So long and thanks for all the fish


My time is up.

It’s been eight years since a friend suggested I join him on a contract at Microsoft Support (thanks Pete). Eight years since I sat sweating in an interview with Steve Taylor, trying desperately to recall the KDC’s listening port (his hint: “German anti-tank gun”). Eight years since I joined 35 new colleagues in a training room and found that despite my opinion, I knew nothing about Active Directory (“Replication of Absent Linked Object References– what the hell have I gotten myself into?”).

Eight years later, I’m a Senior Support Escalation Engineer, a blogger of some repute, and a seasoned world traveler who instructs other ‘softies about Windows releases. I’ve created thousands of pages of content and been involved in countless support cases and customer conversations. I am the last of those 35 colleagues still here, but there is proof of my existence even so. It’s been the most satisfactory work of my career.

Just the thought of leaving was scary enough to give me pause – it’s been so long since I knew anything but supporting Windows. It’s a once in a lifetime opportunity though and sometimes you need to reset your career. Now I’ll help create the next generations of Windows Server and the buck will finally stop with me: I’ve been hired as a Program Manager and am on my way to Seattle next week. I’m not leaving Microsoft, just starting a new phase. A phase with a lot more product development, design responsibility, and… meetings. Soooo many meetings.

There are two types of folks I am going to miss: the first are workmates. Many are support engineers, but also PFEs, Consultants, and TAMs. Even foreigners! Interesting and funny people fill Premier and Commercial Technical Support and make every day here enjoyable, even after the occasional customer assault. There’s nothing like a work environment where you really like your colleagues. I’ve sat next to Dave Fisher since 2004 and he’s made me laugh every single day. He is a brilliant weirdo, like so many other great people here. You all know who you are.

The other folks are… you. Your comments stayed thought provoking and fresh for five years and 700 posts. Your emails kept me knee deep in mail sacks and articles (I had to learn in order to answer many of them). Your readership has made AskDS into one of the most popular blogs in Microsoft. You unknowingly played an immense part in my career, forcing me to improve my communication; there’s nothing like a few hundred thousand readers to make you learn your craft.

My time as the so-called “editor in chief” of AskDS is over, but I imagine you will still find me on the Internet in my new role, yammering about things that I think you’ll find interesting. I also have a few posts in the chamber that Jonathan or Mike will unload after I’m gone, and they will keep the site going. AskDS will continue to be a place for unvarnished support information about Windows technologies, where your questions will get answers.

Thanks for everything, and see you again soon.

We are looking forward to Seattle’s famous mud puddles

 

- Ned “42” Pyle

Digging a little deeper into Windows 8 Primary Computer


[This is a ghost of Ned past article – Editor]

Hi folks, Ned here again to talk more about the Primary Computer feature introduced in Windows 8. Sharp-eyed readers may have noticed this lonely beta blog post, and if you just want a step-by-step guide to enabling this feature, TechNet does it best. Today I am going to fill in some blanks and make sure the feature's architecture and usefulness is clear. At least, I'm going to try.

Onward!

Backgrounder and Requirements

Businesses using Roaming User Profiles, Offline Files and Folder Redirection have historically been limited in controlling which computers cache user data. For instance, while there are group policies to assign roaming profiles on a per-computer basis, they affect all users of that computer and are useless if you assign roaming profiles through legacy user attributes.

Windows 8 introduces a pair of new per-user AD DS attributes to specify a "primary computer." The primary computer is the one directly assigned to a user - such as their laptop, or a desktop in their cubicle - and therefore unlikely to change frequently. We refer to this as "User-Device Affinity". That computer will allow them to store roaming user data or access redirected folder data, as well as allow caching of redirected data through offline files. There are three main benefits to using Primary Computer:

  1. When a user is at a kiosk, using a conference room PC, or connecting to the network from a home computer, there is no risk that confidential user data will cache locally and be accessible offline. This adds a measure of security.
  2. Unlike previous operating systems, an administrator now has the ability to control computers that will not cache data, regardless of the user's AD DS profile configuration settings.
  3. The initial download of a profile has a noticeable impact on logon performance; a brand new Windows 8 user profile is ~68MB in size, and that's before it's filled with "Winter is coming" meme pics. Since the roaming profile and redirected folders are no longer synchronously cached on a non-primary computer during logon, a user connecting from a temporary or home machine logs on considerably faster.

By assigning computer(s) to a user then applying some group policies, you ensure data only roams or caches where you want it.


Yoink, stolen screenshot from a much better artist

Primary Computer has the following requirements:

  • Windows 8 or Windows Server 2012 computers used for interactive logon
  • Windows Server 2012 AD DS Schema (but not necessarily Win2012 DCs)
  • Group Policy managed from Windows 8 or Windows Server 2012 GPMC
  • Some mechanism to determine each user's primary computer(s)

Determining Primary Computers

There is no attribute in Active Directory that tracks which computers a user logs on to, much less the computers they log on to the most frequently. There are a number of out of band options to determine computer usage though:

  • System Center Configuration Manager - SCCM has built in functionality to determine the primary users of computers, as part of its "Asset Intelligence" reporting. You can read more about this feature in SCCM 2012 and 2007 R2. This is the recommended method as it's the most comprehensive and because I like money.
  • Collecting 4624 events - the Security event log Logon Event 4624 with a Logon Type 2 delineates where a user logged on interactively. By collecting these events using some type of audit collection service or event forwarding, you can build up a picture of which users are logging on to which computers repeatedly.

     

     

  • Logon Script– If you're the fancy type, you can create a logon script that writes a user's computer to a centralized location, such as on their own AD object. If you grant inherited access for SELF to update (for instance) the Comment attribute on all the user objects, each user could use that attribute as storage. Then you can collect the results for a few weeks and create a list of computer usage by user.

    For example, this rather hokey illustration VBS runs as a logon script and updates a user's own Comment attribute with their computer's distinguished name, only if it has changed from the previous value:

    ' Bind to the current user and computer via ADSystemInfo
    Set objSysInfo = CreateObject("ADSystemInfo")
    Set objUser = GetObject("LDAP://" & objSysInfo.UserName)
    Set objComputer = GetObject("LDAP://" & objSysInfo.ComputerName)

    ' If the user's Comment attribute already holds this computer's DN, there is nothing to do
    strMessage = objComputer.distinguishedName
    If objUser.Comment = strMessage Then WScript.Quit

    ' Otherwise, record the computer's DN in the user's Comment attribute
    objUser.Comment = strMessage
    objUser.SetInfo

    

A user may have more than one computer they logon to regularly though and if that's the case, an AD attribute-based storage solution is probably not the right answer unless the script builds a circular list with a restricted number of entries and logic to ensure it does not update with redundant data. Otherwise, there could be excessive AD replication. Remember, this is just a simple example to get the creative juices flowing.

  • PsLoggedOn - you can script and run PsLoggedOn.exe (a Windows Sysinternals tool) periodically during the day for all computers over the course of several weeks. That would build, over time, a list of which users frequent which computers. This requires remote registry access through the Windows Firewall.
  • Third parties - there are SCCM/SCOM-like vendors providing this functionality. I don't have details but I'm sure they have a salesman who wants a new German sports sedan and will be happy to bend your ear.

Setting the Primary Computer

As I mentioned before, look at TechNet for some DSAC step-by-step for setting the msDS-PrimaryComputer attribute and the necessary group policies. However, if you want to use native Windows PowerShell instead of our interesting out of band module, here are some more juice-flow inducing samples.

The Get-ADComputer and Set-ADUser cmdlets in the ActiveDirectory Windows PowerShell module allow you to easily retrieve a computer's distinguished name and assign it to the user's primary computer attribute. You can use an assigned variable for readability, or nest the commands for brevity.

Variable

$variable = Get-ADComputer <computer name>

Set-ADUser <user name> -Add @{'msDS-PrimaryComputer'=$variable.DistinguishedName}

For example, with a computer named cli1 and a user name stduser:
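A sketch of those commands with those names might look like this:

$computer = Get-ADComputer cli1
Set-ADUser stduser -Add @{'msDS-PrimaryComputer'=$computer.DistinguishedName}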

Nested

Set-ADUser <user name> -Add @{'msDS-PrimaryComputer'=(Get-ADComputer <computer name>).DistinguishedName}

For example, with that same user and computer:
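A sketch of that one-liner might be:

Set-ADUser stduser -Add @{'msDS-PrimaryComputer'=(Get-ADComputer cli1).DistinguishedName}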

Other techniques

If you use AD DS to store the user's last computer in their Comment attribute as part of a logon script - as described in the earlier section - here is an example that reads the stduser account's Comment attribute and assigns the primary computer based on its contents:
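A hedged sketch (assuming the Comment attribute holds the computer's distinguished name, as in the logon script above) might be:

$user = Get-ADUser stduser -Properties Comment
Set-ADUser stduser -Add @{'msDS-PrimaryComputer'=$user.Comment}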

If you wanted to assign primary computers to all of the users within the Foo OU based on their comment attributes, you could use this example:
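A sketch along those lines (the OU path below is just an illustration) might be:

Get-ADUser -SearchBase "OU=Foo,DC=contoso,DC=com" -Filter * -Properties Comment |
    Where-Object { $_.Comment } |
    ForEach-Object { Set-ADUser $_ -Add @{'msDS-PrimaryComputer'=$_.Comment} }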

If you have a CSV file that contains the user accounts and their assigned computers as DNs, you can use the import-csv cmdlet to update the users. For example:
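Assuming a CSV with columns named user and computerDN (both column names are just illustrative), a sketch might be:

Import-Csv .\primarycomputers.csv | ForEach-Object {
    Set-ADUser $_.user -Add @{'msDS-PrimaryComputer'=$_.computerDN}
}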

This is particularly useful when you have some asset history and assign certain users specific computers. Certainly a good idea for insurance and theft prevention purposes, regardless.

Cached Data Clearing GP

Enabling Primary Computer does not remove any data already cached on other computers that a user does not access again. In other words, if a user was already using Roaming User Profiles or Folder Redirection (which, by default, automatically adds all redirected shell folders to the Offline Files cache), enabling Primary Computer only means that further data is not copied locally to non-approved computers.

In the case of Roaming User Profiles, several policies can clear data from computers at logoff or restart:

  • Delete user profiles older than a specified number of days on system restart - this deletes unused profiles after N days when a computer reboots
  • Delete cached copies of roaming profiles - this removes locally saved roaming profiles once a user logs off. This policy would also apply to Primary Computers and should be used with caution

In the case of Folder Redirection and Offline Files, there is no specific policy to clear out stale data or delete cached data at logoff like there is for RUP, but that's immaterial:

  • When a computer needs to remove FR after becoming "non-primary" - due to the Primary Computer feature being enabled or the machine being removed from the primary computer list for the user - the removal behavior depends on how the FR policy is configured to behave on removal. It can be configured to either:
    • Redirect the folder back to the local profile - the folder location is set back to the default location in the user's profile (e.g., c:\users\%USERNAME%\Documents), the data is copied from the file server to the local profile, and the file server location is unpinned from the computer's Offline Files cache
    • Leave the folder pointing to the file server - the folder location still points to the file server location, but the contents are unpinned from the computer's Offline Files cache. The folder configuration is no longer controlled through policy

In both cases, once the data is unpinned from the Offline Files cache, it will evict from the computer in the background after 15 minutes.

Logging Primary Computer Usage

To see that the Download roaming profiles on primary computers only policy took effect and the behavior at each user logon, examine the User Profile Service operational event log for Event 63. This will state either "This computer is a primary computer for this user" or "This computer is not a primary computer for this user":
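If you'd rather pull those events from PowerShell, a quick sketch is below (run Get-WinEvent -ListLog *profile* first to confirm the exact channel name on your build):

Get-WinEvent -FilterHashtable @{LogName='Microsoft-Windows-User Profile Service/Operational'; Id=63} |
    Select-Object -First 5 TimeCreated, Message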

The new User Profile Service events for Primary Computer are all in the Operational event log:

Event ID 62 (Warning)
Message: Windows was unable to successfully evaluate whether this computer is a primary computer for this user. This may be due to failing to access the Active Directory server at this time. The user's roaming profile will be applied as configured. Contact the Administrator for more assistance. Error: %1
Notes and resolution: Indicates an issue contacting LDAP on a domain controller. Examine the extended error, check the System and Application event logs for further details, and consider getting a network capture if it is still unclear.

Event ID 63 (Informational)
Message: This computer %1 a primary computer for this user
Notes and resolution: This event's variable will change from "IS" to "IS NOT" depending on circumstances. It is not an error condition unless this is unexpected to the administrator. Interrogate the rest of the IT staff on the network if you are not expecting to see these events.

Event ID 64 (Informational)
Message: The primary computer relationship for this computer and this user was not evaluated due to %1
Notes and resolution: Examine the extended error for details.

 

To see that the Redirect folders on primary computers only policy took effect and the behavior at each user logon, examine the Folder Redirection operational event log for Event 1010. This will state whether or not this computer is a primary computer for this user (good catch, Johan from Comments).

Architecture

Windows 8 implements Primary Computer through two new AD DS attributes in the Windows Server 2012 (version 56) Schema.

Primary Computer is a client-side feature; no matter what you configure in Active Directory or group policy on domain controllers, Windows 7, Windows Server 2008 R2, and older operating systems will not obey the settings.

AD DS Schema

msDS-PrimaryComputer: The primary computers assigned to a user or to a security group containing users. Contains multi-valued, linked distinguished names that reference the msDS-isPrimaryComputerFor backlink on computer objects.

msDS-isPrimaryComputerFor: The users assigned to a computer account. Contains multi-valued, linked distinguished names that reference the msDS-PrimaryComputer forward link on user objects.

 

Processing

The processing of this new functionality is:

  1. Look at Group Policy setting to determine if the msDS-PrimaryComputer attribute in Active Directory should influence the decision to roam the user's profile or apply Folder Redirection.
  2. If step 1 is TRUE, initialize an LDAP connection and bind to a domain controller
  3. Check for the required schema version
  4. Query for the "msDS-IsPrimaryComputerFor" attribute on the AD object representing the current computer
  5. Check to see if the current user is in the list returned by this attribute or in the group returned by this attribute and if so, return TRUE for IsPrimaryComputerForUser. If no match is found, return FALSE for IsPrimaryComputerForUser
  6. If step 5 is FALSE:
    1. For RUP, an existing cached local profile should be used if present. If there is no local profile for the user, a new local profile should be created
    2. For FR, if Folder Redirection was previously applied, the Folder Redirection configuration is removed according to the removal action specified by the previously applied policy (this is retained in the local FR configuration). If there is no current FR configuration, there is no work to be done

Troubleshooting

Because this feature is both new and simple, most troubleshooting is likely to follow this basic workflow when Primary Computer is not working as expected:

  1. User assigned the correct computer distinguished name (or in the security group assigned the computer DN)
  2. AD DS replication has converged for the user and computer objects
  3. AD DS and SYSVOL replication has converged for the Primary Computer group policies
  4. Primary Computer group policies applying to the computer
  5. User has logged off and on since the Primary Computer policies applied

The logs of note for troubleshooting Primary Computer are:

Gpresult / GPMC RSoP report: Validates that the Primary Computer policy is applying to the computer or user
Group Policy operational event log: Validates that group policy in general is applying to the computer or user, with specific details
System event log: Validates that group policy in general is applying to the computer or user, with generalities
Application event log: Validates that Folder Redirection and Roaming User Profiles are working, with generalities and specific details
Folder Redirection operational event log: Validates that Folder Redirection is working, with specific details
User Profile Service operational event log: Validates that Roaming User Profiles are working, with specific details
Fdeploy.log: Validates that Folder Redirection is working, with specific details

 

Cases reported by your users or help desk as Primary Computer processing issues are more likely to be AD DS replication, SYSVOL replication, group policy, folder redirection, or roaming user profile issues. Determine immediately if Primary Computer is at all to blame, then move on to the more likely historical culprits. Watch for red herrings!

Likewise, your company may not be internally aware of Primary Computer deployments and may send you down a rat hole troubleshooting expected behavior. Always ensure that a "problem" with folder redirection or roaming user profiles isn't just another group within the customer's company configuring Primary Computer and not telling you (this applies to you too; send a memo, dangit!).

Have fun.

Ned "shouldn't we have called it 'Primary Computers?'" Pyle

....And knowing is half the battle!


ADAMSync 101


Hi Everyone, Kim Nichols here again, and this time I have an introduction to ADAMSync. I take a lot of cases on ADAM and AD LDS and have seen a number of problems arise from less than optimally configured ADAMSync XML files. There are many sources of information on ADAM/AD LDS and ADAMSync (I'll include links at the end), but I still receive lots of questions and cases on configuring ADAM/AD LDS for ADAMSync.

We'll start at the beginning and talk about what ADAM/AD LDS is, what ADAMSync is and then finally how you can get AD LDS and ADAMSync working in your environment.

What is ADAM/AD LDS?

ADAM (Active Directory Application Mode) is the 2003 name for AD LDS (Active Directory Lightweight Directory Services). AD LDS is, as the name describes, a lightweight version of Active Directory. It gives you the capabilities of a multi-master LDAP directory that supports replication without some of the extraneous features of an Active Directory domain controller (domains and forests, Kerberos, trusts, etc.). AD LDS is used in situations where you need an LDAP directory but don't want the administration overhead of AD. Usually it's used with web applications or SQL databases for authentication. Its schema can also be fully customized without impacting the AD schema.

AD LDS uses the concept of instances, similar to that of instances in SQL. What this means is one AD LDS server can run multiple AD LDS instances (databases). This is another differentiator from Active Directory: a domain controller can only be a domain controller for one domain. In AD LDS, each instance runs on a different set of ports. The default instance of AD LDS listens on 389 (similar to AD).

Here's some more information on AD LDS if you're new to it:

What is ADAMSync?

In many scenarios, you may want to store user data in AD LDS that you can't or don't want to store in AD. Your application will point to the AD LDS instance for this data, but you probably don't want to manually create all of these users in AD LDS when they already exist in AD. If you have Forefront Identity Manager (FIM), you can use it to synchronize the users from AD into AD LDS and then manually populate the AD LDS specific attributes through LDP, ADSIEdit, or a custom or 3rd party application. If you don't have FIM, however, you can use ADAMSync to synchronize data from your Active Directory to AD LDS.

It is important to remember that ADAMSync DOES NOT synchronize user passwords! If you want the AD LDS user account to use the same password as the AD user, then userproxy transformation is what you need. (That's a topic for another day, though. I'll include links at the end for userproxy.)

ADAMSync uses an XML file that defines which data will synchronize from AD to AD LDS. The XML file includes the AD partition from which to synchronize, the object types (classes or categories), and attributes to synchronize. This file is loaded into the AD LDS database and used during ADAMSync synchronization. Every time you make changes to the XML file, you must reload the XML file into the database.
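For reference, loading the configuration file and kicking off a synchronization are done with adamsync.exe. A hedged example, assuming an instance listening on localhost:389 and a partition named dc=fabrikam,dc=com, looks something like this:

adamsync /install localhost:389 MS-AdamSyncConf.xml
adamsync /sync localhost:389 "dc=fabrikam,dc=com" /log sync.log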

In order for ADAMSync to work:

  1. The MS-AdamSyncMetadata.LDF file must be imported into the schema of the AD LDS instance prior to attempting to install the XML file (see the sample import command after this list). This LDF creates the classes and attributes for storing the ADAMSync.xml file.
  2. The schema of the AD LDS instance must already contain all of the object classes and attributes that you will be syncing from AD to AD LDS. In other words, you can't sync a user object from AD to AD LDS unless the AD LDS schema contains the User class and all of the attributes that you specify in the ADAMSync XML (we'll talk more about this next). There is a blog post on using ADSchemaAnalyzer to compare the AD schema to the AD LDS schema and export the differences to an LDF file that can be imported into AD LDS.
  3. Unless you plan on modifying the schema of the AD LDS instance, your instance should be named DC=<partition name>, DC=<com or local or whatever> and not CN=<partition name>. Unfortunately, the example in the AD LDS setup wizard uses CN= for the partition name.  If you are going to be using ADAMSync, you should disregard that example and use DC= instead.  The reason behind this change is that the default schema does not allow an organizationalUnit (OU) object to have a parent object of the Container (CN) class. Since you will be synchronizing OUs from AD to AD LDS and they will need to be child objects of your application partition head, you will run into problems if your application partition is named CN=.




    Obviously, this limitation is something you can change in the AD LDS schema, but simply naming your partition with DC= name component will eliminate the need to make such a change. In addition, you won't have to remember that you made a change to the schema in the future.
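Going back to item 1 above, the metadata LDF is imported with ldifde. The exact parameters depend on your instance name and port, so verify them against the instructions that ship with the file, but a hedged example (dc=X is the placeholder used in the Microsoft-supplied LDF files) looks something like this:

ldifde -i -f MS-AdamSyncMetadata.LDF -s localhost:389 -j . -c "cn=Configuration,dc=X" #configurationNamingContext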

The best advice I can give regarding ADAMSync is to keep it as simple as possible to start off with. The goal should be to get a basic XML file that you know will work, gradually add attributes to it, and troubleshoot issues one at a time. If you try to do too much (too wide of object filter or too many attributes) in the XML from the beginning, you will likely run into multiple issues and not know where to begin in troubleshooting.

KEEP IT SIMPLE!!!

MS-AdamSyncConf.xml

Let's take a look at the default XML file that Microsoft provides and go through some recommendations to make it more efficient and less prone to issues. The file is named MS-AdamSyncConf.XML and is typically located in the %windir%\ADAM directory.

<?xml version="1.0"?>
<doc>
<configuration>
<description>sample Adamsync configuration file</description>
<security-mode>object</security-mode>
<source-ad-name>fabrikam.com</source-ad-name> <------ 1
<source-ad-partition>dc=fabrikam,dc=com</source-ad-partition> <------ 2
<source-ad-account></source-ad-account> <------ 3
<account-domain></account-domain> <------ 4
<target-dn>dc=fabrikam,dc=com</target-dn> <------ 5
<query>
<base-dn>dc=fabrikam,dc=com</base-dn> <------ 6
<object-filter>(objectClass=*)</object-filter> <------ 7
<attributes> <------ 8
<include></include>
<exclude>extensionName</exclude>
<exclude>displayNamePrintable</exclude>
<exclude>flags</exclude>
<exclude>isPrivelegeHolder</exclude>
<exclude>msCom-UserLink</exclude>
<exclude>msCom-PartitionSetLink</exclude>
<exclude>reports</exclude>
<exclude>serviceprincipalname</exclude>
<exclude>accountExpires</exclude>
<exclude>adminCount</exclude>
<exclude>primarygroupid</exclude>
<exclude>userAccountControl</exclude>
<exclude>codePage</exclude>
<exclude>countryCode</exclude>
<exclude>logonhours</exclude>
<exclude>lockoutTime</exclude>
</attributes>
</query>
<schedule>
<aging>
<frequency>0</frequency>
<num-objects>0</num-objects>
</aging>
<schtasks-cmd></schtasks-cmd>
</schedule> <------ 9
</configuration>
<synchronizer-state>
<dirsync-cookie></dirsync-cookie>
<status></status>
<authoritative-adam-instance></authoritative-adam-instance>
<configuration-file-guid></configuration-file-guid>
<last-sync-attempt-time></last-sync-attempt-time>
<last-sync-success-time></last-sync-success-time>
<last-sync-error-time></last-sync-error-time>
<last-sync-error-string></last-sync-error-string>
<consecutive-sync-failures></consecutive-sync-failures>
<user-credentials></user-credentials>
<runs-since-last-object-update></runs-since-last-object-update>
<runs-since-last-full-sync></runs-since-last-full-sync>
</synchronizer-state>
</doc>

Let's go through the default XML file by number and talk about what each section does, why the defaults are what they are, and what I typically recommend when working with customers.

  1. <source-ad-name>fabrikam.com</source-ad-name> 

    Replace fabrikam.com with the FQDN of the domain/forest that will be your synchronization source

  2. <source-ad-partition>dc=fabrikam,dc=com</source-ad-partition> 

    Replace dc=fabrikam,dc=com with the DN of the AD partition that will be the source for the synchronization

  3. <source-ad-account></source-ad-account> 

    Contains the account that will be used to authenticate to the source forest/domain. If left empty, the credentials of the logged on user will be used

  4. <account-domain></account-domain> 

    Contains the domain name to use for authentication to the source domain/forest. This element combined with <source-ad-account> make up the domain\username that will be used to authenticate to the source domain/forest. If left empty, the domain of the logged on user will be used.

  5. <target-dn>dc=fabrikam,dc=com</target-dn>

    Replace dc=fabrikam,dc=com with the DN of the AD LDS partition you will be synchronizing to.

    NOTE: In 2003 ADAM, you were able to specify a sub-OU or container of the ADAM partition, for instance OU=accounts,dc=fabrikam,dc=com. This is not possible in 2008+ AD LDS. You must specify the head of the partition, dc=fabrikam,dc=com. This is publicly documented here.

  6. <base-dn>dc=fabrikam,dc=com</base-dn>

    Replace dc=fabrikam,dc=com with the base DN of the container in AD that you want to synchronize objects from.

    NOTE: You can specify multiple base DNs in the XML file, but it is important to note that due to the way the dirsync engine works, the entire directory will still be scanned during synchronization. This can lead to unexpectedly long synchronization times and confusing output in the adamsync.log file. The short of it is that even though you are limiting where to synchronize objects from, it doesn't reduce your synchronization time, and you will see entries in the adamsync.log file that indicate objects being processed but not written. This can make it appear as though ADAMSync is not working correctly if your directory is large but you are syncing only a small percentage of it. Also, the log will grow and grow, but it may take a long time for objects to begin to appear in AD LDS. This is because the entire directory is being enumerated, but only a portion is being synchronized.

  7. <object-filter>(objectClass=*)</object-filter>

    The object filter determines which objects will be synchronized from AD to AD LDS. While objectClass=* will get you everything, do you really want or need EVERYTHING? Consider the amount of data you will be syncing and the security implications of having everything duplicated in AD LDS. If you only care about user objects, then don't sync computers and groups.

    The filter that I generally recommend as a starting point is:

    (&#124;(objectCategory=Person)(objectCategory=OrganizationalUnit))

    Rather than objectClass=User, I recommend objectCategory=Person. But, why, you ask? I'll tell you :-) If you've ever looked at the class of a computer object, you'll notice that it contains an objectClass of user.



    What this means to ADAMSync is that if I specify an object filter of objectClass=user, ADAMSync will synchronize users and computers (and contact objects and anything else that inherits from the User class). However, if I use objectCategory=Person, I only get actual user objects. Pretty neat, eh?

    So, what does this &#124; mean and why include objectCategory=OrganizationalUnit? The literal &#124; is the XML representation of the | (pipe) character which represents a logical OR. True, I've seen customers just use the | character in the XML file and not have issues, but I always use the XML rather than the | just to be certain that it gets translated properly when loaded into the AD LDS instance. If you need to use an AND rather than an OR, the XML for & is &amp;.

    You need objectCategory=OrganizationalUnit so that objects that are moved within AD get synchronized properly to AD LDS. If you don't specify this, the OUs that contain objects within scope of the object filter will be created on the initial creation of the object in AD LDS. But, if that object is ever MOVED in the source AD, ADAMSync won't be able to synchronize that object to the new location. Moving an object changes the full DN of the object. Since we aren't syncing the OUs, the object just "disappears" from an ADAMSync perspective and never gets updated/moved.

    If you need groups to be synchronized as well you can add (objectclass=group) inside the outer parentheses and groups will also be synced.

    (&#124;(objectCategory=Person)(objectCategory=OrganizationalUnit)(objectClass=Group))

  8. <attributes>

    The attributes section is where you define which attributes to synchronize for the object types defined in the <object-filter>.

    You can either use the <include></include> or <exclude></exclude> tags, but you cannot use both.

    The default XML file provided by Microsoft takes the high ground and uses the <exclude></exclude> tags, which really means include all attributes except the ones that are explicitly defined within the <exclude></exclude> element. While this approach guarantees that you don't miss anything important, it can also lead to a lot of headaches in troubleshooting.

    If you've ever looked at an AD user account in ADSIEdit (especially in an environment with Exchange), you'll notice there are hundreds of attributes defined. Keeping to my earlier advice of "keep it simple", every attribute you sync adds to the complexity.

    When you use the <exclude></exclude> tags you don't know what you are syncing; you only know what you are not syncing. If your application isn't going to use an attribute then there is no reason to copy that data to AD LDS. Additionally, there are some attributes and classes that just won't sync due to how the dirsync engine works. I'll include the list as I know it at the end of the article. Every environment is different in terms of which schema updates have been made and which attributes are being used. Also, as I mentioned earlier, if your AD LDS schema does not contain the object classes and attributes that you have defined in your ADAMSync XML file, your synchronization will die in a big blazing ball of flame.


    Whoosh!!

    A typical attributes section to start out with is something like this:

    <include>objectSID</include> <----- only needed for userproxy
    <include>userPrincipalName</include> <----- must be unique in AD LDS instance
    <include>displayName</include>
    <include>givenName</include>
    <include>sn</include>
    <include>physicalDeliveryOfficeName</include>
    <include>telephoneNumber</include>
    <include>mail</include>
    <include>title</include>
    <include>department</include>
    <include>manager</include>
    <include>mobile</include>
    <include>ipPhone</include>
    <exclude></exclude>

    Initially, you may even want to remove userPrincipalName, just to verify that you can get a sync to complete successfully. Synchronization issues caused by the userPrincipalName attribute are among the most common ADAMSync issues I see. Active Directory allows multiple accounts to have the same userPrincipalName, but ADAMSync will not sync an object if it has the same userPrincipalName of an object that already exists in the AD LDS database.

    If you want to be a superhero and find duplicate UPNs in your AD before you attempt ADAMSync, here's a nifty csvde command that will generate a comma-delimited file that you can run through Excel's "Highlight duplicates" formatting options (or a script if you are a SUPER-SUPERHERO) to find the duplicates.

    csvde -f upn.csv -s localhost:389 -p subtree -d "DC=fabrikam,DC=com" -r "(objectClass=user)" -l sAMAccountName,userPrincipalName

    Remember, you are targeting your AD with this command, so the localhost:389 implies that the command is being run on the DC. You'll need to replace "DC=fabrikam,DC=com" with your AD domain's DN.
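    If you'd rather script the duplicate hunting instead of using Excel, here is a minimal PowerShell sketch. It assumes the RSAT Active Directory module is available and that you run it against your source AD domain; adjust the filter or search base for your environment.

    Import-Module ActiveDirectory
    # Group every user by UPN and keep only the UPNs that appear more than once
    Get-ADUser -Filter * -Properties userPrincipalName |
        Where-Object { $_.userPrincipalName } |
        Group-Object -Property userPrincipalName |
        Where-Object { $_.Count -gt 1 } |
        Select-Object @{n='userPrincipalName';e={$_.Name}}, Count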

  9. </schedule>

    After </schedule> is where you would insert the elements to do user proxy transformation. In the References section, I've included links that explain the purpose and configuration of userproxy. The short version is that you can use this section of code to create userproxy objects rather than AD LDS user class objects. Userproxy objects are a special class of user that links back to an Active Directory domain account to allow the AD LDS user to utilize the password of their corresponding user account in AD. It is NOT a way to log on to AD from an external network. It is a way to allow an application that utilizes AD LDS as its LDAP directory to authenticate a user via the same password they have in AD. Communication between AD and AD LDS is required for this to work, and the application that is requesting the authentication does not receive a Kerberos ticket for the user.

    Here is an example of what you would put after </schedule> and before </configuration>

    <user-proxy>
    <source-object-class>user</source-object-class>
    <target-object-class>userProxyFull</target-object-class>
    </user-proxy>

Installing the XML file

OK! That was fun, wasn't it? Now that we have an XML file, how do we use it? This is covered in a lot of different materials, but the short version is we have to install it into the AD LDS instance. To install the file, run the following command from the ADAM installation directory (%windir%\ADAM):

Adamsync /install localhost:389 CustomAdamsync.xml

The command above assumes you are running it on the AD LDS server, that the instance is running on port 389 and that the XML file is located in the path of the adamsync command.

What does this do exactly, you ask? The adamsync install command copies the XML file contents into the configurationFile attribute on the AD LDS application partition head. You can view the attribute by connecting to the application partition via LDP or through ADSIEdit. This is a handy thing to know. You can use this to verify for certain exactly what is configured in the instance. Often there are several versions of the XML file in the ADAM directory and it can be difficult to know which one is being used. Checking the configurationFile attribute will tell you exactly what is configured. It won't tell you which XML file was used, but at least you will know the configuration.
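If you want to see what is currently installed without opening LDP or ADSIEdit, here is a quick sketch using the AD PowerShell module. It assumes the module is available on the AD LDS server, that Active Directory Web Services is running for the instance, and that the instance listens on port 389 with an application partition of DC=fabrikam,DC=com.

Get-ADObject -Identity 'DC=fabrikam,DC=com' -Server 'localhost:389' -Properties configurationFile |
    Select-Object -ExpandProperty configurationFile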

The implication of this is that anytime you update the XML file you must reinstall it using the adamsync /install command; otherwise, the version in the instance is not updated. I've made this mistake a number of times during troubleshooting!

Synchronizing with AD

Finally, we are ready to synchronize! Running the synchronization is the "easy" part assuming we've created a valid XML file, our AD LDS schema has all the necessary classes and attributes, and the source AD data is without issue (duplicate UPN is an example of a known issue).

From the ADAM directory (typically %windir%\ADAM), run the following command:

Adamsync /sync localhost:389 "DC=fabrikam,DC=com" /log adamsync.log

Again, we're assuming you are running the command on the AD LDS server and that the instance is running on port 389. The DN referenced in the command is the DN of your AD LDS application partition. /log is very important (you can name the log anything you want). You will need this log if there are any issues during the synchronization. The log will tell you which object failed and give you a cryptic "detailed" reason as to why. Below is an example of an error due to a duplicate UPN. This is one of the easier ones to understand.

====================================================
Processing Entry: Page 67, Frame 1, Entry 64, Count 1, USN 0
Processing source entry <guid=fe36238b9dd27a45b96304ea820c82d8>
Processing in-scope entry fe36238b9dd27a45b96304ea820c82d8.

Adding target object CN=BillyJoeBob,OU=User Accounts,dc=fabrikam,dc=com. Adding attributes: sourceobjectguid, objectClass, sn, description, givenName, instanceType, displayName, department, sAMAccountName, userPrincipalName, Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:
0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)

. Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:
0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)
===============================================

During the sync, if you are syncing from the Active Directory domain head rather than an OU or container, your objects should begin showing up in the AD LDS instance almost immediately. The objects don't synchronize in any order that makes sense to the human brain, so don't worry if objects are appearing in a random order. There is no progress bar or indication of how the sync is going other than the fact that the log file is growing. When the sync completes you will be returned to the command prompt and your log file will stop growing.

Did it work?

As you can see, there is nothing on the command line, nor are there any events in any Windows event log, that indicates the synchronization was successful. In this context, successful means completed without errors and all objects in scope, as defined in the XML file, were synchronized. The only way to determine if the synchronization was successful is to check the log file. This highlights the importance of generating the log. Additionally, it's a good idea to keep a reasonable number of past logs so that if the sync starts failing at some point you can determine approximately when it started occurring. Management likes to know things like this.

Since you'll probably be automating the synchronization (easy to do with a scheduled task) and not running it manually, it's a good idea to set up a reminder to periodically check the logs for issues. If you've never looked at a log before, it can be a little intimidating if there are a lot of objects being synchronized. The important thing to know is that if the sync was successful, the bottom of the log will contain a section similar to the one below:

Updating the configuration file DirSync cookie with a new value.

Beginning processing of deferred dn references.
Finished processing of deferred dn references.

Finished (successful) synchronization run.
Number of entries processed via dirSync: 16
Number of entries processed via ldap: 0
Processing took 4 seconds (0, 0).
Number of object additions: 3
Number of object modifications: 13
Number of object deletions: 0
Number of object renames: 2
Number of references processed / dropped: 0, 0
Maximum number of attributes seen on a single object: 9
Maximum number of values retrieved via range syntax: 0

Beginning aging run.
Aging requested every 0 runs. We last aged 2 runs ago.
Saving Configuration File on DC=instance1,DC=local
Saved configuration file.

If your log just stops without a section similar to the one above, then the last entry will indicate an error similar to the one above for the duplicate UPN.
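Since you'll likely be automating the sync and the log is the only record of success, it can help to script both the scheduling and a quick check for the success marker. Below is a minimal PowerShell sketch; the task name, schedule, port, partition DN, and log path are all examples you would adjust, and it assumes Windows Server 2012 or later for the ScheduledTasks cmdlets and an account with rights to read from the source AD.

# Register a nightly task that runs adamsync from the ADAM directory
$action  = New-ScheduledTaskAction -Execute "$env:windir\ADAM\adamsync.exe" `
           -Argument '/sync localhost:389 "DC=fabrikam,DC=com" /log adamsync.log' `
           -WorkingDirectory "$env:windir\ADAM"
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName 'ADAMSync Nightly' -Action $action -Trigger $trigger -User 'SYSTEM'

# Quick check for the success marker at the bottom of the most recent log
Select-String -Path "$env:windir\ADAM\adamsync.log" -Pattern 'Finished \(successful\) synchronization run' -Quiet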

Conclusion and other References

That covers the basics of setting up ADAMSync! I hope this information makes the process more straightforward and gives you some tips for getting it to work the first time! The most important point I can make is to start very simple with the XML file and get something to work. You can always add more attributes to the file later, but if you start from a broken configuration it can be difficult to troubleshoot. Also, I highly recommend using <include> over <exclude> when specifying attributes to synchronize. This may be more work for your application team since they will have to know what their application requires, but it will make setting up the XML file and getting a successful synchronization much easier!

ADAMSync excluded objects

As I mentioned earlier, there are some attributes, classes and object types that ADAMSync will not synchronize. The items listed below are hard-coded not to sync. There is no way around this using ADAMSync. If you need any of these items to sync, then you will need to use LDIFDE exports, FIM, or some other method to synchronize them from AD to AD LDS. The scenarios where you would require any of these items are very limited and some of them are dealt with within ADAMSync by converting the attribute to a new attribute name (objectGUID to sourceObjectGUID).

Attributes

cn, currentValue, dBCSPwd, fSMORoleOwner, initialAuthIncoming, initialAuthOutgoing, isCriticalSystemObject, isDeleted, lastLogonTimeStamp, lmPwdHistory, msDS-ExecuteScriptPassword, ntPwdHistory, nTSecurityDescriptor, objectCategory, objectSid (except when being converted to proxy), parentGUID, priorValue, pwdLastSet, sAMAccountType, sIDHistory, supplementalCredentials, systemFlags, trustAuthIncoming, trustAuthOutgoing, unicodePwd, whenChanged

Classes

crossRef, secret, trustedDomain, foreignSecurityPrincipal, rIDSet, rIDManager

Other

Naming Context heads, deleted objects, empty attributes, attributes we do not have permissions to read, objectGUIDs (gets transferred to sourceObjectGUID), objects with del-mangled distinguished names (DEL:\)

Additional Goodies

ADAMSync

AD LDS Replication

Misc Blogs

GOOD LUCK and ENJOY!

Kim "Sync or swim" Nichols

Revenge of Y2K and Other News


Hello sports fans!

So this has been a bit of a hectic time for us, as I'm sure you can imagine. Here's just some of the things that have been going on around here.

Last week, thanks to a failure on the time servers at USNO.NAVY.MIL, many customers experienced a time rollback to CY 2000 on their Active Directory domain controllers. Our team worked closely with the folks over at Premier Field Engineering to explain the problem, document resolutions for the various issues that might arise, and describe how to inoculate your DCs against a similar problem in the future. If you were affected by this problem then you need to read this post. If you weren't affected, and want to know why, then you need to read this post. Basically, we think you need to read this post. So...here's the link to the AskPFEPlat blog.

In other news, Ned Pyle has successfully infiltrated the Product Group and has started blogging on The Storage Team blog. His first post is up, and I'm sure there will be many more to follow. If you've missed Ned's rare blend of technical savvy and sausage-like prose, and you have an interest in Microsoft's DFSR and other storage technologies, then go check him out.

Finally...you've probably noticed the lack of activity here on the AskDS blog. Truthfully, that's been the result of a confluence of events -- Ned's departure, the Holiday season here in the US, and the intense interest in Windows 8 and Windows Server 2012 (and subsequent support calls). Never fear, however! I'm pleased to say that your questions to the blog have been coming in quite steadily, so this week I'll be posting an omnibus edition of the Mail Sack. We also have one or two more posts that will go up between now and the end of the year, so there's that to look forward to. Starting with the new calendar year, we'll get back to a semi-regular posting schedule as we get settled and build our queue of posts back up.

In the meantime, if you have questions about anything you see on the blog, don't hesitate to contact us.

Jonathan "time to make the donuts" Stephens

Intermittent Mail Sack: Must Remember to Write 2013 Edition


Hi all, Jonathan here again with the latest edition of the Intermittent Mail Sack. We've had some great questions over the last few weeks so I've got a lot of material to cover. This sack, we answer questions on:

Before we get started, however, I wanted to share information about a new service available to Premier customers through Microsoft Services Premier Support. Many Premier customers will be familiar with the Risk Assessment Program (RAP). Premier Support is now rolling out an online offering called the RAP as a Service (or RaaS for short). Our colleagues over on the Premier Field Engineering (PFE) blog have just posted a description of the new offering, and I encourage you to check it out. I've been working on the Active Directory RaaS offering since the early beta, and we've gotten really good feedback. Unfortunately, the offering is not yet available to non-Premier customers; look at RaaS as yet one more benefit to a Premier Support contract.

 

Now on to the Mail Sack!

Question

I'm considering upgrading my DFSR hub servers to Server 2012. Is there anything I should know before I hit the easy button and do an upgrade?

Answer

The most important thing to note is that Microsoft strongly discourages mixing Windows Server 2012 and legacy operating system DFSR. You just mentioned upgrading your hub servers, and make no mention of any branch servers. If you're going to upgrade your DFSR servers then you should upgrade all of them.

Check out Ned's post over on the FileCab blog: DFS Replication Improvements in Windows Server. Specifically, review the section that discusses Dynamic Access Control Support.

Also, there is a minor issue that has been found that we are still tracking. When you upgrade from Windows Server 2008 R2 to Windows Server 2012 the DFS Management snap-in stops working. The workaround is to just uninstall and then reinstall the DFS Management tools:

You can also do this with PowerShell:

Uninstall-WindowsFeature -name RSAT-DFS-Mgmt-Con
Install-WindowsFeature -name RSAT-DFS-Mgmt-Con

 

Question

From our SharePoint site, when users click on log-off then they get sent to this page: https://your_sts_server/adfs/ls/?wa=wsignout1.0.

We configured the FedAuth cookie to be session based after we did this:

$sts = Get-SPSecurityTokenServiceConfig 
$sts.UseSessionCookies = $true 
$sts.Update() 

 

The problem is that unless the user closes all of their browser windows, the browser remembers their credentials when they return to the log-in page. This is not acceptable because some PCs are shared by multiple people. Closing all browsers is also not acceptable, as users run multiple web applications.

Answer

(Courtesy of Adam Conkle)

Great question! I hope the following details help you in your deployment:

Moving from a persistent cookie to a session cookie with SharePoint 2010 was the right move in this scenario in order to guarantee that closing the browser window would terminate the session with SharePoint 2010.

When you sign out via SharePoint 2010 and are redirected to the STS URL containing the query string: wa=wsignout1.0, this is what we call a WS-Federation sign-out request. This call is sufficient for signing out of the STS as well as all relying parties signed into during the session.

However, what you are experiencing is expected behavior for how Integrated Windows Authentication (IWA) works with web browsers. If your web browser client experienced either a no-prompt sign-in (using Kerberos authentication for the currently signed-in user) or an NTLM prompted sign-in (credentials provided in a Windows Authentication "401" credential prompt), then the browser will remember the Windows credentials for that host for the duration of the browser session.

If you were to collect a HTTP headers trace (Fiddler, HTTPWatch, etc.) of the current scenario, you will see that the wa=wsignout1.0 request is actually causing AD FS and SharePoint 2010 (and any other RPs involved) to clean up their session cookies (MSISAuth and FedAuth) as expected. The session is technically ending the way it should during sign-out. However, if the client keeps the current browser session open, browsing back to the SharePoint site will cause a new WS-Federation sign-in request to be sent to AD FS (wa=wsignin1.0). When the sign-in request is sent to AD FS, AD FS will attempt to collect credentials with a HTTP 401, but, this time, the browser has a set of Windows credentials ready to provide to that host.

The browser provides those Windows credentials without a prompt shown to the user, and the user is signed back into AD FS, and, thus, is signed back into SharePoint 2010. To the naked eye, it appears that sign-out is not working properly, while, in reality, the user is signing out and then signing back in again.

To conclude, this is by-design behavior for web browser clients. There are two workarounds available:

Workaround 1

Switch to forms-based authentication (FBA) for the AD FS Federation Service. The following article details this quick and easy process: AD FS 2.0: How to Change the Local Authentication Type

Workaround 2

Instruct your user base to always close their web browser when they have finished their session

Question

Are the attributes for files and folders used by Dynamic Access Control replicated with the object? That is, using DFSR, if I replicate the file to another server which uses the same policy, will the file have the same effective permissions on it?

Answer

(Courtesy of Mike Stephens)

Let me clarify some aspects of your question as I answer each part.

When enabling Dynamic Access Control on files and folders there are multiple aspects to consider that are stored on the files and folders.

Resource Properties

Resource Properties are defined in AD and used as a template to stamp additional metadata on a file or folder that can be used during an authorization decision. That information is stored in an alternate data stream on the file or folder. This would replicate with the file, the same as the security descriptor.

Security Descriptor

The security descriptor replicates with the file or folder. Therefore, any conditional expression would replicate in the security descriptor.

All of this occurs outside of Dynamic Access Control -- it is a result of replicating the file throughout the topology, for example, if using DFSR. Central Access Policy has nothing to do with these results.

Central Access Policy

Central Access Policy is a way to distribute permissions without writing them directly to the DACL of a security descriptor. So, when a Central Access Policy is deployed to a server, the administrator must then link the policy to a folder on the file system. This linking is accomplished by inserting a special ACE in the auditing portion of the security descriptor that informs Windows that the file/folder is protected by a Central Access Policy. The permissions in the Central Access Policy are then combined with Share and NTFS permissions to create an effective permission.

If a file/folder is replicated to a server that does not have the Central Access Policy deployed to it, then the Central Access Policy is not valid on that server. The permissions would not apply.

Question

I read the post located here regarding the machine account password change in Active Directory.

Based on what I read, if I understand this correctly, the machine password change is generated by the client machine and not AD. I have been told (according to this post, inaccurately) that AD requires this password reset or the machine will be dropped from the domain.

I am a Macintosh systems administrator, and as you probably know, this issue does indeed occur on Mac systems.

I have reset the password reset interval to various durations, from fourteen days (which is the default) to one day.

I have found that if I disjoin and rejoin the machine to the domain it will generate a new password and work just fine for 30 days. At that time, it will be dropped from the domain and have to be rejoined. This is not 100% of the time; however, it is often enough to be a problem for us, as we are a higher education institution which, in addition to our many PCs, also utilizes a substantial number of Macs. Additionally, we have a script which runs every 60 days to delete machine accounts from AD to keep it clean, so if the machine has been turned off for more than 60 days, the account no longer exists.

I know your forte is AD/Microsoft support, however I was hoping that you might be able to offer some input as to why this might fail on the Macs and if there is any solution which we could implement.

Other Mac admins have found workarounds like eliminating the need for the pw reset or exempting the macs from the script, but our security team does not want to do this.

Answer

(Courtesy of Mike Stephens)

Windows has a security policy feature named Domain member: Disable machine account password change, which determines whether the domain member periodically changes its computer account password. Typically, a Mac, Linux, or UNIX operating system uses some version of Samba to accomplish domain interoperability. I'm not familiar with these on the Mac; however, on Linux, you would use the command

net ads changetrustpw 

 

By default, Windows machines initiate a computer password change every 30 days. You could schedule this command to run every 30 days once it completes successfully. Beyond that, basically we can only tell you how to disable the domain controller from accepting computer password changes, which we do not encourage.

Question

I recently installed a new server running Windows 2008 R2 (as a DC) and a handful of client computers running Windows 7 Pro. On a client, which is shared by two users (userA and userB), I see the following event on the Event Viewer after userA logged on.

Event ID: 45058 
Source: LsaSrv 
Level: Information 
Description: 
A logon cache entry for user userB@domain.local was the oldest entry and was removed. The timestamp of this entry was 12/14/2012 08:49:02. 

 

All is working fine. Both userA and userB are able to log on on the domain by using this computer. Do you think I have to worry about this message or can I just safely ignore it?

Fyi, our users never work offline, only online.

Answer

By default, a Windows operating system will cache 10 domain user credentials locally. When the maximum number of credentials is cached and a new domain user logs onto the system, the oldest credential is purged from its slot in order to store the newest credential. This LsaSrv informational event simply records when this activity takes place. Once the cached credential is removed, it does not imply the account cannot be authenticated by a domain controller and cached again.

The number of "slots" available to store credentials is controlled by:

Registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
Setting Name: CachedLogonsCount
Data Type: REG_SZ
Value: Default value = 10 decimal, max value = 50 decimal, minimum value = 1

Cached credentials can also be managed with group policy by configuring:

Group Policy Setting path: Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options.
Group Policy Setting: Interactive logon: Number of previous logons to cache (in case domain controller is not available)

The workstation must have connectivity with the domain, and the user must authenticate with a domain controller, in order to cache their credentials again once they have been purged from the system.

I suspect that your CachedLogonsCount value has been set to 1 on these clients, meaning that the workstation can only cache one user credential at a time.
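To confirm, you can read the value directly from the registry on an affected client. A one-line PowerShell sketch:

Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon' -Name CachedLogonsCount |
    Select-Object -ExpandProperty CachedLogonsCount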

Question

In Windows 7 and Server 2008 Kerberos DES encryption is disabled by default.

At what point will support for DES Kerberos encryption be removed? Does this happen in Windows 8 or Windows Server 2012, or will it happen in a future version of Windows?

Answer

DES is still available as an option on Windows 8 and Windows Server 2012, though it is disabled by default. It is too early to discuss the availability of DES in future versions of Windows right now.

There was an Advisory Memorandum published in 2005 by the Committee on National Security Systems (CNSS) where DES and all DES-based systems (3DES, DES-X) would be retired for all US Government uses by 2015. That memorandum, however, is not necessarily a binding document. It is expected that 3DES/DES-X will continue to be used in the private sector for the foreseeable future.

I'm afraid that we can't completely eliminate DES right now. All we can do is push it to the back burner in favor of newer and better algorithms like AES.

Question

I have two Issuing certification authorities in our corporate network. All our approved certificate templates are published on both issuing CAs. We would like to enable certificate renewals from Internet with our Internet-facing CEP/CES configured for certificate authentication in Certificate Renewal Mode Only. What we understand from the whitepaper is that it's not going to work when the CA that issues the certificate must be the same CA used for certificate renewal.

Answer

First, I need to correct an assumption made based on your reading of the whitepaper. There is no requirement that, when a certificate is renewed, the renewal request be sent to the same CA that issued the original certificate. This means that your clients can go to either enrollment server to renew the certificate. Here is the process for renewal:

  1. When the user attempts to renew their certificate via the MMC, Windows sends a request to the Certificate Enrollment Policy (CEP) server URL configured on the workstation. This request includes the template name of the certificate to be renewed.
  2. The CEP server queries Active Directory for a list of CAs capable of issuing certificates based on that template. This list will include the Certificate Enrollment Web Service (CES) URL associated with that CA. Each CA in your environment should have one or more instances of CES associated with it.
  3. The list of CES URLs is returned to the client. This list is unordered.
  4. The client randomly selects a URL from the list returned by the CEP server. This random selection ensures that renewal requests are spread across all returned CAs. In your case, if both CAs are configured to support the same template, then if the certificate is renewed 100 times, either with or without the same key, that should result in a nearly 50/50 distribution between the two CAs.

The behavior is slightly different if one of your CAs goes down for some reason. In that case, should clients encounter an error when trying to renew a certificate against one of the CES URLs, the client will fail over and use the next CES URL in the list. By having multiple CAs and CES servers, you gain high availability for certificate renewal.

Other Stuff

I'm very sad that I didn't see this until after the holidays. It definitely would have been on my Christmas list. A little pricey, but totally geek-tastic.

This was also on my list, this year. Go Science!

Please do keep those questions coming. We have another post in the hopper going up later in the week, and soon I hope to have some Windows Server 2012 goodness to share with you. From all of us on the Directory Services team, have a happy and prosperous New Year!

Jonathan "13th baktun" Stephens

 

 

ADAMSync + (AD Recycle Bin OR searchFlags) = "FUN"


Hello again ADAMSyncers! Kim Nichols here again with what promises to be a fun and exciting mystery solving adventure on the joys of ADAMSync and AD Recycle Bin (ADRB) for AD LDS. The goal of this post is two-fold:

  1. Explain AD Recycle Bin for AD LDS and how to enable it
  2. Highlight an issue that you may experience if you enable AD Recycle Bin for AD LDS and use ADAMSync

I'll start with some background on AD Recycle Bin for AD LDS and then go through a recent mind-boggling scenario from beginning to end to explain why you may not want (or need) to enable AD Recycle Bin if you are planning on using ADAMSync.

Hold on to your hats!

AD Recycle Bin for ADLDS

If you're not familiar with AD Recycle Bin and what it can do for you, check out Ned's prior blog posts or the content available on TechNet.

The short version is that AD Recycle Bin is a feature added in Windows Server 2008 R2 that allows Administrators to recover deleted objects without restoring System State backups and performing authoritative restores of those objects.  

Requirements for AD Recycle Bin

   

To enable AD Recycle Bin (ADRB) for AD DS your forest needs to meet some basic requirements:

   

  1. Have extended your schema to Windows Server 2008 R2.
  2. Have only Windows Server 2008 R2 DCs in your forest.
  3. Raise your domain(s) functional level to Windows Server 2008 R2.
  4. Raise your forest's functional level to Windows Server 2008 R2.

   

What you may not be aware of is that AD LDS has this feature as well. The requirements for implementing ADRB in AD LDS are the same as AD DS although they are not as intuitive for AD LDS instances.

 

Schema must be Windows Server 2008 R2

   

If your AD LDS instance was originally built as an ADAM instance, then you may or may not have extended the schema of your instance to Windows Server 2008 R2. If not, upgrading the schema is a necessary first step in order to support ADRB functionality.

   

To update your AD LDS schema to Windows Server 2008 R2, run the following command from your ADAM installation directory on your AD LDS server:

   

Ldifde.exe -i -f MS-ADAM-Upgrade-2.ldf -s server:port -b username domain password -j . -$ adamschema.cat

   

You'll also want to update your configuration partition:

   

ldifde -i -f ms-ADAM-Upgrade-1.ldf -s server:portnumber -b username domain password -k -j . -c "CN=Configuration,DC=X" #configurationNamingContext

   

Information on these commands can be found on TechNet:

Decommission any Windows Server 2003 ADAM servers in the Replica set

   

In an AD DS environment, ADRB requires that all domain controllers in the forest be running Windows Server 2008 R2. Translating this to an AD LDS scenario, all servers in your replica set must be running Windows Server 2008 R2. So, if you've been hanging on to those Windows Server 2003 ADAM servers for some reason, now is the time to decommission them.

 

LaNae's blog "How to Decommission an ADAM/ADLDS server and Add Additional Servers" explains the process for removing a replica member. The process is pretty straightforward and just involves uninstalling the instance, but you will want to check FSMO role ownership, overall instance health, and application configurations before blindly uninstalling. Now is not the time to discover applications have been hard-coded to point to your Windows Server 2003 server or that you've unknowingly been having replication issues.

   

Raise the functional level of the instance

   

In AD DS, raising the domain and forest functional levels is easy; there's a UI -- AD Domains and Trusts. AD LDS doesn't have this snap-in, though, so it is a little more complicated. There's a good KB article (322692) that details the process of raising the functional levels of AD and gives us insight into what we need to do to raise our AD LDS functional level since we can't use the AD Domains and Trusts MMC.

   

AD LDS only has the concept of forest functional levels. There is no domain functional level in AD LDS. The forest functional level is controlled by the msDS-Behavior-Version attribute on the CN=Partitions object in the Configuration naming context of your AD LDS instance.

   

   

Simply changing the value of msDS-Behavior-Version from 2 to 4 will update the functional level of your instance from Windows Server 2003 to Windows Server 2008 R2. Alternatively, you can use Windows PowerShell to upgrade the functional level of your AD LDS instance. For AD DS, there is a dedicated Windows PowerShell cmdlet for raising the forest functional level called Set-ADForestMode, but this cmdlet is not supported for AD LDS. To use Windows PowerShell to raise the functional level for AD LDS, you will need to use the Set-ADObject cmdlet to specify the new value for the msDS-Behavior-Version attribute.

   

To raise the AD LDS functional level using Windows PowerShell, run the following command (after loading the AD module):

   

Set-ADObject -Identity <path to Partitions container in Configuration Partition of instance> -Replace @{'msds-Behavior-Version'=4} -Server <server:port>

   

For example in my environment, I ran:

   

Set-ADObject -Identity 'CN=Partitions,CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}' -Replace @{'msDS-Behavior-Version'=4} -Server 'localhost:50000'
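To confirm the change took effect, you can read the attribute back. This sketch uses the same placeholder instance GUID and port as the example above:

Get-ADObject -Identity 'CN=Partitions,CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}' `
    -Server 'localhost:50000' -Properties msDS-Behavior-Version |
    Select-Object -ExpandProperty 'msDS-Behavior-Version'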

   

 

 

   

   

As always, before making changes to your production environment:

  1. Test in a TEST or DEV environment
  2. Have good back-ups
  3. Verify the general health of the environment (check replication, server health, etc)

   

Now we're ready to enable AD Recycle Bin! 

Enabling AD Recycle Bin for AD LDS

   

For Windows Server 2008 R2, the process for enabling ADRB in AD LDS is nearly identical to that for AD DS. Either Windows PowerShell or LDP can be used to enable the feature. Also, there is no UI for enabling ADRB for AD LDS in Windows Server 2008 R2 or Windows Server 2012. Windows Server 2012 does add the ability to enable ADRB and restore objects through the AD Administrative Center for AD DS (you can read about it here), but this UI does not work for AD LDS instances on Windows Server 2012.

   

Once the feature is enabled, it cannot be disabled. So, before you continue, be certain you really want to do this. (Read this whole post to help you decide.)

   

The ADRB can be enabled in both AD DS and AD LDS using a PowerShell cmdlet, but the syntax is slightly different between the two. The difference is fully documented in TechNet.

   

In my lab, I used the PowerShell cmdlet to enable the feature rather than using LDP. Below is the syntax for AD LDS:

   

Enable-ADOptionalFeature 'recycle bin feature' -Scope ForestOrConfigurationSet -Server <server:port> -Target <DN of configuration partition>

   

Here's the actual cmdlet I used and a screenshot of the output. The cmdlet asks you to confirm that you want to enable the feature since this is an irreversible process.
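Reconstructed from the syntax above and the same placeholder instance GUID and port used earlier in this post, the command would look something like this:

Enable-ADOptionalFeature 'Recycle Bin Feature' -Scope ForestOrConfigurationSet `
    -Server 'localhost:50000' -Target 'CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}'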

   

 

 

   

You can verify that the command worked by checking the msDS-EnabledFeature attribute on the Partitions container of the Configuration NC of your instance.
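A quick way to do that check from PowerShell (again using the placeholder instance GUID and port from earlier):

Get-ADObject -Identity 'CN=Partitions,CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}' `
    -Server 'localhost:50000' -Properties msDS-EnabledFeature |
    Select-Object -ExpandProperty 'msDS-EnabledFeature'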

   

 

Seemed like a good idea at the time. . .

   

Now, on to what prompted this post in the first place.

   

Once ADRB is enabled, there is a change to how deleted objects are handled when they are removed from the directory. Prior to enabling ADRB, when an object is deleted it is moved to the Deleted Objects container within the application partition of your instance (CN=Deleted Objects,DC=instance1,DC=local, or whatever the name of your instance is) and most of its attributes are removed. Without Recycle Bin enabled, a user object in the Deleted Objects container looks like this in LDP:

   

   

After enabling ADRB, a deleted user object looks like this in LDP:

   

   

Notice that after enabling ADRB, givenName, displayName, and several other attributes including userPrincipalName (UPN) are maintained on the object while in the Deleted Objects container. This is great if you ever need to restore this user: most of the data is retained and it's a pretty simple process using LDP or PowerShell to reanimate the object without the need to go through the authoritative restore process. But, retaining the UPN attribute specifically can cause issues if ADAMSync is being used to synchronize objects from AD DS to AD LDS since the userPrincipalName attribute must be unique within an AD LDS instance.

   

In general, the recommendation when using ADAMSync is to perform all user management (additions/deletions) on the AD DS side of the sync and let the synchronization process handle the edits in AD LDS. There are times, though, when you may need to remove users in AD LDS in order to resolve synchronization issues, and this is where having ADRB enabled will cause problems.

   

For example:

   

Let's say that you discover that you have two users with the same userPrincipalName in AD and this is causing issues with ADAMSync: the infamous ATT_OR_VALUE_EXISTS error in the ADAMSync log.

   

====================================================

Processing Entry: Page 67, Frame 1, Entry 64, Count 1, USN 0 Processing source entry <guid=fe36238b9dd27a45b96304ea820c82d8> Processing in-scope entry fe36238b9dd27a45b96304ea820c82d8.

   

Adding target object CN=BillyJoeBob,OU=User Accounts,dc=fabrikam,dc=com. Adding attributes: sourceobjectguid, objectClass, sn, description, givenName, instanceType, displayName, department, sAMAccountName, userPrincipalName, Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:

0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)

   

. Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:

0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)

===============================================

   

   

Upon further inspection of the users, you determine that at some point a copy was made of the user's account in AD and the UPN was not updated. The old account is not needed anymore but was never cleaned up either. To get your ADAMSync working, you:

  1. Delete the user account that synced to AD LDS.
  2. Delete the extra account in AD (or update the UPN on one of the accounts).
  3. Try to sync again

   

BWAMP!

   

The sync still fails with the ATT_OR_VALUE_EXISTS error on the same user. This doesn't make sense, right? You deleted the extra user in AD and cleaned up AD LDS by deleting the user account there. There should be no duplicates. The ATT_OR_VALUE_EXISTS error is not an ADAMSync error. ADAMSync is making LDAP calls to the AD LDS instance to create or modify objects. This error is an LDAP error from the AD LDS instance and is telling you that you already have an object in the directory with that same userPrincipalName. For what it's worth, I've never seen this error logged if the duplicate isn't there. It is there; you just have to find it!

   

At this point, it's not hard to guess where the duplicate is coming from, since we've already discussed ADRB and the attributes maintained on deletion. The duplicate userPrincipalName is coming from the object we deleted from the AD LDS instance and is located in the Deleted Objects container. The good news is that LDP allows you to browse the container to find the deleted object. If you've never used LDP before to look through the Deleted Objects container, TechNet provides information on how to browse for deleted objects via LDP.

 

It's great that we know why we are having the problem, but how do we fix it? Now that we're already in this situation, the only way to fix it is to eliminate the duplicate UPN from the object in CN=Deleted Objects. To do this:

   

  1. Restore the deleted object in AD LDS using LDP or PowerShell
  2. After the object is restored, modify the UPN to something bogus that will never be used on a real user
  3. Delete the object again
  4. Run ADAMSync again
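Here is a minimal PowerShell sketch of the first three steps above. It assumes the AD PowerShell module, an instance on localhost:389, an application partition of DC=fabrikam,DC=com, and a placeholder UPN; LDP works just as well if you prefer to click through it.

$server    = 'localhost:389'
$partition = 'DC=fabrikam,DC=com'

# 1. Find the deleted object holding the duplicate UPN and restore it
$deleted = Get-ADObject -Filter "userPrincipalName -eq 'billyjoebob@fabrikam.com'" `
    -IncludeDeletedObjects -SearchBase $partition -Server $server
Restore-ADObject -Identity $deleted.ObjectGUID -Partition $partition -Server $server

# 2. Stamp a bogus UPN on the restored object so it can never collide again
Set-ADObject -Identity $deleted.ObjectGUID -Partition $partition -Server $server `
    -Replace @{userPrincipalName = 'retired.billyjoebob@fabrikam.invalid'}

# 3. Delete the object again (the bogus UPN is what gets preserved this time)
Remove-ADObject -Identity $deleted.ObjectGUID -Partition $partition -Server $server -Confirm:$false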

   

Now your sync should complete successfully!

Not so fast, you say . . .

   

So, I was feeling pretty good about myself on this case. I spent hours figuring out ADRB for AD LDS, setting up the repro in my lab, and proving that deleting objects with ADRB enabled could cause ATT_OR_VALUE_EXISTS errors during ADAMSync. I was already patting myself on the back and starting my victory lap when I got an email back from my customer stating the msDS-Behavior-Version attribute on their AD LDS instance was still set to 2.

   

Huh?!

   

I'll admit it, I was totally confused. How could this be? I had LDP output from the customer's AD LDS instance and could see that the userPrincipalName attribute was being maintained on objects in the Deleted Objects container. I knew from my lab that this is not normal behavior when ADRB is disabled. So, what the heck is going on?

   

I know when I'm beat, so decided to use one of my "life lines" . . . I emailed Linda Taylor. Linda is an Escalation Engineer in the UK Directory Services team and has been working with ADAM and AD LDS much longer than I have. This is where I should include a picture of Linda in a cape because she came to the rescue again!

   

Apparently, there is more than one way for an attribute to be maintained on deletion. The most obvious was that ADRB had been enabled. The less obvious requires a better understanding of what actually happens when an object is deleted. Transformation into a Tombstone documents this process in more detail. The part that is important to us is that any attribute whose searchFlags value includes the PRESERVE_ON_DELETE (0x00000008) bit is retained on the tombstone when the object is deleted.

The Schema Management snap-in doesn't allow us to see the attributes set on attributes themselves, so to verify the value of searchFlags on the userPrincipalName attribute we need to use ADSIEdit or LDP.

   

WARNING: Modifying the schema can have unintended consequences. Please be certain you really need to do this before proceeding and always test first!

   

By default, the searchFlags attribute on userPrincipalName should be set to 0x1 (INDEX).

   

   

   

My customer's searchFlags attribute was set to 0x1F (31 decimal) = (INDEX | CONTAINER_INDEX | ANR | PRESERVE_ON_DELETE | COPY).

   

   

Apparently these changes to the schema had been made to improve query efficiency when searching on the userPrincipalName attribute.

 

Reminder: Manually modifying the schema in this way is not something you should be doing unless you are certain you know what you are doing or have been directed to do so by Microsoft Support.

 

The searchFlags attribute is a bitwise attribute containing a number of different options which are outlined here. This attribute can be zero or a combination of one or more of the following values:

1 (0x00000001): Create an index for the attribute.

2 (0x00000002): Create an index for the attribute in each container.

4 (0x00000004): Add this attribute to the Ambiguous Name Resolution (ANR) set. This is used to assist in finding an object when only partial information is given. For example, if the LDAP filter is (ANR=JEFF), the search will find each object where the first name, last name, email address, or other ANR attribute is equal to JEFF. Bit 0 must be set for this index to take effect.

8 (0x00000008): Preserve this attribute in the tombstone object for deleted objects.

16 (0x00000010): Copy the value for this attribute when the object is copied.

32 (0x00000020): Supported beginning with Windows Server 2003. Create a tuple index for the attribute. This will improve searches where the wildcard appears at the front of the search string. For example, (sn=*mith).

64 (0x00000040): Supported beginning with ADAM. Creates an index to greatly help VLV performance on arbitrary attributes.

   

To remove the PRESERVE_ON_DELETE flag, we subtracted 8 from the customer's value of 31, which gave us a value of 23 (INDEX | CONTAINER_INDEX | ANR | COPY). 
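If you prefer to make that change from PowerShell rather than ADSIEdit or LDP, here is a sketch. It uses the same placeholder instance GUID and port as earlier in this post, and the same schema-change warnings apply; this is not something to run casually in production.

$server = 'localhost:50000'
$schema = 'CN=Schema,CN=Configuration,CN={A1D2D2A9-7521-4068-9ACC-887EDEE90F91}'

# Read the current searchFlags value from the userPrincipalName attributeSchema object
$attr = Get-ADObject -Filter "lDAPDisplayName -eq 'userPrincipalName'" `
    -SearchBase $schema -Properties searchFlags -Server $server

# Clear only the PRESERVE_ON_DELETE (0x00000008) bit and write the value back
Set-ADObject -Identity $attr.DistinguishedName -Server $server `
    -Replace @{searchFlags = ($attr.searchFlags -band (-bnot 8))}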

 

Once we removed the PRESERVE_ON_DELETE flag, we created and deleted a test account to confirm our modifications changed the tombstone behavior of the userPrincipalName attribute. UPN was no longer maintained!

   

Mystery solved!! I think we all deserve a Scooby Snack now!

   

Nom nom nom!

 

Lessons learned

   

  1. ADRB is a great feature for AD. It can even be useful for AD LDS if you aren't synchronizing with AD. If you are synchronizing with AD, then the benefits of ADRB are limited and in the end it can cause you more problems than it solves.
  2. Manually modifying the schema can have unintended consequences.
  3. PowerShell for AD LDS is not as easy as it is for AD
  4. AD Administrative Center is for AD and not AD LDS
  5. LDP Rocks!

   

This wraps up the "More than you really ever wanted to know about ADAMSync, ADRB & searchFlags" Scooby Doo edition of AskDS. Now, go enjoy your Scooby Snacks!

 
- Kim "That Meddling Kid" Nichols

   

   

Configuring Change Notification on a MANUALLY created Replication partner


Hello. Jim here again to elucidate on the wonderment of change notification as it relates to Active Directory replication within and between sites. As you know, Active Directory replication between domain controllers within the same site (intrasite) happens instantaneously. Active Directory replication between sites (intersite) occurs every 180 minutes (3 hours) by default. You can adjust this frequency to match your specific needs, BUT it can be no faster than fifteen minutes when configured via the AD Sites and Services snap-in.

Back in the old days when remote sites were connected by a string and two soup cans, it was necessary in most cases to carefully consider configuring your replication intervals and times so as not to flood the pipe (or string in the reference above) with replication traffic and bring your WAN to a grinding halt. With dial up connections between sites it was even more important. It remains an important consideration today if your site is a ship at sea and your only connectivity is a satellite link that could be obscured by a cloud of space debris.

Now in the days of wicked fast fiber links and MPLS VPN Connectivity, change notification may be enabled between site links that can span geographic locations. This will make Active Directory replication instantaneous between the separate sites as if the replication partners were in the same site. Although this is well documented on TechNet and I hate regurgitating existing content, here is how you would configure change notification on a site link:

  1. Open ADSIEdit.msc.
  2. In ADSI Edit, expand the Configuration container.
  3. Expand Sites, navigate to the Inter-Site Transports container, and select CN=IP.

    Note: You cannot enable change notification for SMTP links.
  4. Right-click the site link object for the sites where you want to enable change notification, e.g. CN=DEFAULTSITELINK, click Properties.
  5. In the Attribute Editor tab, double click on Options.
  6. If the Value(s) box shows <not set>, type 1.

There is one caveat, however. Change notification will fail with manual connection objects. If your connection objects are not created by the KCC, the change notification setting is meaningless. If it's a manual connection object, it will NOT inherit the Options bit from the Site Link. Enjoy your 15-minute replication latency.

Why would you want to keep connection objects you created manually, anyway? Why don't you just let the KCC do its thing and be happy? Maybe you have a Site Link costing configuration that you would rather not change. Perhaps you are at the mercy of your networking team and the routing of your network, and you must keep these manual connections. If, for whatever reason, you must keep the manually created replication partners, be of good cheer. You can still enjoy the thrill of change notification.

Change Notification on a manually created replication partner is configured by doing the following:

  1. Open ADSIEDIT.msc.
  2. In ADSI Edit, expand the Configuration container.
  3. Navigate to the following location:

    \Sites\SiteName\Server\NTDS settings\connection object that was manually created
  4. Right-click on the manually created connection object name.
  5. In the Attribute Editor tab, double click on Options.
  6. If the value is 0 then set it to 8.

If the value is anything other than zero, you must do some binary math. Relax; this is going to be fun.

On the Site Link object, it's the 1st bit that controls change notification. On the Connection object, however, it's the 4th bit (decimal value 8), shown below represented in binary (you remember binary, don't you???)

Binary Bit:      8th    7th    6th    5th    4th    3rd    2nd    1st
Decimal Value:   128     64     32     16      8      4      2      1

 

NOTE: The values represented by each bit in the Options attribute are documented in the Active Directory Technical Specification. Fair warning! I'm only including that information for the curious. I STRONGLY recommend against setting any of the options NOT discussed specifically in existing documentation or blogs in your production environment.

Remember what I said earlier? If it's a manual connection object, it will NOT inherit the Options value from the Site Link object. You're going to have to enable change notifications directly on the manually created connection object.

Take the value of the Options attribute, let's say it is 16.

Open Calc.exe in Programmer mode, and paste the contents of your options attribute.

Click on Bin, and count over to the 4th bit starting from the right.

That's the bit that controls change notification on your manually created replication partner. As you can see, in this example it is zero (0), so change notifications are disabled.

Convert back to decimal and add 8 to it.

Click on Bin, again.

As you can see above, the bit that controls change notification on the manually created replication partner is now 1. You would then change the Options value in ADSIEDIT from 16 to 24.

Click on Ok to commit the change.
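If you would rather not do the binary math by hand, the same change can be scripted. Here is a sketch using the AD PowerShell module; the connection object DN is a placeholder you would replace with your own, and the same pattern with a value of 1 applies to the Options attribute on the site link object.

# Placeholder DN of the manually created connection object
$dn = 'CN=MyManualConnection,CN=NTDS Settings,CN=DC01,CN=Servers,CN=MySite,CN=Sites,CN=Configuration,DC=contoso,DC=com'

# Read the current options value, treating <not set> as 0
$current = (Get-ADObject -Identity $dn -Properties options).options
if ($null -eq $current) { $current = 0 }

# Set the 4th bit (decimal 8) without disturbing any other bits
Set-ADObject -Identity $dn -Replace @{options = ($current -bor 8)}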

Congratulations! You have now configured change notification on your manually created connection object. This sequence of events must be repeated for each manually created connection object that you want to include in the excitement and instantaneous gratification of change notification. Keep in mind that in the event something (or many things) gets deleted from a domain controller, you no longer have that window of intersite latency to stop inbound replication on a downstream partner and do an authoritative restore. Plan the configuration of change notifications accordingly. Make sure you take regular backups, and test them occasionally!

And when you speak of me, speak well…

Jim "changes aren't permanent, but change is" Tierney

 

Distributed File System Consolidation of a Standalone Namespace to a Domain-Based Namespace


Hello again everyone! David here to discuss a scenario that is becoming more and more popular for administrators of Distributed File System Namespaces (DFSN): consolidation of one or more standalone namespaces that are referenced by a domain-based namespace. Below I detail how this may be achieved.

History: Why create interlinked namespaces?

First, we should quickly review the history of why so many administrators designed interlinked namespaces.

In Windows Server 2003 (and earlier) versions of DFSN, domain-based namespaces were limited to hosting approximately 5,000 DFS folders per namespace. This limitation was simply due to how the Active Directory JET database engine stored a single binary value of an attribute. We now refer to this type of namespace as "Windows 2000 Server Mode". Standalone DFS namespaces (those stored locally in the registry of a single namespace server or server cluster) are capable of approximately 50,000 DFS folders per namespace. Administrators would therefore create thousands of folders in a standalone namespace and then interlink (cascade) it with a domain-based namespace. This allowed for a single, easily identifiable entry point of the domain-based namespace and leveraged the capacity of the standalone namespaces.

"Windows Server 2008 mode" namespaces allow for domain-based namespaces of many thousands of DFS folders per namespace (look here for scalability test results). With many Active Directory deployments currently capable of supporting 2008 mode namespaces, Administrators are wishing to remove their dependency on the standalone namespaces and roll them up into a single domain-based namespace. Doing so will improve referral performance, improve fault-tolerance of the namespace, and ease administration.

How to consolidate the namespaces

Below are the steps required to consolidate one or more standalone namespaces into an existing domain-based namespace. The foremost goal of this process is to maintain identical UNC paths after the consolidation so that no configuration changes are needed for clients, scripts, or anything else that references the current interlinked namespace paths. Because so many design variations exist, you may only require a subset of the operations or you may have to repeat some procedures multiple times. If you are not concerned with maintaining identical UNC paths, then this blog does not really apply to you.

For demonstration purposes, I will perform the consolidation steps on a namespace with the following configuration:

Domain-based Namespace: \\tailspintoys.com\data
DFS folder: "reporting" (targeting the standalone namespace "reporting" below)
Standalone Namespace: \\fs1\reporting
DFS folders: "report####" (totaling 10,000 folders)

Below is what these namespaces look like in the DFS Management MMC.

Domain Namespace DATA:

Standalone Namespace "Reporting" hosted by server "FS1" and has 15,000 DFS folders:

 

For a client to access a file in the "report8000" folder in the current DFS design, the client must access the following path:
\\tailspintoys.com\data\reporting\report8000


Below are the individual elements of that UNC path, with a description of each:

\\tailspintoys.com - Domain

\Data - Domain-based Namespace

\Reporting - Domain-based Namespace folder (and, overlapping, the Standalone Namespace)

\Report8000 - Standalone Namespace folder targeting a file server share


Note the overlap of the domain-based namespace folder "reporting" with the standalone namespace "reporting". Each item in the UNC path is separated by a "\" and is known as a "path component".

In order to preserve the UNC path using a single domain-based namespace, we must leverage the ability of DFSN to host multiple path components within a single DFS folder. Currently, the "reporting" DFS folder of the domain-based namespace refers clients to the standalone namespace that contains DFS folders, such as "report8000", beneath it. To consolidate those folders of the standalone root into the domain-based namespace, we must merge them together.

To illustrate this, below is how the new consolidated "Data" domain-based namespace will be structured for this path:

\\tailspintoys.com - Domain

\Data - Domain-based Namespace

\Reporting\Report8000 - Domain-based Namespace folder targeting a file server share


Notice how the name of the DFS folder is "Reporting\Report8000" and includes two path components separated by a "\". This capability of DFSN is what allows for the creation of any desired path. When users access the UNC path, they ultimately will still be referred to the target file server(s) containing the shared data. "Reporting" is simply a placeholder serving to maintain that original path component.

Step-by-step

Below are the steps and precautions for consolidating interlinked namespaces. It is highly recommended to put a temporary suspension on any administrative changes to the standalone namespace(s).

Assumptions:
The instructions assume that you have already met the requirements for "Windows Server 2008 mode" namespaces and your domain-based namespace is currently running in "Windows 2000 Server mode".

However, if you have not met these requirements and have a "Windows 2000 Server mode" domain-based namespace, these instructions (with modifications) may still be applied *if*, after consolidation, the domain-based namespace configuration data is less than 5 MB in size. If you are unsure of the size, you may run the "dfsutil /root:\\<servername>\<namespace_name> /view" command against the standalone namespace and note the size listed at the top (or bottom) of the output. The reported size will be added to the current size of the domain-based namespace and must not exceed 5 MB. Cease any further actions if you are unsure, or test the operations in a lab environment. Of course, if your standalone namespace was less than 5 MB in size, then why did you create an interlinked namespace to begin with? Eh…I'm not really supposed to ask these questions. Moving on…

Step 1

Export the standalone namespace.

Dfsutil root export \\fs1\reporting c:\exports\reporting_namespace.txt

Step 2

Modify the standalone namespace export file using a text editor capable of search-and-replace operations. Notepad.exe has this capability. This export file will be leveraged later to create the proper folders within the domain-based namespace.

Replace the "Name" element of the standalone namespace with the name of the domain-based namespace and replace the "Target" element to be the UNC path of the domain-based namespace server (the one you will be configuring later in step 6). Below, I highlighted the single "\\FS1\reporting" 'name' element that will be replaced with "\\TAILSPINTOYS.COM\DATA". The single "\\FS1\reporting" element immediately below it will be replaced with "\\DC1\DATA" as "DC1" is my namespace server.


Next, prepend "Reporting\" to the folder names listed in the export. The final result will be as follows:

One trick is to utilize the 'replace' capability of Notepad.exe to search out and replace all instances of the '<Link Name="' string with '<Link Name="folder\' ('<Link Name="Reporting\' in this example). The picture below shows the original folders defined and the 'replace' dialog responsible for changing the names of the folders (click 'Replace all' to replace all occurrences).


Save the modified file with a new filename (reporting_namespace_modified.txt) so as to not overwrite the standalone namespace export file.
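If you would rather script these edits than do them in Notepad, the following PowerShell sketch performs the same three changes using the names from this example (FS1, DC1, TAILSPINTOYS.COM\DATA and the "Reporting" placeholder are all specific to this walkthrough, and the exact layout of the export can vary between dfsutil versions, so open the result and verify it before importing):

# Read the standalone namespace export produced in Step 1
$text = Get-Content C:\exports\reporting_namespace.txt -Raw
$pattern = [regex]::new([regex]::Escape('\\FS1\reporting'), 'IgnoreCase')
# First occurrence = the namespace name; the next occurrence = the namespace server target
$text = $pattern.Replace($text, '\\TAILSPINTOYS.COM\DATA', 1)
$text = $pattern.Replace($text, '\\DC1\DATA', 1)
# Prepend the "Reporting\" placeholder to every DFS folder (link) name
$text = $text.Replace('<Link Name="', '<Link Name="Reporting\')
# Write the modified copy back out as Unicode text
Set-Content -Path C:\exports\reporting_namespace_modified.txt -Value $text -Encoding Unicode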

Step 3

Export the domain-based namespace
dfsutil root export \\tailspintoys.com\data c:\exports\data_namespace.txt

Step 4

Open the output file from Step 3 and delete the link that is being consolidated ("Reporting"):

Save the file as a separate file (data_namespace_modified.txt). This export will be utilized to recreate the *other* DFS folders within the "Windows Server 2008 Mode" domain-based namespace that do not require consolidation.

Step 5

This critical step involves deleting the existing domain-based namespace. This is required for the conversion from "Windows 2000 Server Mode" to "Windows Server 2008 Mode".

Delete the domain-based namespace ("DATA" in this example).

Step 6

Recreate the "DATA" namespace, specifying the mode as "Windows Server 2008 mode". Specify the namespace server to be a namespace server with close network proximity to the domain's PDC. This will significantly decrease the time it takes to import the DFS folders. Additional namespace servers may be added any time after Step 8.

Step 7

Import the modified export file created in Step 4:
dfsutil root import merge data_namespace_modified.txt \\tailspintoys.com\data

In this example, this creates the "Business" and "Finance" DFS folders:

Step 8

Import the modified namespace definition file created in Step 2 to create the required folders (note that this operation may take some time depending on network latencies and other factors):
dfsutil root import merge reporting_namespace_modified.txt \\tailspintoys.com\DATA


Step 9

Verify the structure of the namespace:

Step 10

Test the functionality of the namespace. From a client or another server, run the "dfsutil /pktflush" command to purge cached referral data and attempt access to the DFS namespace paths. Alternately, you may reboot clients and attempt access if they do not have dfsutil.exe available.
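For example, from a test client the sequence might look like this (dfsutil.exe is available wherever the DFS Management tools are installed):

dfsutil /pktflush
dir \\tailspintoys.com\data\reporting\report8000
dfsutil /pktinfo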

Below is the result of accessing the "report8000" folder path via the new namespace:


Referral cache confirms the new namespace structure (red line highlighting the name of the DFS folder as "reporting\report8000"):


At this point, you should have a fully working namespace. If something is not working quite right or there are problems accessing the data, you may return to the original namespace design by deleting all DFS folders in the new domain-based namespace and importing the original namespace from the export file (or recreating the original folders by hand). At no time did we alter the standalone namespaces, so returning to the original interlinked configuration is very easy to accomplish.

Step 11

Add the necessary namespace servers to the domain-based namespace to increase fault tolerance.

Notify all previous administrators of the standalone namespace(s) that they will need to manage the domain-based namespace from this point forward. Once you are confident in the new namespace, the original standalone namespace(s) may be retired at any time (assuming no systems on the network are using UNC paths directly to the standalone namespace).

Namespace already in "Windows Server 2008 mode"?

What would the process be if the domain-based namespace is already running in "Windows Server 2008 mode"? Or, you have already run through the operations once and wish to consolidate additional DFS folders? Some steps remain the same while others are skipped entirely:
  • Steps 1-2: Same as detailed previously (export the standalone namespace and modify the export file)
  • Step 3: Export the domain-based namespace for backup purposes
  • Step 4: Delete the DFS folder targeting the standalone namespace; the remainder of the domain-based namespace will remain unchanged
  • Step 8: Import the modified file created in Step 2 to the domain-based namespace
  • Steps 9-10: Verify the structure and function of the namespace

Caveats and Concerns

Ensure that no data exists in the original standalone namespace server's namespace share. Because clients are no longer using the standalone namespace, the "reporting" path component exists as a subfolder within each domain-based namespace server's share. Furthermore, hosting data within the namespace share (domain-based or standalone) is not recommended. If this applies to you, consider moving such data into a separate folder within the new namespace and update any references to those files used by clients.

These operations should be performed during a maintenance window, the length of which is dictated by your efficiency in performing the operations and the length of time it takes to import the DFS namespace export file. Because a namespace is so easily built, modified, and deleted, you may wish to consider a "dry run" of sorts. Prior to deleting your production namespace(s), create a new test namespace (e.g. "DataTEST"), modify your standalone namespace export file (Step 2) to reference this "DataTEST" namespace, and try the import. Because you are using a separate namespace, no changes will occur to any other production namespaces. You may gauge the time required for the import, and more importantly, test access to the data (\\tailspintoys.com\DataTEST\Reporting\Reporting8000 in my example). If access to the data is successful, then you will have confidence in replacing the real domain-based namespace.

Clients should not be negatively affected by the restructuring as they will discover the new hierarchy automatically. By default, clients cache namespace referrals for 5 minutes and folder referrals for 30 minutes. It is advisable to keep the standalone namespace(s) operational for at least an hour or so to accommodate transition to the new namespace, but it may remain in place for as long as you wish.

If you decommission the standalone namespace and find some clients are still using it directly, you could easily recreate the standalone namespace from our export in Step 1 while you investigate the client configurations and remove their dependency on it.

Lastly, if you are taking the time and effort to recreate the namespace for "Windows Server 2008 mode" support, you might as well consider configuring the targets of the DFS folders with DNS names (modify the export files) and also implementing DFSDnsConfig on the namespace servers.
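On Windows Server 2012 and later namespace servers, the DFSN PowerShell module exposes the same setting; this is a sketch only (run it per namespace server, and note that the DFS Namespace service typically needs a restart before it takes effect):

Set-DfsnServerConfiguration -ComputerName DC1 -UseFqdn $true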

I hope this blog eliminates some of the concerns and fears of consolidating interlinked namespaces!

Dave "King" Fisher

Circle Back to Loopback


Hello again!  Kim Nichols here again.  For this post, I'm taking a break from the AD LDS discussions (hold your applause until the end) and going back to a topic near and dear to my heart - Group Policy loopback processing.

Loopback processing is not a new concept to Group Policy, but it still causes confusion for even the most experienced Group Policy administrators.

This post is the first part of a two part blog series on User Group Policy Loopback processing.

  • Part 1 provides a general Group Policy refresher and introduces Loopback processing
  • Part 2 covers Troubleshooting Group Policy loopback processing

Hopefully these posts will refresh your memory and provide some tips for troubleshooting Group Policy processing when loopback is involved.

Part 1: Group Policy and Loopback processing refresher

Normal Group Policy Processing

Before we dig in too deeply, let's quickly cover normal Group Policy processing.  Thinking back to when we first learned about Group Policy processing, we learned that Group Policy
applies in the following order: 

  1. Local Group Policy
  2. Site
  3. Domain
  4. OU

You may have heard Active Directory “old timers” refer to this as LSDOU.  As a result of LSDOU, settings from GPOs linked closest (lower in OU structure) to the user take precedence over those linked farther from the user (higher in OU structure). GPO configuration options such as Block Inheritance and Enforced (previously called No Override for you old school admins) can modify processing as well, but we will keep things simple for the purposes of this example.  Normal user group policy processing applies user settings from GPOs linked to the Site, Domain, and OU containing the user object regardless of the location of the computer object in Active Directory.

Let's use a picture to clarify this.  For this example, the user is in the "E" OU and the computer is in the "G" OU of the contoso.com domain.

Following normal group policy processing rules (assuming all policies apply to Authenticated Users with no WMI filters or "Block Inheritance" or "Enforced" policies), user settings of Group Policy objects apply in the following order:

  1. Local Computer Group Policy
  2. Group Policies linked to the Site
  3. Group Policies linked to the Domain (contoso.com)
  4. Group Policies linked to OU "A"
  5. Group Policies linked to OU "B"
  6. Group Policies linked to OU "E"

That’s pretty straightforward, right?  Now, let’s move on to loopback processing!

What is loopback processing?

Group Policy loopback is a computer configuration setting that enables different Group Policy user settings to apply based upon the computer from which logon occurs. 

Breaking this down a little more:

  1. It is a computer configuration setting. (Remember this for later)
  2. When enabled, user settings from GPOs applied to the computer apply to the logged on user.
  3. Loopback processing changes the list of applicable GPOs and the order in which they apply to a user. 

Why would I use loopback processing?

Administrators use loopback processing in kiosk, lab, and Terminal Server environments to provide a consistent user experience across all computers regardless of the GPOs linked to the user's OU. 

Our recommendation for loopback is similar to our recommendations for WMI filters, Block Inheritance and policy Enforcement: use them sparingly.  All of these configuration options modify the default processing of policy and thus make your environment more complex to troubleshoot and maintain. As I've mentioned in other posts, whenever possible, keep your designs as simple as possible. You will save yourself countless nights/weekends/holidays in the office because you will be able to identify configuration issues more quickly and easily.

How to configure loopback processing

The loopback setting is located under Computer Configuration/Administrative Templates/System/Group Policy in the Group Policy Management Editor (GPME). 

Use the policy setting Configure user Group Policy loopback processing mode to configure loopback in Windows 8 and Windows Server 2012.  Earlier versions of Windows have the same policy setting under the name User Group Policy loopback processing mode.  The screenshot below is from the Windows 8 version of the GPME.

When you enable loopback processing, you also have to select the desired mode.  There are two modes for loopback processing:  Merge or Replace.
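If you prefer to configure this with PowerShell rather than the GPME, the GroupPolicy module can write the same setting into a GPO. The sketch below is an assumption-laden shortcut: the GPO name "Loopback-Merge" is made up, and the registry key and UserPolicyMode value (1 = Merge, 2 = Replace) reflect my understanding of where this policy is stored, so verify the result in the GPME before deploying it.

Import-Module GroupPolicy
# Write the loopback policy (Merge mode) into a GPO named "Loopback-Merge" (hypothetical name)
Set-GPRegistryValue -Name "Loopback-Merge" -Key "HKLM\Software\Policies\Microsoft\Windows\System" -ValueName "UserPolicyMode" -Type DWord -Value 1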

Loopback Merge vs. Replace

Prior to the start of user policy processing, the Group Policy engine checks to see if loopback is enabled and, if so, in which mode.

We'll start off with an explanation of Merge mode since it builds on our existing knowledge of user policy processing.

Loopback Merge

During loopback processing in merge mode, user GPOs process first (exactly as they do during normal policy processing), but with an additional step.  Following normal user policy processing, the Group Policy engine applies user settings from GPOs linked to the computer's OU.  The result: the user receives all user settings from GPOs applied to the user and all user settings from GPOs applied to the computer. The user settings from the computer’s GPOs win any conflicts since they apply last.

To illustrate loopback merge processing and conflict resolution, let’s use a simple chart.  The chart shows us the “winning” configuration in each of three scenarios:

  • The same user policy setting is configured in GPOs linked to the user and the computer
  • The user policy setting is only configured in a GPO linked to the user’s OU
  • The user policy setting is only configured in a GPO linked to the computer’s OU

Now, going back to our original example, loopback processing in Merge mode applies user settings from GPOs linked to the user’s OU followed by user settings from GPOs linked to the computer’s OU.

GPOs for the user in OU ”E” apply in the following order (the first part is identical to normal user policy processing from our original example):

  1. Local Group Policy
  2. Group Policy objects linked to the Site
  3. Group Policy objects linked to the Domain
  4. Group Policy objects linked to OU "A"
  5. Group Policy objects linked to OU "B"
  6. Group Policy objects linked to OU "E"
  7. Group Policy objects linked to the Site
  8. Group Policy objects linked to the Domain
  9. Group Policy objects linked to OU "A"
  10. Group Policy objects linked to OU "C"
  11. Group Policy objects linked to OU "G"

Loopback Replace

Loopback replace is much easier. During loopback processing in replace mode, the user settings applied to the computer “replace” those applied to the user.  In actuality, the Group Policy service skips the GPOs linked to the user’s OU. Group Policy effectively processes as if the user object were in the OU of the computer rather than its current OU. 

The chart for loopback processing in replace mode shows that settings “1” and “2” do not apply since all user settings linked to the user’s OU are skipped when loopback is configured in replace mode.

Returning to our example of the user in the “E” OU, loopback processing in replace mode skips normal user policy processing and only applies user settings from GPOs linked to the computer.

The resulting processing order is: 

  1. Local Group Policy
  2. Group Policy objects linked to the Site
  3. Group Policy objects linked to the Domain
  4. Group Policy objects linked to OU "A"
  5. Group Policy objects linked to OU "C"
  6. Group Policy objects linked to OU "G"

Recap

  1. User Group Policy loopback processing is a computer configuration setting.
  • Loopback processing is not specific to the GPO in which it is configured. If we think back to what an Administrative Template policy is, we know it is just configuring a registry value.  In the case of the loopback policy processing setting, once this registry setting is configured, the order and scope of user group policy processing for all users logging on to the computer is modified per the mode chosen: Merge or Replace.
  • Merge mode applies GPOs linked to the user object first, followed by GPOs with user settings linked to the computer object. 
    • The order of processing determines the precedence. GPOs with user settings linked to the computer object apply last and therefore have a higher precedence than those linked to the user object.
    • Use merge mode in scenarios where you need users to receive the settings they normally receive, but you want to customize or make changes to those settings when they logon to specific computers.
  • Replace mode completely skips Group Policy objects linked in the path of the user and only applies user settings in GPOs linked in the path of the computer.  Use replace mode when you need to disregard all GPOs that are linked in the path of the user object.
Those are the basics of user group policy loopback processing. In my next post, I'll cover the troubleshooting process when loopback is enabled.

    Kim “Why does it say paper jam, when there is no paper jam!?” Nichols

     


    AD FS 2.0 Claims Rule Language Part 2


    Hello, Joji Oshima here to dive deeper into the Claims Rule Language for AD FS. A while back I wrote a getting started post on the claims rule language in AD FS 2.0. If you haven't seen it, I would start with that article first as I'm going to build on the claims rule language syntax discussed in that earlier post. In this post, I'm going to cover more complex claim rules using Regular Expressions (RegEx) and how to use them to solve real world issues.

    An Introduction to Regex

    The use of RegEx allows us to search or manipulate data in many ways in order to get a desired result. Without RegEx, when we do comparisons or replacements we must look for an exact match. Most of the time this is sufficient but what if you need to search or replace based on a pattern? Say you want to search for strings that simply start with a particular word. RegEx uses pattern matching to look at a string with more precision. We can use this to control which claims are passed through, and even manipulate the data inside the claims.

    Using RegEx in searches

    Using RegEx to pattern match is accomplished by changing the standard double equals "==" to "=~" and by using special metacharacters in the condition statement. I'll outline the more commonly used ones, but there are good resources available online that go into more detail. For those of you unfamiliar with RegEx, let's first look at some common RegEx metacharacters used to build pattern templates and what the result would be when using them.

    Symbol: ^
    Operation: Match the beginning of a string
    Example rule:
    c:[type == "http://contoso.com/role", Value =~ "^director"]
    => issue (claim = c);
    Result: Pass through any role claims that start with "director"

    Symbol: $
    Operation: Match the end of a string
    Example rule:
    c:[type == "http://contoso.com/email", Value =~ "contoso.com$"]
    => issue (claim = c);
    Result: Pass through any email claims that end with "contoso.com"

    Symbol: |
    Operation: OR
    Example rule:
    c:[type == "http://contoso.com/role", Value =~ "^director|^manager"]
    => issue (claim = c);
    Result: Pass through any role claims that start with "director" or "manager"

    Symbol: (?i)
    Operation: Not case sensitive
    Example rule:
    c:[type == "http://contoso.com/role", Value =~ "(?i)^director"]
    => issue (claim = c);
    Result: Pass through any role claims that start with "director" regardless of case

    Symbol: x.*y
    Operation: "x" followed by "y"
    Example rule:
    c:[type == "http://contoso.com/role", Value =~ "(?i)Seattle.*Manager"]
    => issue (claim = c);
    Result: Pass through any role claims that contain "Seattle" followed by "Manager", regardless of case

    Symbol: +
    Operation: Match the preceding character one or more times
    Example rule:
    c:[type == "http://contoso.com/employeeId", Value =~ "^0+"]
    => issue (claim = c);
    Result: Pass through any employeeId claims that start with at least one "0"

    Symbol: *
    Operation: Match the preceding character zero or more times
    Example rule: Similar to the "+" example above; more useful in RegExReplace() scenarios.

     

    Using RegEx in string manipulation

    RegEx pattern matching can also be used in replacement scenarios. It is similar to a "find and replace", but using pattern matching instead of exact values. To use this in a claim rule, we use the RegExReplace() function in the value section of the issuance statement.

    The RegExReplace() function accepts three parameters.

    1. The first is the string in which we are searching.
      1. We will typically want to search the value of the incoming claim (c.Value), but this could be a combination of values (c1.Value + c2.Value).
    2. The second is the RegEx pattern we are searching for in the first parameter.
    3. The third is the string value that will replace any matches found.

    Example:

    c:[type == "http://contoso.com/role"]
    => issue (Type = "http://contoso.com/role", Value = RegExReplace(c.Value, "(?i)director", "Manager"));

     

    Pass through any role claims. If any of the claims contain the word "Director", RegExReplace() will change it to "Manager". For example, "Director of Finance" would pass through as "Manager of Finance".

     

    Real World Examples

    Let's look at some real world examples of regular expressions in claims rules.

    Problem 1:

    We want to add claims for all group memberships, including distribution groups.

    Solution:

    Typically, group membership is added by using the wizard, selecting Token-Groups Unqualified Names, and mapping it to the Group or Role claim. This will only pull security groups, not distribution groups, and will not contain Domain Local groups.

    We can pull from memberOf, but that will give us the entire distinguished name, which is not what we want. One way to solve this problem is to use three separate claim rules and use RegExReplace() to remove unwanted data.

    Phase 1: Pull memberOf, add to working set "phase 1"

     

    c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
    => add(store = "Active Directory", types = ("http://test.com/phase1"), query = ";memberOf;{0}", param = c.Value);

    Example: "CN=Group1,OU=Users,DC=contoso,DC=com" is put into a phase 1 claim.

     

    Phase 2: Drop everything after the first comma, add to working set "phase 2"

     

    c:[Type == "http://test.com/phase1"]
    => add(Type = "http://test.com/phase2", Value = RegExReplace(c.Value, ",[^\n]*", ""));

    Example: We process the value in the phase 1 claim and put "CN=Group1" into a phase 2 claim.

     

    Digging Deeper: RegExReplace(c.Value, ",[^\n]*", "")

    • c.Value is the value of the phase 1 claim. This is what we are searching in.
    • ",[^\n]*" is the RegEx syntax used to find the first comma, plus everything after it
    • "" is the replacement value. Since there is no string, it effectively removes any matches.

     

    Phase 3: Drop CN= at the beginning, add to outgoing claim set as the standard role claim

     

    c:[Type == "http://test.com/phase2"]

    => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", Value = RegExReplace(c.Value, "^CN=", ""));

    Example: We process the value in phase 2 claim and put "Group1" into the role claim

    Digging Deeper: RegExReplace(c.Value, "^CN=", "")

    • c.Value is the value of the phase 2 claim. This is what we are searching in.
    • "^CN=" is the RegEx syntax used to find "CN=" at the beginning of the string.
    • "" is the replacement value. Since there is no string, it effectively removes any matches.

     

    Problem 2:

    We need to compare the values in two different claims and only allow access to the relying party if they match.

    Solution:

    In this case we can use RegExReplace(). This is not the typical use of this function, but it works in this scenario. The function will attempt to match the pattern in the first data set with the second data set. If they match, it will issue a new claim with the value of "Yes". This new claim can then be used to grant access to the relying party. That way, if these values do not match, the user will not have this claim with the value of "Yes".

     

    c1:[Type == "http://adatum.com/data1"] &&

    c2:[Type == "http://adatum.com/data2"]

    => issue(Type = "http://adatum.com/UserAuthorized", Value = RegExReplace(c1.Value, c2.Value, "Yes"));

     

    Example: If there is a data1 claim with the value of "contoso" and a data2 claim with a value of "contoso", it will issue a UserAuthorized claim with the value of "Yes". However, if data1 is "adatum" and data2 is "fabrikam", it will issue a UserAuthorized claim with the value of "adatum".

     

    Digging Deeper: RegExReplace(c1.Value, c2.Value, "Yes")

    • c1.Value is the value of the data1 claim. This is what we are searching in.
    • c2.Value is the value of the data2 claim. This is what we are searching for.
    • "Yes" is the replacement value. Only if c1.Value & c2.Value match will there be a pattern match and the string will be replaced with "Yes". Otherwise the claim will be issued with the value of the data1 claim.

     

    Problem 3:

    Let's take a second look at a potential issue with our solution to problem 2. Since we are using the value of one of the claims as the RegEx syntax, we must be careful to check for certain RegEx metacharacters that would make the comparison mean something different. The backslash is used in some RegEx metacharacters, so any backslashes in the values will throw off the comparison and it will always fail, even if the values match.

    Solution:

    In order to ensure that our matching claim rule works, we must sanitize the input values by removing any backslashes before doing the comparison. We can do this by taking the data that would go into the initial claims, put it in a holding attribute, and then use RegEx to strip out the backslash. The example below only shows the sanitization of data1, but it would be similar for data2.

    Phase 1: Pull attribute1, add to holding attribute "http://adatum.com/data1holder"

     

    c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]

    => add(store = "Active Directory", types = ("http://adatum.com/data1holder"), query = ";attribute1;{0}", param = c.Value);

    Example: The value in attribute 1 is "Contoso\John" which is placed in the data1holder claim.

     

    Phase 2: Strip the backslash from the holding claim and issue the new data1 claim

     

    c:[Type == "http://adatum.com/data1holder", Issuer == "AD AUTHORITY"]

    => issue(type = "http://adatum.com/data1", Value = RegExReplace(c.Value,"\\",""));

    Example: We process the value in the data1holder claim and put "ContosoJohn" in a data1 claim

    Digging Deeper: RegExReplace(c.Value,"\\","")

    • c.Value is the value of the data1holder claim. This is what we are searching in.
    • "\\" is considered a single backslash. In RegEx, a backslash in front of a character escapes it, so "\\" matches a literal backslash.
    • "" is the replacement value. Since there is no string, it effectively removes any matches.

     

    An alternate solution would be to pad each backslash in the data2 value with a second backslash. That way each backslash would be represented as a literal backslash. We could accomplish this by using RegExReplace(c.Value,"\\","\\") against a data2 input value.

     

    Problem 4:

    Employee numbers vary in length, but we need to have exactly 9 characters in the claim value. Employee numbers that are shorter than 9 characters should be padded in the front with leading zeros.

    Solution:

    In this case we can create a buffer claim, join that with the employee number claim, and then use RegEx to use the right most 9 characters of the combined string.

    Phase 1: Create a buffer claim to create the zero-padding

     

    => add(Type = "Buffer", Value = "000000000");

     

    Phase 2: Pull the employeeNumber attribute from Active Directory, place it in a holding claim

     

    c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]

    => add(store = "Active Directory", types = ("ENHolder"), query = ";employeeNumber;{0}", param = c.Value);

     

    Phase 3: Combine the two values, then use RegEx to remove all but the 9 right most characters.

     

    c1:[Type == "Buffer"]

    && c2:[Type == "ENHolder"]

    => issue(Type = "http://adatum.com/employeeNumber", Value = RegExReplace(c1.Value + c2.Value, ".*(?=.{9}$)", ""));

    Digging Deeper: RegExReplace(c1.Value + c2.Value, ".*(?=.{9}$)", "")

    • c1.Value + c2.Value is the employee number prefixed with the nine-zero buffer. This is what we are searching in.
    • ".*(?=.{9}$)" represents the last nine characters of a string. This is what we are searching for. We could replace the 9 with any number and have it represent the last "X" number of characters.
    • "" is the replacement value. Since there is no string, it effectively removes any matches.

     

    Problem 5:

    Employee numbers contain leading zeros but we need to remove those before sending them to the relying party.

    Solution:

    In this case we can pull the employee number from Active Directory and place it in a holding claim, then use RegEx to strip out any leading zeros.

    Phase 1: Pull the employeeNumber attribute from Active Directory, place it in a holding claim

     

    c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]

    => add(store = "Active Directory", types = ("ENHolder"), query = ";employeeNumber;{0}", param = c.Value);

     

    Phase 2: Take the value in ENHolder and remove any leading zeros.

     

    c:[Type == "ENHolder"]

    => issue(Type = "http://adatum.com/employeeNumber", Value = RegExReplace(c.Value, "^0*", ""));

    Digging Deeper: RegExReplace(c.Value, "^0*", "")

    • c.Value is the employee number. This is what we are searching in.
    • "^0*" finds any leading zeros. This is what we are searching for. If we only had ^0 it would only match a single leading zero. If we had 0* it would find any zeros in the string.
    • "" is the replacement value. Since there is no string, it effectively removes any matches.

     

    Conclusion

    As you can see, RegEx adds powerful functionality to the claims rule language. It has a high initial learning curve, but once you master it you will find that there are few scenarios that RegEx can't solve. I would highly recommend searching for an online RegEx syntax tester as it will make learning and testing much easier. I'll continue to expand the TechNet wiki article so I would check there for more details on the claims rule language.

    Understanding Claim Rule Language in AD FS 2.0

    AD FS 2.0: Using RegEx in the Claims Rule Language

    Regular Expression Syntax

    AD FS 2.0 Claims Rule Language Primer

    Until next time,

    Joji "Claim Jumper" Oshima

    We're back. Did you miss us?


    Hey all, David here.  Now that we’ve broken the silence, we here on the DS team felt that we owed you, dear readers, an explanation of some sort.  Plus, we wanted to talk about the blog itself, some changes happening for us, and what you should hopefully be able to expect moving forward.

     

    So, what had happened was….

    As most of you know, a few months ago our editor-in-chief and the butt of many jokes here on the DS support team moved to a new position.  We have it on good authority that he is thoroughly terrorizing many of our developers in Redmond with scary words like “documentation”, “supportability”, and other Chicago-style aphorisms which are best not repeated in print.

    Unfortunately for us and for this blog, that left us with a little bit of a hole in the editing team!  The folks left behind might have been superheroes, but the problem with being a superhero is that you get called on to go save the world (or a customer with a crisis) all the time, and that doesn’t leave much time for picking up your cape from the dry cleaners, let alone keeping up with editing blog submissions, doing mail sacks, and generally keeping the blog going.

    At the same time, we had a bit of a reorganization internally.  Where we were formerly one team within support, we are now two teams – DS (Directory Services) and ID (Identity).  Why the distinction?  Well, you may have heard about this Cloud thing…. But that’s a story best told by technical blog posts, really.  For now, let’s just say the scope of some of what we do expanded last year from “a lot of people use it” to “internet scale”.  Pretty scary when you think about it.

    Just to make things even more confusing, about a month ago we were officially reunited with our long-lost (and slightly insane, but in a good way) brethren in the field engineering organization.  That’s right, our two orgs have been glommed[1] together into one giant concentration of support engineering superpower.  While it’s opening up some really cool stuff that we have always wanted to do but couldn’t before, it’s still the equivalent of waking up one day and finding out that all of those cousins you see every few years at family reunions are coming to live with you.  In your house.  Oh, and they’re bringing their dog.

    Either way, the net effect of all this massive change was that we sort of got quiet for a few months.  It wasn’t you, honest.  It was us.

     

    What to Expect

    It’s important to us that we keep this blog current with detailed, pertinent technical info that helps you resolve issues that you might encounter, or even just helps you understand how our parts of Windows work a bit better.  So, we’re picking that torch back up and we’ll be trying to get several good technical posts up each month for you.  You may also see some shorter posts moving forward.  The idea is to break up the giant articles and try to get some smaller, useful-to-know things out there every so often.  Internally, we’re calling the little posts “DS Quickies” but no promises on whether we’ll actually give you that as a category to search on.  Yes, we’re cruel like that.  You’ll also see the return of the mail sack at some point in the near future, and most importantly you’re going to see some new names showing up as writers.  We’ve put out the call, and we’re planning to bring you blog posts written not just by our folks in the Americas, but also in Europe and Asia.  You can probably also expect some guest posts from our kin in the PFE organization, when they have something specific to what we do that they want to talk about.

    At the same time, we’re keen on keeping the stuff that makes our blog useful and fun.  So you can continue to expect technical depth, detailed analysis, plain-English explanations, and occasional irreverent, snarky humor.  We’re not here to tell you why you should buy Windows clients or servers (or phones, or tablets) – we have plenty of marketing websites that do that better than we ever could.  Instead, we’re here to help you understand how Windows works and how to fix problems when they occur.  Although we do reserve the right to post blatant wackiness or fun things every so often too.  Look, we don’t get out much, ok?  This is our outlet.  Just go with us on this.

    Finally, you’re going to see me personally posting a bit more, since I’ve taken over as the primary editor for the site.  I know - I tried to warn them what would happen, but they still gave me the job all the same.  Jokes aside, I feel like it’s important that our blog isn’t just an encyclopedia of awesome technical troubleshooting, but also that it showcases the fact that we’re real people doing our best to make the IT world a better place, as sappy as that sounds. (Except for David Fisher– I’m convinced he’s really a robot).  I have a different writing style than Ned and Jonathan, and a different sense of humor, but I promise to contain myself as much as possible.  :-)

    Sound good?  We hope so.  We’re going to go off and write some more technical stuff now – in fact:  On deck for next week:  A followup to Kim’s blog on Loopback Policy Processing.

    We wanted to leave you with a funny video that’s safe for work to help kick off the weekend, but alas our bing fu was weak today.  Got a good one to share?  Feel free to link it for us in the comments!


    [1] “Glom” is a technical term, by the way, not a managerial one.  Needless to say, hijinks are continuing to ensue.

     

    -- David "Capes are cool" Beach

    Back to the Loopback: Troubleshooting Group Policy loopback processing, Part 2


    Welcome back!  Kim Nichols here once again with the much anticipated Part 2 to Circle Back to Loopback.  Thanks for all the comments and feedback on Part 1.  For those of you joining us a little late in the game, you'll want to check out Part 1: Circle Back to Loopback before reading further.

    In my first post, the goal was to keep it simple.  Now, we're going to go into a little more detail to help you identify and troubleshoot Group Policy issues related to loopback processing.  If you follow these steps, you should be able to apply what you've learned to any loopback scenario that you may run into (assuming that the environment is healthy and there are no other policy infrastructure issues).

    To troubleshoot loopback processing you need to know and understand:

    1. The status of the loopback configuration.  Is it enabled, and if so, in which mode?
    2. The desired state configuration vs. the actual state configuration of applied policy
    3. Which settings from which GPOs are "supposed" to be applied?
    4. To whom should the settings apply or not apply?
      1. The security filtering requirements when using loopback
      2. Is the loopback setting configured in the same GPO or a separate GPO from the user settings?
      3. Are the user settings configured in a GPO with computer settings?

    What you need to know:

    Know if loopback is enabled and in which mode

    The first step in troubleshooting loopback is to know that it is enabled.  It seems pretty obvious, I know, but often loopback is enabled by one administrator in one GPO without understanding that the setting will impact all computers that apply the GPO.  This gets back to Part 1 of this blog . . . loopback processing is a computer configuration setting. 

    Take a deep cleansing breath and say it again . . . Loopback processing is a computer configuration setting.  :-)

    Everyone feels better now, right?  The loopback setting configures a registry value on the computer to which it applies.  The Group Policy engine reads this value and changes how it builds the list of applicable user policies based on the selected loopback mode.

    The easiest way to know if loopback might be causing troubles with your policy processing is to collect a GPResult /h from the computer.  Since loopback is a computer configuration setting, you will need to run GPResult from an administrative command prompt.
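    For example, from an elevated prompt on the computer in question you can grab the report and, if you like, peek at the registry value the loopback policy writes (the UserPolicyMode value name and meanings below are my understanding of how the setting is stored: 1 = Merge, 2 = Replace, no value = loopback not configured):

    # Full HTML report of applied computer and user policy
    gpresult /h C:\Temp\LoopbackReport.html /f
    # Quick check of the loopback registry value read by the Group Policy engine
    Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\System' -Name UserPolicyMode -ErrorAction SilentlyContinue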

     

     

    The good news is that the GPResult output will show you the winning GPO with loopback enabled.  Unfortunately, it does not list all GPOs with loopback configured, just the one with the highest precedence. 

    If your OU structure separates users from computers, the GPResult output can also help you find GPOs containing user settings that are linked to computer OUs.  Look for GPOs linked to computer OUs under the Applied GPOs section of the User Details of the GPResult output. 

    Below is an example of the output of the GPResult /h command from a Windows Server 2012 member server.  The layout of the report has changed slightly going from Windows Server 2008 to Windows Server 2012, so your results may look different, but the same information is provided by previous versions of the tool.  Notice that the link location includes the Computers OU, but we are in the User Details section of the report.  This is a good indication that we have loopback enabled in a GPO linked in the path of the computer account. 

     

       
    Understand the desired state vs. the actual state

    This one also sounds obvious, but in order to troubleshoot you have to know and understand exactly which settings you are expecting to apply to the user.  This is harder than it sounds.  In a lab environment where you control everything, it's pretty easy to keep track of desired configuration.  However, in a production environment with potentially multiple delegated GPO admins, this is much more difficult. 

    GPResult gives us the actual state, but if you don't know the desired state at the setting level, then you can't reasonably determine if loopback is configured correctly (meaning you have WMI filters and/or security filtering set properly to achieve your desired configuration). 

         
    Review security filtering on GPOs

    Once you determine which GPOs or which settings are not applying as expected, then you have a place to start your investigation. 

    In our experience here in support, loopback processing issues usually come down to incorrect security filtering, so rule that out first.

    This is where things get tricky . . . If you are configuring custom security filtering on your GPOs, loopback can get confusing quickly.  As a general rule, you should try to keep your WMI and security filtering as simple as possible - but ESPECIALLY when loopback is involved.  You may want to consider temporarily unlinking any WMI filters for troubleshooting purposes.  The goal is to ensure the policies you are expecting to apply are actually applying.  Once you determine this, then you can add your WMI filters back into the equation.  A test environment is the best place to do this type of investigation.

    Setting up security filtering correctly depends on how you architect your policies:

    1. Did you enable loopback in its own GPO or in a GPO with other computer or user settings?
    2. Are you combining user settings and computer settings into the same GPO(s) linked to the computer's OU?

    The thing to keep in mind is that if you have what I would call "mixed use" GPOs, then your security filtering has to accommodate all of those uses.  This is only a problem if you remove Authenticated Users from the security filter on the GPO containing the user settings.  If you remove Authenticated Users from the security filter, then you have to think through which settings you are configuring, in which GPOs, to be applied to which computers and users, in which loopback mode....

    Ouch.  That's LOTS of thinking!

    So, unless that sounds like loads of fun to you, it’s best to keep WMI and security filtering as simple as possible.  I know that you can’t always leave Authenticated Users in place, but try to think of alternative solutions before removing it when loopback is involved. 

    Now to the part that everyone always asks about once they realize their current filter is wrong – How the heck should I configure the security filter?!

     

    Security filtering requirements:

    1. The computer account must have READ and APPLY permissions to the GPO that contains the loopback configuration setting.
    2. If you are configuring user settings in the same GPO as computer settings, then the user and computer accounts will both need READ and APPLY permissions to the GPO since there are portions of the GPO that are applicable to both.
    3. If the user settings are in a separate GPO from the loopback configuration setting (#1 above) and any other computer settings (#2 above), then the GPO containing the user settings requires the following permissions:  

     

    Merge mode requirements (Vista+):

    • User account: READ and APPLY (these are the default permissions applied when you add users to the Security Filtering section on the GPO's Scope tab in GPMC)
    • Computer account: minimum of READ permission

    Replace mode requirements:

    • User account: READ and APPLY (the same defaults as above)
    • Computer account: no permissions are required

      

     

    Tools for Troubleshooting

    The number one tool for troubleshooting loopback processing is your GPRESULT output and a solid understanding of the security filtering requirements for loopback processing in your GPO architecture (see above).

    The GPRESULT will tell you which GPOs applied to the user.  If a specific GPO failed to apply, then you need to review the security filtering on that GPO and verify:

    • The user has READ and APPLY permissions.
    • Depending on your GPO architecture, the computer may need READ, or it may need READ and APPLY if you combined computer and user settings in the same GPO.

    The same strategy applies if you have mysterious policy settings applying after configuring loopback and you are not sure why.  Use your GPRESULT output to identify which GPO(s) the policy settings are coming from and then review the security filtering of those GPOs. 

    The Group Policy Operational logs from the computer will also tell you which GPOs were discovered and applied, but this is the same information that you will get from GPRESULT.

    Recommendations for using loopback

    After working my fair share of loopback-related cases, I've collected a list of recommendations for using loopback.  This isn’t an official list of "best practices", but rather just some personal recommendations that may make your life easier.  ENJOY!

    I'll start with what is fast becoming my mantra: Keep it Simple.  Pretty much all of my recommendations can come back to this point.

     

    1. Don't use loopback  :-) 

    OK, I know, not realistic.  How about this . . . Don't use loopback unless you absolutely have to. 

    • I say this not because there is something evil about loopback, but rather because loopback complicates how you think about Group Policy processing.  Loopback tends to be configured and then forgotten about until you start seeing unexpected results. 

    2. Use a separate GPO for the loopback setting; ONLY include the loopback setting in this GPO, and do not include the user settings.  Name it Loopback-Merge or Loopback-Replace depending on the mode.

    • This makes loopback very easy to identify in both the GPMC and in your GPRESULT output.  In the GPMC, you will be able to see where the GPO is linked and the mode without needing to view the settings or details of any GPOs.  Your GPRESULT output will clearly list the loopback policy in the list of applied policies and you will also know the loopback mode, without digging into the report. Using a separate policy also allows you to manage the security of the loopback GPO separately from the security on the GPOs containing the user settings.

    3. Avoid custom security filtering if you can help it. 

    • Loopback works without a hitch if you leave Authenticated Users in the security filtering of the GPO.  Removing Authenticated Users results in a lot more work for you in the long run and makes troubleshooting undesired behaviors much more complicated.

    4. Don't enable loopback in a GPO linked at the domain level!

    • This will impact your Domain Controllers.  I wouldn't be including this warning, if I hadn't worked several cases where loopback had been inadvertently applied to Domain Controllers.  Again, there isn’t anything inherently wrong with applying loopback on Domain Controllers.  It is bad, however, when loopback unexpectedly applies to Domain Controllers.
    • If you absolutely MUST enable loopback in a GPO linked at the domain level, then block inheritance on your Domain Controllers OU.  If you do this, you will need to link the Default Domain Policy back to the Domain Controllers OU making sure to have the precedence of the Default Domain Controllers policy higher (lower number) than the Domain Policy.
    • In general, be careful with all policies linked at the domain level.  Yes, it may be "simpler" to manage most policy at the domain level, but it can lead to lazy administration practices and make it very easy to forget about the impact of seemingly minor policy changes on your DCs.
    • Even if you are editing the security filtering to specific computers, it is still dangerous to have the loopback setting in a GPO linked at the domain level.  What if someone mistakenly modifies the security filtering to "fix" some other issue?
      • TEST, TEST, TEST!!!  It’s even more important to test when you are modifying GPOs that impact domain controllers.  Making a change at the domain level that negatively impacts a domain controller can be career altering.  Even if you have to set up a test domain in virtual machines on your own workstation, find a way to test.

    5. Always test in a representative environment prior to deploying loopback in production.

    • Try to duplicate your production GPOs as closely as possible.  Export/Import is a great way to do this.
    • Enabling loopback almost always surfaces some settings that you weren't aware of.  Unless you are diligent about disabling unused portions of GPOs and you perform periodic audits of actual configuration versus documented desired state configuration, there will typically be a few settings that are outside of your desired configuration. 
    • Duplicating your production policies in a test environment means you will find these anomalies before you make the changes in production.

     

    That’s all folks!  You are now ready to go forth and conquer all of those loopback policies!

     

    Kim “1.21 Gigawatts!!” Nichols

    Two lines that can save your AD from a crisis


    Editor's note:  This is the first of very likely many "DS Quickies".  "Quickies" are shorter technical blog posts that relate hopefully-useful information and concepts for you to use in administering your networks.  We thought about doing these on Twitter or something, but sadly we're still too technical to be bound by a 140-character limit :-)

    For those of you who really look forward to the larger articles to help explain different facets of Windows, Active Directory, or troubleshooting, don't worry - there will still be plenty of those too. 

     

    Hi! This is Gonzalo writing to you from the support team for Latin America.

    Recently we got a call from a customer, where one of the administrators accidentally executed a script that was intended to delete local users… on a domain controller. The result was that all domain users were deleted from the environment in just a couple of seconds. The good thing was that this customer had previously enabled Recycle Bin, but it still took a couple of hours to recover all users as this was a very large environment. This type of issue is something that comes up all the time, and it’s always painful for the customers who run into it. I have worked many cases where the lack of proper protection to objects caused a lot of issues for customer environments and even in some cases ended up costing administrators their jobs, all because of an accidental click. But, how can we avoid this?

    If you take a look at the properties of any object in Active Directory, you will notice a checkbox named “Protect object from accidental deletion” under the Object tab. When this is enabled, permissions are set to deny deletion of this object to Everyone.


     

    With the exception of Organizational Units, this setting is not enabled by default on objects in Active Directory.  When you create an object, you need to set it manually. The challenge is how to easily enable this on thousands of objects.

    ANSWER!  Powershell!

    Two simple PowerShell commands will enable you to set accidental deletion protection on all objects in your Active Directory. The first command will set this on any users or computers (or any object with value user on the ObjectClass attribute). The second command will set this on any Organizational Unit where the setting is not already enabled.

     

    Get-ADObject -filter {(ObjectClass -eq "user")} | Set-ADObject -ProtectedFromAccidentalDeletion:$true

    Get-ADOrganizationalUnit -filter * | Set-ADObject -ProtectedFromAccidentalDeletion:$true

     

    Once you run these commands, your environment will be protected against accidental (or intentional) deletion of objects.
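    If you want to verify the results afterwards, or catch objects created since you ran the commands, a quick sketch like the following lists users and computers that are still unprotected:

    Get-ADObject -Filter {ObjectClass -eq "user"} -Properties ProtectedFromAccidentalDeletion |
        Where-Object { -not $_.ProtectedFromAccidentalDeletion }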

    Note: As a proof of concept, I tested the script that my customer used with the accidental deletion protection enabled and none of the objects in my Active Directory environment were deleted.

     

    Gonzalo “keep your job” Reyna

    Windows Server 2012 R2 - Preview available for download


    Just in case you missed the announcement, the preview build of Windows Server 2012 R2 is now available for download.  If you want to see the latest and greatest, head on over there and take a gander at the new features.  All of us here in support have skin in this game, but Directory Services (us) has several new features that we'll be talking about over the coming months.  Including a lot of this stuff named in the announcement:

    "Empowering employee productivity– Windows Server Work Folders, Web App Proxy, improvements to Active Directory Federation Services and other technologies will help companies give their employees consistent access to company resources on the device of their choice."

    Obviously this is still a beta release.  Things can change before RTM.  Don't go doing anything silly like deploying this in production - it's officially unsupported at this stage, and for testing purposes only.  But with all that in mind, give it a whirl, and hit the TechNet forums to provide feedback and ask questions.  You will also want to keep an eye on some of our server and tools blogs in the near future.  For your convenience, a bunch of those are linked in the bar up top for you.

    Happy previewing!

    --David "Town Crier" Beach
