
Back to the Loopback: Troubleshooting Group Policy loopback processing, Part 2


Welcome back!  Kim Nichols here once again with the much anticipated Part 2 to Circle Back to Loopback.  Thanks for all the comments and feedback on Part 1.  For those of you joining us a little late in the game, you'll want to check out Part 1: Circle Back to Loopback before reading further.

In my first post, the goal was to keep it simple.  Now, we're going to go into a little more detail to help you identify and troubleshoot Group Policy issues related to loopback processing.  If you follow these steps, you should be able to apply what you've learned to any loopback scenario that you may run into (assuming that the environment is healthy and there are no other policy infrastructure issues).

To troubleshoot loopback processing you need to know and understand:

  1. The status of the loopback configuration.  Is it enabled, and if so, in which mode?
  2. The desired state configuration vs. the actual state configuration of applied policy
  3. Which settings from which GPOs are "supposed" to be applied?
  4. To whom should the settings apply or not apply?
    1. The security filtering requirements when using loopback
    2. Is the loopback setting configured in the same GPO or a separate GPO from the user settings?
    3. Are the user settings configured in a GPO with computer settings?

What you need to know:

Know if loopback is enabled and in which mode

The first step in troubleshooting loopback is to know that it is enabled.  It seems pretty obvious, I know, but often loopback is enabled by one administrator in one GPO without understanding that the setting will impact all computers that apply the GPO.  This gets back to Part 1 of this blog . . . loopback processing is a computer configuration setting. 

Take a deep cleansing breath and say it again . . . Loopback processing is a computer configuration setting.  :-)

Everyone feels better now, right?  The loopback setting configures a registry value on the computer to which it applies.  The Group Policy engine reads this value and changes how it builds the list of applicable user policies based on the selected loopback mode.
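If you just want a quick sanity check on a specific machine, you can read that registry value directly. This is only a sketch and assumes the setting is backed by the UserPolicyMode value under HKLM\SOFTWARE\Policies\Microsoft\Windows\System (1 = Merge, 2 = Replace); confirm the location in your own environment before relying on it:

# Sketch: read the loopback value the Group Policy engine consumes on this computer.
# Assumption: the policy is stored as UserPolicyMode (REG_DWORD) under this key,
# where 1 = Merge and 2 = Replace. No value usually means loopback is not set by policy.
$key  = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\System'
$mode = (Get-ItemProperty -Path $key -Name UserPolicyMode -ErrorAction SilentlyContinue).UserPolicyMode
switch ($mode) {
    1       { 'Loopback enabled: Merge mode' }
    2       { 'Loopback enabled: Replace mode' }
    default { 'Loopback does not appear to be configured through policy' }
}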

The easiest way to know if loopback might be causing troubles with your policy processing is to collect a GPResult /h from the computer.  Since loopback is a computer configuration setting, you will need to run GPResult from an administrative command prompt.
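If you want to collect that report from a script, or just avoid typos, the command looks something like this (the output path is only an example):

# Run elevated so the computer-side (loopback) data is included in the report.
gpresult /h C:\Temp\gpreport.html /f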

 

 

The good news is that the GPResult output will show you the winning GPO with loopback enabled.  Unfortunately, it does not list all GPOs with loopback configured, just the one with the highest precedence. 
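If you need to hunt down every GPO in the domain that has loopback configured, a rough PowerShell sketch like the one below can help. It assumes the setting is backed by the UserPolicyMode registry value under HKLM\SOFTWARE\Policies\Microsoft\Windows\System (1 = Merge, 2 = Replace) and requires the GroupPolicy module from RSAT:

# Sketch: report every GPO that configures the loopback policy, and in which mode.
Import-Module GroupPolicy
Get-GPO -All | ForEach-Object {
    $val = Get-GPRegistryValue -Guid $_.Id `
        -Key 'HKLM\SOFTWARE\Policies\Microsoft\Windows\System' `
        -ValueName 'UserPolicyMode' -ErrorAction SilentlyContinue
    if ($val) {
        [pscustomobject]@{
            GPO  = $_.DisplayName
            Mode = $(if ($val.Value -eq 2) { 'Replace' } else { 'Merge' })
        }
    }
}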

If your OU structure separates users from computers, the GPResult output can also help you find GPOs containing user settings that are linked to computer OUs.  Look for GPOs linked to computer OUs under the Applied GPOs section of the User Details of the GPResult output. 

Below is an example of the output of the GPResult /h command from a Windows Server 2012 member server.  The layout of the report has changed slightly going from Windows Server 2008 to Windows Server 2012, so your results may look different, but the same information is provided by previous versions of the tool.  Notice that the link location includes the Computers OU, but we are in the User Details section of the report.  This is a good indication that we have loopback enabled in a GPO linked in the path of the computer account. 

 

   
Understand the desired state vs. the actual state

This one also sounds obvious, but in order to troubleshoot you have to know and understand exactly which settings you are expecting to apply to the user.  This is harder than it sounds.  In a lab environment where you control everything, it's pretty easy to keep track of desired configuration.  However, in a production environment with potentially multiple delegated GPO admins, this is much more difficult. 

GPResult gives us the actual state, but if you don't know the desired state at the setting level, then you can't reasonably determine if loopback is configured correctly (meaning you have WMI filters and/or security filtering set properly to achieve your desired configuration). 

     
Review security filtering on GPOs

Once you determine which GPOs or which settings are not applying as expected, then you have a place to start your investigation. 

In our experience here in support, loopback processing issues usually come down to incorrect security filtering, so rule that out first.

This is where things get tricky . . . If you are configuring custom security filtering on your GPOs, loopback can get confusing quickly.  As a general rule, you should try to keep your WMI and security filtering as simple as possible - but ESPECIALLY when loopback is involved.  You may want to consider temporarily unlinking any WMI filters for troubleshooting purposes.  The goal is to ensure the policies you are expecting to apply are actually applying.  Once you determine this, then you can add your WMI filters back into the equation.  A test environment is the best place to do this type of investigation.

Setting up security filtering correctly depends on how you architect your policies:

  1. Did you enable loopback in its own GPO or in a GPO with other computer or user settings?
  2. Are you combining user settings and computer settings into the same GPO(s) linked to the computer's OU?

The thing to keep in mind is that if you have what I would call "mixed use" GPOs, then your security filtering has to accommodate all of those uses.  This is only a problem if you remove Authenticated Users from the security filter on the GPO containing the user settings.  If you remove Authenticated Users from the security filter, then you have to think through which settings you are configuring, in which GPOs, to be applied to which computers and users, in which loopback mode....

Ouch.  That's LOTS of thinking!

So, unless that sounds like loads of fun to you, it’s best to keep WMI and security filtering as simple as possible.  I know that you can’t always leave Authenticated Users in place, but try to think of alternative solutions before removing it when loopback is involved. 

Now to the part that everyone always asks about once they realize their current filter is wrong – How the heck should I configure the security filter?!

 

Security filtering requirements:

  1. The computer account must have READ and APPLY permissions to the GPO that contains the loopback configuration setting.
  2. If you are configuring user settings in the same GPO as computer settings, then the user and computer accounts will both need READ and APPLY permissions to the GPO since there are portions of the GPO that are applicable to both.
  3. If the user settings are in a separate GPO from the loopback configuration setting (#1 above) and any other computer settings (#2 above), then the GPO containing the user settings requires the following permissions:  

 

Merge mode requirements (Vista+):

  • User account: READ and APPLY (these are the default permissions that are applied when you add users to the Security Filtering section of the GPO on the Scope tab in GPMC)
  • Computer account: minimum of READ permission

Replace mode requirements:

  • User account: READ and APPLY (the same default permissions applied when you add users to the Security Filtering section of the GPO on the Scope tab in GPMC)
  • Computer account: no permissions are required
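If you manage filtering with the GroupPolicy module rather than clicking through GPMC, a rough equivalent of the Merge mode requirements above looks like this. The GPO and group names are made-up examples, not anything from a real environment:

# Sketch of Merge-mode filtering on a separate user-settings GPO (names are examples only).
Import-Module GroupPolicy

# The users need READ and APPLY on the GPO that carries the user settings...
Set-GPPermission -Name 'RDS User Settings' -TargetName 'RDS Users' -TargetType Group -PermissionLevel GpoApply

# ...and in Merge mode the computers only need READ on that same GPO.
Set-GPPermission -Name 'RDS User Settings' -TargetName 'RDS Session Hosts' -TargetType Group -PermissionLevel GpoRead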

  

 

Tools for Troubleshooting

The number one tool for troubleshooting loopback processing is your GPRESULT output and a solid understanding of the security filtering requirements for loopback processing in your GPO architecture (see above).

The GPRESULT will tell you which GPOs applied to the user.  If a specific GPO failed to apply, then you need to review the security filtering on that GPO and verify:

  • The user has READ and APPLY permissions.
  • Depending on your GPO architecture, the computer may need READ, or it may need READ and APPLY if you combined computer and user settings in the same GPO.

The same strategy applies if you have mysterious policy settings applying after configuring loopback and you are not sure why.  Use your GPRESULT output to identify which GPO(s) the policy settings are coming from and then review the security filtering of those GPOs. 
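If clicking through the Delegation tab for each suspect GPO gets old, the GroupPolicy module can dump the same filtering information for review; the GPO name below is just an example:

# Sketch: list who has which permission on a suspect GPO.
Import-Module GroupPolicy
Get-GPPermission -Name 'RDS User Settings' -All |
    Select-Object @{n='Trustee';e={$_.Trustee.Name}}, Permission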

The Group Policy Operational logs from the computer will also tell you which GPOs were discovered and applied, but this is the same information that you will get from the GPRESULT.

Recommendations for using loopback

After working my fair share of loopback-related cases, I've collected a list of recommendations for using loopback.  This isn’t an official list of "best practices", but rather just some personal recommendations that may make your life easier.  ENJOY!

I'll start with what is fast becoming my mantra: Keep it Simple.  Pretty much all of my recommendations can come back to this point.

 

1. Don't use loopback  :-) 

OK, I know, not realistic.  How about this . . . Don't use loopback unless you absolutely have to. 

  • I say this not because there is something evil about loopback, but rather because loopback complicates how you think about Group Policy processing.  Loopback tends to be configured and then forgotten about until you start seeing unexpected results. 

2. Use a separate GPO for the loopback setting; ONLY include the loopback setting in this GPO, and do not include the user settings.  Name it Loopback-Merge or Loopback-Replace depending on the mode.

  • This makes loopback very easy to identify in both the GPMC and in your GPRESULT output.  In the GPMC, you will be able to see where the GPO is linked and the mode without needing to view the settings or details of any GPOs.  Your GPRESULT output will clearly list the loopback policy in the list of applied policies and you will also know the loopback mode, without digging into the report. Using a separate policy also allows you to manage the security of the loopback GPO separately from the security on the GPOs containing the user settings.

3. Avoid custom security filtering if you can help it. 

  • Loopback works without a hitch if you leave Authenticated Users in the security filtering of the GPO.  Removing Authenticated Users results in a lot more work for you in the long run and makes troubleshooting undesired behaviors much more complicated.

4. Don't enable loopback in a GPO linked at the domain level!

  • This will impact your Domain Controllers.  I wouldn't be including this warning, if I hadn't worked several cases where loopback had been inadvertently applied to Domain Controllers.  Again, there isn’t anything inherently wrong with applying loopback on Domain Controllers.  It is bad, however, when loopback unexpectedly applies to Domain Controllers.
  • If you absolutely MUST enable loopback in a GPO linked at the domain level, then block inheritance on your Domain Controllers OU.  If you do this, you will need to link the Default Domain Policy back to the Domain Controllers OU making sure to have the precedence of the Default Domain Controllers policy higher (lower number) than the Domain Policy.
  • In general, be careful with all policies linked at the domain level.  Yes, it may be "simpler" to manage most policy at the domain level, but it can lead to lazy administration practices and make it very easy to forget about the impact of seemingly minor policy changes on your DCs.
  • Even if you are editing the security filtering to specific computers, it is still dangerous to have the loopback setting in a GPO linked at the domain level.  What if someone mistakenly modifies the security filtering to "fix" some other issue?
    • TEST, TEST, TEST!!!  It’s even more important to test when you are modifying GPOs that impact domain controllers.  Making a change at the domain level that negatively impacts a domain controller can be career altering.  Even if you have to set up a test domain in virtual machines on your own workstation, find a way to test.

5. Always test in a representative environment prior to deploying loopback in production.

  • Try to duplicate your production GPOs as closely as possible.  Export/Import is a great way to do this.
  • Enabling loopback almost always surfaces some settings that you weren't aware of.  Unless you are diligent about disabling unused portions of GPOs and you perform periodic audits of actual configuration versus documented desired state configuration, there will typically be a few settings that are outside of your desired configuration. 
  • Duplicating your production policies in a test environment means you will find these anomalies before you make the changes in production.

 

That’s all folks!  You are now ready to go forth and conquer all of those loopback policies!

 

Kim “1.21 Gigawatts!!” Nichols


Two lines that can save your AD from a crisis


Editor's note:  This is the first of very likely many "DS Quickies".  "Quickies" are shorter technical blog posts that relate hopefully-useful information and concepts for you to use in administering your networks.  We thought about doing these on Twitter or something, but sadly we're still too technical to be bound by a 140-character limit :-)

For those of you who really look forward to the larger articles to help explain different facets of Windows, Active Directory, or troubleshooting, don't worry - there will still be plenty of those too. 

 

Hi! This is Gonzalo writing to you from the support team for Latin America.

Recently we got a call from a customer, where one of the administrators accidentally executed a script that was intended to delete local users… on a domain controller. The result was that all domain users were deleted from the environment in just a couple of seconds. The good thing was that this customer had previously enabled Recycle Bin, but it still took a couple of hours to recover all users as this was a very large environment. This type of issue is something that comes up all the time, and it’s always painful for the customers who run into it. I have worked many cases where the lack of proper protection to objects caused a lot of issues for customer environments and even in some cases ended up costing administrators their jobs, all because of an accidental click. But, how can we avoid this?

If you take a look at the properties of any object in Active Directory, you will notice a checkbox named “Protect object from accidental deletion” on the Object tab. When this is enabled, permissions are set to deny deletion of this object to Everyone.


 

With the exception of Organizational Units, this setting is not enabled by default on objects in Active Directory; when you create an object, you need to set it manually. The challenge is how to easily enable this on thousands of existing objects.

ANSWER!  PowerShell!

Two simple PowerShell commands will enable you to set accidental deletion protection on all objects in your Active Directory. The first command will set this on any users or computers (or any object with value user on the ObjectClass attribute). The second command will set this on any Organizational Unit where the setting is not already enabled.

 

Get-ADObject -filter {(ObjectClass -eq "user")} | Set-ADObject -ProtectedFromAccidentalDeletion:$true

Get-ADOrganizationalUnit -filter * | Set-ADObject -ProtectedFromAccidentalDeletion:$true

 

Once you run these commands, your environment will be protected against accidental (or intentional) deletion of objects.
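If you want to double-check that the protection actually took, a quick sketch like this (using the same AD module) lists any OUs that are still unprotected; the same idea works for users and computers:

# Sketch: list OUs where accidental deletion protection is still turned off.
Get-ADOrganizationalUnit -Filter * -Properties ProtectedFromAccidentalDeletion |
    Where-Object { -not $_.ProtectedFromAccidentalDeletion } |
    Select-Object DistinguishedName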

Note: As a proof of concept, I tested the script that my customer used with the accidental deletion protection enabled and none of the objects in my Active Directory environment were deleted.

 

Gonzalo “keep your job” Reyna

Monthly Mail Sack: Yes, I Finally Admit It Edition


Heya folks, Ned here again. Rather than continue the lie that this series comes out every Friday like it once did, I am taking the corporate approach and rebranding the mail sack. Maybe we’ll have the occasional Collector’s Edition versions.

This week month, I answer your questions on:

Let’s incentivize our value props!

Question

Everywhere I look, I find documentation saying that when Kerberos skew exceeds five minutes in a Windows forest, the sky falls and the four horsemen arrive.

I recall years ago at a Microsoft summit when I brought that time skew issue up and the developer I was speaking to said no, that isn't the case anymore, you can log on fine. I recently re-tested that and sure enough, no amount of skew on my member machine against a DC prevents me from authenticating.

Looking at the network trace I see the KRB_AP_ERR_SKEW response for the AS REQ, which is followed by the kerb connection breaking down, then immediately being re-established, and another AS REQ that works just fine and is responded to with a proper AS REP.

My first question is.... Am I missing something?

My second question is... While I realize that third party Kerb clients may or may not have this functionality, are there instances where it doesn't work within Windows Kerb clients? Or could it affect other scenarios like AD replication?

Answer

Nope, you’re not missing anything. If I try to logon from my highly-skewed Windows client and apply group policy, the network traffic will look approximately like:

Frame  Source  Destination  Packet Data Summary
1      Client  DC           AS Request   Cname: client$  Realm: CONTOSO.COM  Sname:
2      DC      Client       KRB_ERROR - KRB_AP_ERR_SKEW (37)
3      Client  DC           AS Request   Cname: client$  Realm: CONTOSO.COM  Sname: krbtgt/CONTOSO.COM
4      DC      Client       AS Response  Ticket[Realm: CONTOSO.COM, Sname: krbtgt/CONTOSO.COM]
5      Client  DC           TGS Request  Realm: CONTOSO.COM  Sname: cifs/DC.CONTOSO.COM
6      DC      Client       KRB_ERROR - KRB_AP_ERR_SKEW (37)
7      Client  DC           TGS Request  Realm: CONTOSO.COM  Sname: cifs/DC.CONTOSO.COM
8      DC      Client       TGS Response Cname: client$

When your client sends a time stamp that is outside the range of Maximum tolerance for computer clock synchronization, the DC comes back with that KRB_AP_ERR_SKEW error – but it also contains an encrypted copy of its own time stamp. The client uses that to create a valid time stamp to send back. This doesn't decrease the security of the design because we are still using encryption and requiring knowledge of the secrets, plus there is still only – by default – 5 minutes for an attacker to break the encryption and start impersonating the principal or attempt replay attacks. That is not feasible with even XP's 11-year-old cipher suites, much less Windows 8's.

This isn't some Microsoft wackiness either – RFC 4120 (the Kerberos V5 specification) states:

If the server clock and the client clock are off by more than the policy-determined clock skew limit (usually 5 minutes), the server MUST return a KRB_AP_ERR_SKEW. The optional client's time in the KRB-ERROR SHOULD be filled out.

If the server protects the error by adding the Cksum field and returning the correct client's time, the client SHOULD compute the difference (in seconds) between the two clocks based upon the client and server time contained in the KRB-ERROR message.

The client SHOULD store this clock difference and use it to adjust its clock in subsequent messages. If the error is not protected, the client MUST NOT use the difference to adjust subsequent messages, because doing so would allow an attacker to construct authenticators that can be used to mount replay attacks.

Hmmm… SHOULD. Here’s where things get more muddy and I address your second question. No one actually has to honor this skew correction:

  1. Windows 2000 didn’t always honor it. But it’s dead as fried chicken, so who cares.
  2. Not all third parties honor it.
  3. Windows XP and Windows Server 2003 do honor it, but there were bugs that sometimes prevented it (long gone, AFAIK). Later Windows OSes do of course and I know of no regressions.
  4. If the clock of the client computer is faster than the clock time of the domain controller plus the lifetime of the Kerberos ticket (10 hours, by default), the Kerberos ticket is invalid and auth fails.
  5. Some non-client logon application scenarios enforce the strict skew tolerance and don’t care to adjust, because of other time needs tied to Kerberos and security. AD replication is one of them – event LSASRV 40960 with extended error 0xc000133 comes to mind in this scenario, as does trying to run DSSite.msc “replicate now” and getting back error 0x576 “There is a time and / or date difference between the client and the server.” I have recent case evidence of Dcpromo enforcing the 5 minutes with Kerberos strictly, even in Windows Server 2008 R2, although I have not personally tried to validate it. I’ve seen it with appliances and firewalls too.

With that RFC’s indecisiveness and the other caveats, we beat the “just make sure it’s no more than 5 minutes” drum in all of our docs and here on AskDS. It’s too much trouble to get into what-ifs.
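Since "keep it under 5 minutes" remains the guidance, it is worth knowing how to measure skew quickly. W32tm can chart the offset between a member machine and a DC; the DC name here is just an example:

# Measure the clock offset between this machine and a DC before chasing Kerberos errors.
w32tm /stripchart /computer:DC01.contoso.com /samples:5 /dataonly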

We have a KB tucked away on this here but it is nearly un-findable.

Awesome question.

Question

I’ve found articles on using Windows PowerShell to locate all domain controllers in a domain, and even all GCs in a forest, but I can’t find one to return all DCs in a forest. Get-AdDomainController seems to be limited to a single domain. Is this possible?

Answer

It’s trickier than you might think. I can think of two ways to do this; perhaps commenters will have others. The first is to get the domains in the forest, then find one domain controller in each domain and ask it to list all the domain controllers in its own domain. This gets around the limitation of Get-AdDomainController for a single domain (single line wrapped).

(get-adforest).domains | foreach {Get-ADDomainController -discover -DomainName $_} | foreach {Get-addomaincontroller -filter * -server $_} | ft hostname

The second is to go directly to the native .NET AD DS forest class to return the domains for the forest, then loop through each one returning the domain controllers (single line wrapped).

[system.directoryservices.activedirectory.Forest]::GetCurrentForest().domains | foreach {$_.DomainControllers} | foreach {$_.hostname}

This also led to updated TechNet content. Good work, Internet!

Question

Hi, I've been reading up on RID issuance management and the new RID Master changes in Windows Server 2012. They still leave me with a question, however: why are RIDs even needed in a SID? Can't the SID be incremented on its own? The domain identifier seems to be an adequately large number, larger than the 30-bit RID anyway. I know there's a good reason for it, but I just can't find any material that says why there are separate domain and relative IDs in a SID.

Answer

The main reason was a SID needs the domain identifier portion to have a contextual meaning. By using the same domain identifier on all security principals from that domain, we can quickly and easily identify SIDs issued from one domain or another within a forest. This is useful for a variety of security reasons under the hood.

That also allows us a useful technique called “SID compression”, where we want to save space in a user’s security data in memory. For example, let’s say I am a member of five domain security groups:

DOMAINSID-RID1
DOMAINSID-RID2
DOMAINSID-RID3
DOMAINSID-RID4
DOMAINSID-RID5

With a constant domain identifier portion on all five, I now have the option to use one domain SID portion on all the other associated ones, without using all the memory up with duplicate data:

DOMAINSID-RID1
“-RID2
“-RID3
“-RID4
“-RID5

The consistent domain portion also fixes a big problem: if all of the SIDs held no special domain context, keeping track of where they were issued from would be a much bigger task. We’d need some sort of big master database (“The SID Master”?) in an environment that understood all forests and domains and local computers and everything. Otherwise we’d have a higher chance of duplication through differing parts of a company. Since the domain portion of the SID is unique and the RID portion is an unsigned integer that only climbs, it’s pretty easy for RID masters to take care of that case in each domain.

You can read more about this in coma-inducing detail here: http://technet.microsoft.com/en-us/library/cc778824.aspx.

Question

When I want to set folder and application redirection for our users in a different forest (with a forest trust) in our Remote Desktop Services server farm, I cannot find users or groups from the other domain. Is there a workaround?

Answer

The Object Picker in this case doesn’t allow you to select objects from the other forest – this is a limitation of the UI that the Folder Redirection folks put in place. They write their own FR GP management tools, not the GP team.

Windows, by default, does not process group policy from user logon across a forest—it automatically uses loopback Replace.  Therefore, you can configure a Folder Redirection policy in the resource domain for users and link that policy to the OU in the domain where the Terminal Servers reside.  Only users from a different forest should receive the folder redirection policy, which you can then base on a group in the local forest.

Question

Does USMT support migrating multi-monitor settings from Windows XP computers, such as which one is primary, the resolutions, etc.?

Answer

USMT 4.0 does not support migrating any monitor settings from any OS to any OS (screen resolution, monitor layout, multi-monitor, etc.). Migrating hardware settings and drivers from one computer to another is dangerous, so USMT does not attempt it. I strongly discourage you from trying to make this work through custom XML for the same reason – you may end up with unusable machines.

Starting in USMT 5.0, a new replacement manifest – Windows 7 to Windows 7, Windows 7 to Windows 8, or Windows 8 to Windows 8 only – named “DisplayConfigSettings_Win7Update.man” was added. For the first time in USMT, it migrates:

<pattern type="Registry">HKLM\System\CurrentControlSet\Control\GraphicsDrivers\Connectivity\* [*]</pattern>
<pattern type="Registry">HKLM\System\CurrentControlSet\Control\GraphicsDrivers\Configuration\* [*]</pattern>

This is OK on Win7 and Win8 because the OS itself knows what is valid and invalid in that context and discards/fixes things as necessary. I.e. this is safe only because USMT doesn’t actually do anything but copy some values and relies on the OS to fix things after migration is over.

Question

Our proprietary application is having memory pressure issues and it manifests when someone runs gpupdate or waits for GP to refresh; sometimes it’s bad enough to cause a crash.  I was curious if there was a way to stop the policy refresh from occurring.

Answer

Only in Vista and later does preventing refresh become even vaguely possible; you could prevent the group policy service from running at all (no, I am not going to explain how). The internet is filled with thousands of people repeating a myth that preventing GP refresh is possible with an imaginary registry value on Win2003/XP – it isn’t.

What you could do here is prevent background refresh altogether. See the policies in the “administrative templates\system\group policy” section of GP:

1. You could enable the policy “group policy refresh interval for computers” and apply it to that one server, setting the background refresh interval to 45 days (the max). That way it would be far more likely to reboot in the meantime for a Patch Tuesday or whatever and never have a chance to refresh automatically.

2. You could also enable each of the group policy extension policies (ex: “disk quota policy processing”, “registry policy processing”) and set the “do not apply during periodic background processing” option on each one.  This may not actually prevent GPUPDATE /FORCE though – each CSE may decide to ignore your background refresh setting; you will have to test, as this sounds boring.

Keep in mind for #1 that there are two of those background refresh policies – one per user (“group policy refresh interval for users”), one per computer (“group policy refresh interval for computers”). They both operate in terms of each boot up or each interactive logon, on a per computer/per user basis respectively. I.e. if you log on as a user, you apply your policy. Policy will not refresh for 45 days for that user if you were to stay logged on that whole time. If you log off at 22 days and log back on, you apply policy again, because that is not a refresh – it’s interactive logon foreground policy application.

Ditto for computers, only replace “logon” with “boot up”. So it will apply the policy at every boot up, but since your computers reboot daily, never again until the next bootup.

After those thoughts… get a better server or a better app. :)

Question

I’m testing Virtualized Domain Controller cloning in Windows Server 2012 on Hyper-V and I have DCs with snapshots. Bad bad bad, I know, but we have our reasons and we at least know that we need to delete them when cloning.

Is there a way to keep the snapshots on the source computer, but not use VM exports? I.e. I just want the new copied VM to not have the old source machine’s snapshots.

Answer

Yes, through the new Hyper-V disk management Windows PowerShell cmdlets or through the management snap-in.

Graphical method

1. Examine the settings of your VM and determine which disk is the active one. When using snapshots, it will be an AVHD/X file.


2. Inspect that disk and you see the parent as well.


3. Now use the Edit Disk… option in the Hyper-V manager to select that AVHD/X file:


4. Merge the disk to a new copy:


Windows PowerShell method

Much simpler, although slightly counter-intuitive. Just use:

Convert-vhd

For example, to export the entire chain of a VM's disk snapshots and parent disk into a new single disk with no snapshots named DC4-CLONED.VHDX:
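A command along these lines performs that kind of merge; the paths and file names are examples only:

# Sketch: collapse a VM's snapshot chain into a single new dynamic VHDX,
# leaving the source VM's files untouched. Paths and names are examples.
Convert-VHD -Path 'D:\VMs\DC4\DC4_Snapshot.avhdx' -DestinationPath 'D:\VMs\DC4-CLONED.vhdx' -VHDType Dynamic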

Violin!

You don’t actually have to convert the disk type in this scenario (note how I went from dynamic to dynamic). There is also Merge-VHD for more complex differencing disk and snapshot scenarios, but it requires some extra finagling and disk copying, and  isn’t usually necessary. The graphical merge option works well there too.

As a side note, the original Understand And Troubleshoot VDC guide now redirects to TechNet. Coming soon(ish) is an RTM-updated version of the original guide, in web format, with new architecture, troubleshooting, and other info. I robbed part of my answer above from it – as you can tell by the higher quality screenshots than you usually see on AskDS – and I’ll be sure to announce it. Hard.

Question

It has always been my opinion that if a DC with a FSMO role went down, the best approach is to seize the role on another DC, rebuild the failed DC from scratch, then transfer the role back. It’s also been my opinion that as long as you have more than one DC, and there has not been any data loss, or corruption, it is better to not restore.

What is the Microsoft take on this?

Answer

This is one of those “it depends” scenarios:

1. The downside to restoring from (usually proprietary) backup solutions is that the restore process just isn’t something most customers test and work out the kinks on until it actually happens; tons of time is spent digging out the right tapes, finding the right software, looking up the restore process, contacting that vendor, etc. Oftentimes a restore doesn’t work at all, so all the attempts are just wasted effort. I freely admit that my judgment is tainted by my MS Support experience here – customers do not call us to say how great their backups worked, only that they have a down DC and they can’t get their backups to restore.

The upside is if your recent backup contained local changes that had never replicated outbound due to latency, restoring them (even non-auth) still means that those changes will have a chance to replicate out. E.g. if someone changed their password or some group was created on that server and captured by the backup, you are not losing any changes. It also includes all the other things that you might not have been aware of – such as custom DFS configurations, operating as a DNS server that a bunch of machines were solely pointed to, 3rd party applications pointed directly to the DC by IP/Name for LDAP or PDC or whatever (looking at you, Open Source software!), etc. You don’t have to be as “aware”, per se.

2. The downside to seizing the FSMO roles and cutting your losses is the converse of my previous point around latent changes; those objects and attributes that could not replicate out but were caught by the backup are gone forever. You also might miss some of those one-offs where someone was specifically targeting that server – but you will hear from them, don’t worry; it won’t be too hard to put things back.

The upside is you get back in business much faster in most cases; I can usually rebuild a Win2008 R2 server and make it a DC before you even find the guy that has the combo to the backup tape vault. You also don’t get the interruptions in service for Windows from missing FSMO roles, such as DCs that were low on their RID pool and now cannot retrieve more (this only matters with default, obviously; some customers raise their pool sizes to combat this effect). It’s typically a more reliable approach too – after all, your backup may contain the same time bomb of settings or corruption or whatever that made your DC go offline in the first place. Moreover, the backup is unlikely to contain the most recent changes regardless – backups usually run overnight, so any un-replicated originating updates made during the day are going to be nuked in both cases.

For all these reasons, we in MS Support generally recommend a rebuild rather than a restore, all things being equal. Ideally, you fix the actual server and do neither!

As a side note, restoring the RID master used to cause issues that we first fixed in Win2000 SP3. This has unfortunately lived on as a myth that you cannot safely restore the RID master. Nevertheless, if someone impatiently seizes that role, then someone else restores that backup, you get a new problem where you cannot issue RIDs anymore. Your DC will also refuse to claim role ownership with a restored RID Master (or any FSMO role) if your restored server has an AD replication problem that prevents at least one good replication with a partner. Keep those in mind for planning no matter how the argument turns out!

Question

I am trying out Windows Server 2012 and its new Minimal Server Interface. Is there a way to use WMI to determine if a server is running with a Full Installation, Core Installation, or a Minimal Shell installation?

Answer

Indeed, although it hasn’t made its way to MSDN quite yet. The Win32_ServerFeature class returns a few new properties in our latest operating system. You can use WMIC or Windows PowerShell to browse the installed ones. For example:
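A quick PowerShell take on it (trimmed to the two interesting properties):

# List installed server features and their IDs via the Win32_ServerFeature class.
Get-WmiObject -Class Win32_ServerFeature | Sort-Object Id | Format-Table Id, Name -AutoSize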


The “99” ID is Server Graphical Shell, which means, in practical terms, “Full Installation”. If 99 alone is not present, that means it’s a minshell server. If the “478” ID is also missing, it’s a Core server.

E.g. if you wanted to apply some group policy that only applied to MinShell servers, you’d set your query to return true if 99 was not present but 478 was present.
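If you want to turn that logic into something reusable, a small sketch like this does the classification, using the feature IDs described above (99 = Server Graphical Shell; 478 is the additional ID that, per the answer above, separates MinShell from Core):

# Sketch: classify the install type from Win32_ServerFeature IDs (IDs as described above).
$ids = Get-WmiObject -Class Win32_ServerFeature | ForEach-Object { $_.Id }
if     ($ids -contains 99)  { 'Full installation (Server Graphical Shell present)' }
elseif ($ids -contains 478) { 'Minimal Server Interface' }
else                        { 'Server Core' }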

Other Stuff

Speaking of which, Windows Server 2012 General Availability is September 4th. If you manage to miss the run up, you might want to visit an optometrist and/or social media consultant.

Stop worrying so much about the end of the world and think it through.

So awesome:


And so fake :(

If you are married to a psychotic Solitaire player who poo-poo’ed switching totally to the Windows 8 Consumer Preview because they could not get their mainline fix of card games, we have you covered now in Windows 8 RTM. Just run the Store app and swipe for the Charms Bar, then search for Solitaire.


It’s free and exactly 17 times better than the old in-box version:

OMG Lisa, stop yelling at me! 

Is this the greatest geek advert of all time?


Yes. Yes it is.

When people ask me why I stopped listening to Metallica after the Black Album, this is how I reply:

Hetfield in Milan
Ride the lightning Mercedes

We have quite a few fresh, youthful faces here in MS Support these days and someone asked me what “Mall Hair” was when I mentioned it. If you graduated high school between 1984 and 1994 in the Midwestern United States, you already know.

Finally – I am heading to Sydney in late September to yammer in-depth about Windows Server 2012 and Windows 8. Anyone have any good ideas for things to do? So far I’ve heard “bridge climb”, which is apparently the way Australians trick idiot tourists into paying for death. They probably follow it up with “funnel-web spider petting zoo” and “swim with the saltwater crocodiles”. Lunatics.

Until next time,

- Ned “I bet James Hetfield knows where I can get a tropical drink by the pool” Pyle

One of us: What it was like to interview for a support role at Microsoft


Hello, Kim here again. We get many questions about what to expect when interviewing at Microsoft. I’m coming up on my two year anniversary at Microsoft and I thought I would share my experience in the hope that it might help you if you are interested in applying to Microsoft Support; if nothing else, there is some educational and entertainment value in reading about me being interviewed by Ned. :)

Everyone at Microsoft has a unique story to tell about how they were hired. On the support side of Microsoft, many of us were initially hired as contractors and later offered a full-time position. Others were college hires, starting our first real jobs here. It seems some have just been here forever. Then there are a few of us, myself included, that were industry hires. Over the years, I've submitted my résumé to Microsoft a number of times. I have always wanted to work for Microsoft, but never really expected to be contacted since there aren’t many Microsoft positions available in central Indiana (where I’m from). I had a good job and wasn’t particularly unhappy in it, but the opportunity to move up was limited in my current role. I casually looked for a new position for a couple of months and had been offered one job, but it just didn't feel like the right fit. Around the same time, I submitted my résumé to Microsoft for a Support Engineer position on the Directory Services support team in Charlotte. Much to my surprise, I received an email that began a wild ride of excitement, anxiety, anticipation, and fear that ultimately resulted in my moving from the corn fields of the Midwest (there is actually more than corn in Indiana, btw) to the land of sweet tea.

I never expected that Microsoft would contact me due to the sheer volume of résumés they receive daily and the fact that the position was in Charlotte and I was not. About a week after I submitted my résumé, I received an email requesting a phone interview with the Directory Services team. I, of course, responded immediately and a phone interview was set up for three days from the current date. When I submitted my résumé, I didn’t think I’d be contacted and if I was, I definitely thought I’d have more than three days to prepare! The excitement lasted about 30 seconds before the reality of the situation set in . . . I was going to have an interview with Microsoft in three days! Just to add to the anxiety level, Ned Pyle (queue the Halloween theme) was going to do my phone screen!

Preparation - Phone Screen

I didn't know where to start to prepare. As with any phone screen, you have no idea what types of questions you will be asked. Would it be a technical interview; would it just be a review of my résumé and my qualifications? I didn’t know what to expect. I assumed that since Ned was calling me that there would be some technical aspect to it, but I wasn’t sure. There’s no wiki article on how to interview at Microsoft. :) On top of that, I'd heard rumors of questions about manhole covers and all kinds of other strange problem-solving questions. This was definitely going to be more difficult than any other interview I’d ever had.

Once I got over the initial panic, I decided I needed to start with the basics. This was a position for the Directory Services team, so I dug out all of the training books from the last eight years of working with Active Directory and put together a list of topics I knew I needed to review. I also did a Bing search on Active Directory Interview questions and I found a couple of lists of general AD questions. Finally, I went to the source, the AskDS blog, and searched for information on "hiring" and found a link to Post-Graduate AD Studies.

My resource list looked something like this:

1. Post-Graduate AD Studies (thanks, Ned)

2. O'Reilly Active Directory book (older version)

3. Training manual from Active Directory Troubleshooting course that was offered by MCS many years ago

4. Training manuals from a SANS SEC505 Securing Windows course

5. MS Press Active Directory Pocket Consultant

6. MS Press Windows Group Policy Guide

7. AD Interview Questions Bing search

   a) http://www.petri.co.il/mcse_system_administrator_active_directory_interview_questions.htm

   b) http://www.petri.co.il/mcse-system-administrator-windows-server-2008-r2-active-directory-interview-questions.htm

I only had three days to study, so I decided to start with reviewing the areas that I was weakest in and most comfortable with. For me, these were:

1. PKI (ugh)

2. AD Replication (good)

3. Kerberos (ick)

4. Authentication (meh)

5. Group Policy (very good)

The SANS manuals had good slides and decent descriptions, so that is where I started. Everyone has different levels of experience and different study habits. What works for me is writing. If I write something down, it seems to solidify it in my mind. I reviewed each of the topics above and focused on writing down the parts either that were new to me or that I needed to focus on in more detail. This approach meant that I was reading both the topics I already understood (as a refresher) and writing down the topics I needed to work on. Next, I went through the various lists of AD interview questions I had found and made sure that I could at least answer all of the questions at a high level. This involved doing some research for some of the questions. The websites with the lists of questions were a good resource because they didn’t give me the answers. I didn’t just want to be able to recite some random acronyms. I wanted to understand, at least at a high level, what all of the basic concepts were and be able to relate them to one another. I knew that I was going to need to have broad knowledge of many topics and then deep knowledge in others.

The worst part of all of this studying was that I didn't have enough lead-time to request time off from work to focus on it. So, while I was eating lunch, I was studying. While I was waiting on servers to build, I was studying. While I was waiting on VMs to clone, guess what? I was studying. :) By the end of the three days of studying, I was pretty much a nervous wreck and ready for this phone screen to end.

The Phone Screen

This is where you'd like me to tell you what questions Ned asked me, but . . . that isn't going to happen. Bwahahaha. :-)

What I can tell you about the interview is that it wasn't solely about rote knowledge, which is good since I had prepared for more than just how to spell AD & PKI. Knowing the high-level concepts was good; he asked a few random questions to see how far I could explain some of the technologies. It was more important to know what to do with this information and how to troubleshoot given what you know about a particular technology. If you can't apply the concepts to a real world scenario then the knowledge is useless. Throughout the interview, there were times where I couldn't come up with the right words or terms for something and I imagined Ned sitting there playing with his beard out of boredom.


In those situations, I found Ned was awake and tried to help me through them or skipped to something else that eventually got me back to the part I’d been struggling with but this time with better results. For that, I was grateful and it helped me keep my nerves in check as well. While trying to answer the flood of questions and keep my nerves in check, I tried to keep a list of the topics we were discussing just in case I got a follow-up interview. Although I’d like to say that I totally rocked out the phone interview and that I’m awesome (ok, I’m pretty cool), I actually thought I’d done alright, but not necessarily well enough to get a follow-up interview. Overall, I didn’t feel like I had been able to come up with responses quickly enough and Ned guided me around a couple of topics before I finally understood what he was getting at a few more times than I would have liked.

On-site interview scheduled - WOOT!

Much to my own disbelief, I did receive that follow-up email to schedule an in-person interview down in sunny Charlotte, NC. Fortunately, I had a little more time to prepare, mainly due to the nature of an on-site interview that is out of state. Logistics were in my favor this time! As I recall, I had about two weeks between when I received notification of the on-site interview and the actual scheduled interview date. This was definitely better than the three days I had to prepare for the phone screen.

With more time, I decided that I would take some days off work to focus on studying. Maybe this is extreme, but that is how important it was to me to get this job. I figured that this was my one shot to get this right and I was going to do everything I possibly could to ensure that I was as prepared as I could possibly be.

This time, I started studying with the list of questions from my phone interview with Ned. I wanted to make sure that if Ned was in my face-to-face interview that I would be able to answer those questions the second time. Then I reviewed all of the questions and notes that I had prepared for my phone interview. Finally, I really started digging in on the Post-Graduate AD Studies from the AskDS blog. I take full responsibility for the small forest of trees I killed in printing all of this material off. I read as much as I could of each of the Core Technology Reading and then I chose three or four areas from the Post Graduate Technology Reading to dig into deeper.

Obviously, I didn't study all day for two weeks. I'd read and then go for a short walk. As the time passed, I began to realize how long two weeks is. Having two weeks to prepare is awesome, but the stress of waking up every day knowing what you need to do and then dealing with the anxiety of just wanting it to be over is harder than I thought it would be. I tried to review my notes at least once a day and then read more of the in-depth content with the goal of ensuring that I had some relatively deep knowledge in some areas, knew the troubleshooting tools and processes, and for the areas I couldn’t go so deep into that I at least knew the lingo and how the pieces fit together. I certainly didn’t want to get all the way to Charlotte and have some basic question come at me and just sit there staring at the conference room table blankly. :-/

By the time I was ready to leave for my interview, I knew that I’d done everything I could to prepare and I just had to hope that the hard work paid off and that my brain cells held out for another day.

The On-site interview

I arrived in Charlotte the evening before the interview. I studied on the flight and then a little the night before. Again, just reviewing my notes and the SANS guide on PKI and Kerberos. I tried not to overdo it. If I wasn't ready at this point, I never would be.

I got to the site a little early that day, so I sat in the car and read more PKI and FRS notes. Then I took about 5 minutes and tried to relax and get my nerves under control (nice try).

The interview itself was intense. It was scheduled for an hour, but by the time I got out of the conference room I’d been in there two and a half hours. There were engineers and managers from both Texas (video conference) and Charlotte in the room. The questions pretty much started where we had left off from the phone interview in terms of complexity. I didn’t get a gimme on the starting point. I think we went for about an hour before they took pity on me and let me get more caffeine and started loading me up on chocolate. By the time I got to the management portion of the interview, I was shaking pretty intensely (probably from all that soda and chocolate that they kept giving me) and I was glad that I’d brought copies of my résumé so I could remember the last 10 years of my work history.

The thing that I appreciated most about the entire process was how understanding everyone was. They know how scary this can be and how nervous people are when they come in for an interview. Although I was incredibly nervous, everyone made me feel comfortable and I felt like they genuinely wanted me to succeed. The management portion of the interview was definitely easier, but they did ask some tough questions as well. I also made sure that I had come prepared with several questions of my own to ask them.

When I finally walked out of the conference room, I felt like a train had hit me. Emotionally I was shot, physically I was somewhere between wired and exhausted. It was definitely the most grueling interview I’d ever experienced, but I knew that I’d done everything I could to prepare. The coolest part happened as I was escorted to my car. As we were finishing our formalities, my host got a phone call on his cell phone and it was for me. This was probably the weirdest thing that had ever happened to me at an interview. I took his cell phone and it was one of the managers that had participated in my interview, she was calling to let me know that they were going to make me an offer and wanted to let me know before I left so I wouldn’t be worried about it all the way home on the plane. Getting that phone call before I left was an amazing feeling. I’d just been through a grueling interview that I’d spent weeks (really my entire career) preparing for and finding out my hard work had paid off was an unbelievable feeling. It didn’t become real until I got my blue badge a few days after my start date.

Hindsight is 20/20

Looking back at my career and my preparation for this role, is there anything that I would do differently to better prepare? Career-wise, I’d say that I did a good job of preparing for this role. I took increasingly more challenging roles from both a technical and a leadership perspective. I led projects that required me to be both the technical leader (designing, planning, testing, documenting a system) and a project leader (collaborating with other teams, managing schedules, reporting progress to management, dealing with road blocks and competing priorities). These experiences have given me insight and perspective on the environments and processes that my customers work with daily.

If I could do anything differently, I’d say that I would have dug in a little deeper on technologies that I didn’t deal with as part of my roles. For instance, learning more about SQL and IIS or even Exchange would have helped me better understand to what degree my technologies are critical to the functionality of others. Often our support cases center on the integration of multiple technologies, so having a better understanding of those technologies can be beneficial.

If you are newer to the industry, focusing on troubleshooting methodologies is a must. The job of support is to assist with troubleshooting in order to resolve technical issues. The entire interview process, from the phone-screen to the on-site interview, focused on my ability to be presented with a situation I am not familiar with and use my knowledge of technology and troubleshooting tools to isolate the problem. If you haven’t reviewed Mark Renoden’s post on Effective Troubleshooting, I highly recommend it. This is what being in support is all about.

Just don’t be these guys

So, what's it really like?

Working in support at Microsoft is by far the most technically demanding role I’ve had during the course of my career. Every day is a new challenge. Every day you work on a problem you’ve never seen before. It’s a lot like working in an Emergency room at times. Systems are down, businesses are losing money, the pressure is high and the expectations are even higher. Fortunately, not all cases are critsits (severity A) and the people I work with are amazing. My row is comprised of some of the most intelligent but “unique” people I’ve ever worked with. In ten minutes on the row, you can participate in a conversation about how the code in Group Policy chooses a Domain Controller for writes and which MIDI rendition of “Jump” is the best (for the record, they are all bad). While the cases are difficult and the pressure is intense, the work environment allows us to be ourselves and we are never short on laughs.

The last two years have been an incredible journey. I’ve learned more at Microsoft in two years than I did in five out in the industry. I get to work on some of the largest environments in the world and help people every day. While this isn't a prescription for how to prepare for an interview at Microsoft, it worked for me; and if you're crazy enough to want to work with Ned and the rest of us maybe it will work for you too. GOOD LUCK!

- Kim “Office 2013 has amazing beard search capabilities” Nichols

Updated Group Policy Search service


Mike here with an important service announcement.  In June of 2010, guest poster Kapil Mehra introduced the Group Policy Search service.  The Group Policy Search (GPS) service is a web application hosted on Windows Azure, which enables you to search for registry-based Group Policy settings used in Windows operating systems.

It’s a "plezz-shzaa" to announce that GPS version 1.1.4 is live at http://gps.cloudapp.net.  Version 1.1.4 includes registry-based policy settings from Windows 8 and Windows Server 2012, performance improvements, bug fixes, and a few little surprises.  It's the easiest way to search for a Group Policy setting. 

So, the next time you need to search for a Group Policy setting, or want to know the registry key and value name that backs a particular policy setting-- don't look for an antiquated settings spreadsheet reference.  Get your Group Policy Search on!!

And, if you act now-- we'll throw in the Group Policy Search Windows Phone 7 application-- for free! That's right, take Group Policy Search with you on the go. What an offer! Group Policy Search and Group Policy Search Windows Phone 7 application -- for one low, low price -- FREE!  Act now and you'll get free shipping.

This is Mike Stephens and "Ned Pyle" approves this message!

Windows Server 2012 GA


Hey folks, Ned here again to tell you what you probably already know: Windows Server 2012 is now generally available: 

I don’t often recommend “vision” posts, but Satya Nadella – President of Server and Tools – explains why we made the more radical changes in Windows Server 2012. Rather than start with the opening line, I’ll quote from the finish:

In the 1990s, Microsoft saw the need to democratize computing and made client/server computing available at scale, to customers of all sizes. Today, our goal is to do the same for cloud computing with Windows Server 2012.

On a more personal note: Mike Stephens, Joseph Conway, Tim Quinn, Chuck Timon, Don Geddes, and I dedicated two years to understanding, testing, bug stomping, design change requesting, documenting, and teaching Windows Server 2012. Another couple dozen senior support folks – such as our very own Warren Williams - spent the last year working with customers to track down issues and get feedback. Your feedback. You will see things in Directory Services that were requested through this blog.

Having worked on a number of pre-release products, this is the most Support involvement in any Windows operating system I have ever seen. When combined with numerous customer and field contributions, I believe that Windows Server 2012 is the most capable, dependable, and supportable product we’ve ever made. I hope you agree.

- Ned “also, any DS issues you find were missed by Mike, not me” Pyle

Let the Blogging begin…


Hello AskDS Readers. Mike here again. If you notice, Ned posted one of our first Windows Server 2012 RTM blogs a while back (Managing RID Issuance in Windows Server 2012). Yes friends, the gag order has been lifted and we are allowed to spout mountains of technical goodness about Windows Server 2012 and Windows 8.

"So much time and so little to do. Wait a minute. Strike that. Reverse it." Windows Server 2012 has many cool features that Ned and I have been waiting to share with you. Here is a 50,000-foot view of the technologies and features we are going to blog in the next few weeks and months-- in no specific order.

I'll start by highlighting some of the changes with security, PKI, authentication, and authorization. The Windows Server 2012 Certificate Services role has a few feature changes that should delight many of the certificate administrators out there. With a new installation and deployment experience and improved configuration, it's probably the easiest certificate authority to configure yet.

Windows Server 2012 authentication is a healthy technology with a ton of technical goo just seeping at the seams, starting with the mac-daddy of them all-- Kerberos. In a few weeks, we will begin publishing the first of many installments on Kerberos changes in Windows 8/Windows Server 2012. As a teaser, the lineup includes KDC Proxy Server, the latest and greatest way to configure Kerberos Constrained Delegation-- "It really whips the llama's @#%." We'll take some exhaustive time explaining Kerberos enhancements such as Kerberos Armoring and Compound Identity. We have tons more to share in the area of authentication, including Virtual Smartcard Readers and Picture Password logon.

Advanced client security highlights features like Server Name Indication (SNI) for Windows Server 2012, Certificate Lifecycle Notification, Weak Key Protection (most of which is published in Jonathan Stephens' latest blog, RSA Key Blocking is Here!), Implicit binding, which is the infrastructure behind the new Centralized Certificate Store IIS feature, and Client certificate hints. Advanced client security also includes a wicked-cool security enhancement to PFX files and a new PKI module for Windows PowerShell.

At some point in our publishing timeline, we'll launch into the saga of all sagas, Dynamic Access Control. We've hosted guest posts here on AskDS to introduce this radical, amazingly cool new way to perform file-based authorization. This isn't your grandfather's authorization either. Dynamic Access Control or DAC as we’ll call it, requires planning, diligence, and an understanding of many dependencies, such as Active Directory, Kerberos, and effective access. Did I mention there are many knobs you must turn to configure it? No worries though, we'll break DAC down into consumable morsels that should make it easy for everyone to understand.

The claims story continues as we show you how to use Windows Server 2012's Active Directory Federation Services role to leverage claims issued by Windows domain controllers. Using AD FS, you can pass through the Windows authorization claims or transform them into well-known SAML-based claim types.

No, I'm not done yet. I'm going to introduce a well-hidden feature that hasn't received much exposure, but has been labeled "pretty cool" by many training attendees. Access Denied Assistance is a gem of a feature that is locked away within the File Server Resource Manager (FSRM). It enables you to provide a SharePoint-like experience for users in Windows Explorer when they hit an access denied or file not found error on a shared file or folder. Access Denied Assistance provides the user with a "Request Access" interface that sends an email to the share owner with details on the access requested and guidance the share owner can follow to remediate the problem. It's very slick.

Wait, there is more; this is just my list of topics to cover. Ned has a fun-bag full of Active Directory related material that he'll intermix with these topics to keep things fresh. I'm certain we'll sneak in a few extras that may not be directly related to Directory Services; however, they will help you make your Windows Server 2012 and Windows 8 experience much better. Need to run for now; this blog post just wrote checks my body can't cash.

The line above and below this were intentionally left blank using Microsoft Word 2013 Preview Edition

Mike "There's no earthly way of knowing; which direction they are going... There's no knowing where they're rowing..." Stephens

MaxTokenSize and Windows 8 and Windows Server 2012


Hello AskDS Populous, Mike here and I want to share with you some of the excellent enhancements we accomplished in Windows 8 and Windows Server 2012 around MaxTokenSize. Let's review MaxTokenSize and its symptoms before we jump into the wonderful world of Windows 8 (say that three times fast).

Wonderful World of Windows 8
Wonderful World of Windows 8
Wonderful World of Windows 8

What is MaxTokenSize

Kerberos has been the default and preferred authentication protocol since the release of Windows 2000 Server. Over the last few years, Microsoft has made some significant investments in providing extensions to the protocol. One of those extensions to Kerberos is the Privilege Attribute Certificate or PAC (defined in the Windows Server protocol specification MS-PAC).

Microsoft created the PAC to encapsulate authorization-related information in a manner consistent with RFC4120. The authorization information included in the PAC includes security identifiers and user profile information such as full name, home directory, and bad password count. Security identifiers (SIDs) included in the PAC represent the user's current SID, any instances of SID history, and security group memberships, including current domain groups, resource domain groups, and universal groups.

Kerberos uses a buffer to store authorization information and reports this size to applications using Kerberos for authentication. MaxTokenSize is the size of the buffer used to store authorization information. This buffer size is important because some protocols such as RPC and HTTP use it when they allocate memory for authentication. If the authorization data for a user attempting to authenticate is larger than the MaxTokenSize, then the authentication fails for that connection using that protocol. This explains why authentication failures could occur when authenticating to IIS but not when authenticating to a folder shared on a file server. The default buffer size for Kerberos in Windows 7 and Windows Server 2008 R2 is 12k.

Windows 8 and Windows Server 2012

Let's face the facts of today's IT environment… authentication and authorization are not getting easier; they're becoming more complex. In the world of single sign-on and user claims, the amount of authorization data is increasing. If your infrastructure has already seen authentication failures because a user was a member of too many groups, that growth justifies some concern for the future. Fortunately, Windows 8 and Windows Server 2012 have features to help us take proactive measures to avoid the problem.

Default MaxTokenSize

Windows 8 and Windows Server 2012 benefit from an increased MaxTokenSize of 48k. Therefore, when HTTP relies on the MaxTokenSize value for memory allocation, it allocates 48k of memory for the authentication buffer, which holds substantially more authorization information than in previous versions of Windows, where the default MaxTokenSize was only 12k.

Group Policy settings

Windows 8 and Windows Server 2012 introduce two new computer-based policy settings that help combat large service tickets, which are the cause of the MaxTokenSize dilemma. The first of these policy settings is not exactly new; it has existed in Windows for years, but only as a registry value. Use the policy setting Set maximum Kerberos SSPI context token buffer size to change MaxTokenSize using Group Policy. Looking closely at this policy setting in the Group Policy Management Editor, you'll notice the icon for this setting is slightly different from the others around it.

clip_image001

This difference is attributed to the registry location the policy setting modifies when enabled or disabled. The policy writes to the same MaxTokenSize registry key and value name that has been used in earlier versions of Windows:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters\MaxTokenSize

Therefore, you can use this computer-based policy setting to manage Windows 8, Windows Server 2012, and earlier versions of Windows. The catch here is that this registry location is not a managed policy location. Managed policy locations are removed and reapplied during policy refreshes to avoid persistent settings in the registry after a Group Policy object goes out of scope. That behavior does not occur with this key: the value written by this policy setting is not removed during policy refresh. Therefore, the setting persists even if the Group Policy object providing it falls out of scope.
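
If you just want to set the value directly on a single computer rather than through Group Policy, a minimal sketch looks something like this (48000 decimal matches the 48k default described above; this is an alternative to the policy setting, not part of it):

# Sketch: write MaxTokenSize directly to the registry on one computer
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters'
New-ItemProperty -Path $key -Name 'MaxTokenSize' -PropertyType DWord -Value 48000 -Force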

The second policy setting is very cool and answers the question that customers always asked when they encounter a problem with MaxTokenSize: "How big is the token?" You might be one of those people that went on the crusade of a lifetime using TOKENSZ.EXE and spent countless hours trying to determine the optimal MaxTokenSize for your environment. Those days are gone.

A new KDC policy setting, Warning events for large Kerberos tickets, provides you with a way to monitor the size of Kerberos tickets issued by KDCs. When you enable this policy setting, you must then configure a ticket threshold size. The KDC uses the ticket threshold size to determine if it should write a warning event to the system event log. If the KDC issues a ticket that exceeds the ticket threshold size, then it writes a warning. This policy setting, when enabled, defaults to 12k, which is the default MaxTokenSize of previous versions of Windows.

clip_image003

Ideally, if you use this policy setting, you'd likely want to set the ticket threshold value to approximately 1k less than your current MaxTokenSize. You want it lower than your current MaxTokenSize (unless you are using 12k, which is the minimum value) so you can use the warning events as a proactive measure to avoid an authentication failure due to an incorrectly sized buffer. Set the threshold too low and you'll just train yourself to ignore the Event 31 warnings because they'll become noise in the event log. Set it too high and you're likely to be blindsided by authentication failures rather than warned by events.

clip_image004

Earlier I said that this policy setting solves your problems with fumbling with TOKENSZ and other utilities to determine MaxTokenSize-- here's how. If you examine the details of the Kerberos-Key-Distribution-Center Warning event ID 31, you'll notice that it gives you all the information you need to determine the optimal MaxTokenSize in your environment. In the following example, the user Ned is a member of over 1000 groups (he's very popular and a big deal on the Internet). When I attempted to log on as Ned using the RUNAS command, I generated an Event ID 31. The event description provides you with the service principal name, the user principal name, the size of the ticket requested, and the size of the threshold. This enables you to aggregate all the event 31s and identify the maximum ticket size requested. Armed with this information, you can set the optimal MaxTokenSize for your environment.

clip_image006
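
If you would rather not click through the events one at a time, a hedged sketch like this can pull the warnings from the System log on a KDC with Windows PowerShell (the provider name filter is an assumption; adjust it if your events show a different source):

# Sketch: list KDC warning 31 events so you can eyeball the requested ticket sizes
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 31 } |
    Where-Object { $_.ProviderName -like '*Key-Distribution-Center*' } |
    Select-Object TimeCreated, Message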

KDC Resource SID Compression

Kerberos authentication inserts the security identifiers (SIDs) of the security principal, its SID history, and all the groups of which the user is a member, including universal groups and groups from the resource domain. Security principals with too many group memberships greatly affect the size of the authentication data. Sometimes the authentication data is larger than the allocated size reported by Kerberos to applications, which can cause authentication failures in some applications. Because SIDs from the resource domain share the same domain portion of the SID, these SIDs can be compressed by providing the resource domain SID only once for all SIDs in the resource domain.

Windows Server 2012 KDCs help reduce the size of the PAC by taking advantage of resource SID compression. By default, a Windows Server 2012 KDC will always compress resource SIDs. To compress resource SIDs, the KDC stores the SID of the resource domain of which the target resource is a member.  Then, it inserts only the RID portion of each resource SID into the ResourceGroupIds portion of the authentication data.

Resource SID Compression reduces the size of each stored instance of a resource SID because the domain SID is stored once rather than with each instance. Without resource SID Compression, the KDC inserts all the SIDs added by the resource domain in the Extra-SID portion of the PAC structure, which is a list of SIDs.  [MS-KILE]

Interoperability

Other Kerberos implementations may not understand resource group compression and therefore are not compatible. In these scenarios, you may need to disable resource group compression to allow the Windows Server 2012 KDC to interoperate with the third-party Kerberos implementation.

Resource SID compression is on by default; however, you can disable it. You disable resource SID compression on a Windows Server 2012 KDC using the DisableResourceGroupsFields registry value under the HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kdc\Parameters registry key. This registry value has a DWORD registry value type. You completely disable resource SID compression when you set the registry value to 1. The KDC reads this configuration when building a service ticket. When the value is set to 1, the KDC does not use resource SID compression when building the service ticket.
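
A hedged sketch of flipping that value with Windows PowerShell instead of REGEDIT (run on the Windows Server 2012 KDC; set the value back to 0 or delete it to restore the default, compressed behavior):

# Sketch: disable resource SID compression on this KDC
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Kdc\Parameters'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name 'DisableResourceGroupsFields' -PropertyType DWord -Value 1 -Force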

Wrap up

There's the skinny on the Kerberos enhancements included in Windows 8 and Windows Server 2012 that specifically target large service ticket and MaxTokenSize scenarios. To summarize:

· Increased default MaxTokenSize from 12k to 48k

· New Group Policy setting to centrally manage MaxTokenSize

· New Group Policy setting to write warnings to the system event log when a service ticket exceeds a designated threshold

· New Resource SID compression to reduce the storage size of SIDs from the resource domain

Keep an eye out for more Windows 8 and Kerberos needful

Mike "~Mike" Stephens


Monthly Mail Sack: I Hope Your Data Plan is Paid Up Edition


Hi all, Ned here again with that thing we call love. Blog! I mean blog. I have a ton to talk about now that I have moved to the monthly format, and I recommend you switch to WIFI if you’re on your phone.

This round I answer your questions on:

I will bury you!

image
With screenshots!

Question

Is there a way to associate a “new” domain controller with an “existing” domain controller account in Active Directory? I.e. if I have a DC that is dead and has to be replaced, I have to metadata clean the old DC out before I promote a replacement DC with the same name.

Answer

You can “reinstall” DCs, attaching to existing DC objects that were not removed by demotion/metadata cleanup. In Windows Server 2012 this is detected and handled by the AD DS config wizard right after you choose a replica DC and get to the DC Options page, or with the Install-AddsDomainController cmdlet using the -AllowDomainControllerReinstall argument.

image
Neato
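
For the curious, a minimal sketch of the cmdlet route (the domain name and credential are hypothetical, and the cmdlet will still prompt you for the DSRM password):

# Sketch: promote a replacement DC that reuses the existing, un-cleaned account
Import-Module ADDSDeployment
Install-ADDSDomainController -DomainName 'contoso.com' -InstallDns `
    -AllowDomainControllerReinstall -Credential (Get-Credential CONTOSO\Administrator)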

If using an older operating system, no such luck (this actually existed in dcpromo.exe /unattend in 2008 R2, but didn't work AFAIK). You should use DSA.MSC or NTDSUTIL to perform metadata cleanup on the old domain controller before promoting its replacement.

Question

I’ve read in the past – from you - that DFSR using SYSVOL supports the change notification flag on AD DS replication links or connection objects. Is this true? I am finding very inconsistent behavior.

Answer

Not really (and I updated my old writing on this– yes, Ned can be wrong).

DFSR always replicates immediately and continuously with its own internal change notification, as long as the schedule is open; these scheduled windows are in 15 minute blocks and are assigned on the AD DS connection objects.

If the current time matches an open block, you replicate continuously (as fast as possible, sending DFSR change notifications) until that block closes.

If the next block is closed, you wait for 15 minutes, sending no updates at all. If that next block had also been open, you continue replicating at max speed. Therefore, to replicate with change notification, set the connection objects to use a fully opened window. For example:

image

To make DFSR SYSVOL slower, you must close the replication schedule windows on the connections. But since the historical scenario is a desire to make group policy/script replication faster - and since it is better that SYSVOL beat AD DS, because SYSVOL contains the files that are called for once AD DS is updated - this scenario is less likely or important. Not to mention that ideally, SYSVOL is pretty static.

Question

I was using the new graphical Fine Grained Password Policy in Windows Server 2012 AD Administrative Center. I realized that it lets me set a minimum password length of 255 characters.

image

When I edit group policy in GPMC, it doesn’t let me set a minimum of more than 14 characters!

image

Did I find a bug?

Answer

Nope. The original reason for the 14 character limit was to force users to set a 15 character password and thereby force the removal of LM password hashes (which is sort of silly at this point, as we have a security setting called Do not store LAN Manager hash value on next password change that makes this moot and is enabled by default in our later operating systems). The security policy editor enforces the 14 character limit, but this is not the actual limit. You can use ADSIEDIT to change it, for example, and that will work.
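
If ADSIEDIT feels heavy, here is a hedged sketch of the same edit using the ActiveDirectory module; it writes the minPwdLength attribute on the domain head, just as ADSIEDIT would (20 is an arbitrary example value):

# Sketch: raise the domain minimum password length past the 14 character UI limit
Import-Module ActiveDirectory
Set-ADObject -Identity (Get-ADDomain).DistinguishedName -Replace @{ minPwdLength = 20 }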

The true maximum limit in Active Directory for your password is 255 Unicode characters, and that's what ADAC is enforcing. But many pieces of Windows software limit you to 127 character passwords, or even fewer; for example, the NET USE command: if you set a password to 254 characters and then attempt to map a drive with NET USE, it ignores the characters beyond 127 and you always receive “unknown user name or bad password.” So be careful here.

It goes without saying that if you are requiring a minimum password length of even 25 characters, you are kind of a jerk :-D. Time for smartcard logons, dudes and dudettes; there is no way your users are going to remember passwords that long and it will be on Post-It notes all over their cubicles.

Totally unrelated note: the second password shown here is exactly 127 characters:

image
Awesome

Question

I am using USMT 4.0 and running scanstate on a computer with multiple fixed hard drives, like C:, D:, E:. I want to migrate to new Windows 7 machines that only have a C: drive. Do I need to create a custom XML file?

Answer

I could have sworn I wrote something up on this before but darned if I can find it. The short answer is – use migdocs.xml and it will all magically work. The long answer and demonstration of behavior is:

1. I have a computer with C: and D: fixed drives (OS is unimportant, USMT 4.0 or later).

2. On the C: drive I have two custom folders, each with a custom file.

clip_image001

3. On the D: drive I have two custom folders, each with a custom file.

clip_image001[5]

4. One of the folders is named the same on both drives, with a file that is named the same in that folder, but contains different contents.

clip_image002

clip_image003

5. Then you run scanstate with no hardlinks (e.g. scanstate c:\store /i:migdocs.xml /c /o)

6. Then you go to a machine with only a C: drive (in my repro I was lazy and just deleted my D: drive) and copy the store over.

7. Run loadstate (e.g. loadstate c:\store /i:migdocs.xml /c)

8. Note how the folders on D: are migrated into C:, merging the folders and creating renamed copies of files when there are duplications:

clip_image004 clip_image005

clip_image006

clip_image007

Question

Where does Active Directory get computer specific information like Operating System, Service Pack level, etc., for computer accounts that are joined to the domain? I'm guessing WMI but I'm also wondering how often it checks.

Answer

AD gets it from attributes on the computer object (operatingSystem, operatingSystemVersion, and operatingSystemServicePack, for example).

AD relies on the individual Windows computers to take care of it – such as when joining the domain, being upgraded, being service packed, or after reboot. Nothing in AD confirms or maintains it outside those “client” processes, so if I change my OS version info using ADSIEDIT, that's the OS as far as AD is concerned and it's not going to change back unless the Windows computer makes it happen. Which it will!
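
If you want to check what a computer has registered for itself, here is a quick sketch using the ActiveDirectory module (the server name is hypothetical):

# Sketch: read the OS attributes straight off the computer object
Import-Module ActiveDirectory
Get-ADComputer srv1 -Properties operatingSystem, operatingSystemVersion, operatingSystemServicePack |
    Select-Object Name, operatingSystem, operatingSystemVersion, operatingSystemServicePack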

Here I change a Win2008 R2 server to use nomenclature similar to our Linux and Apple competitors:

image

And here it is after I reboot that computer:

image

That would be a good band name, now that I think about it.

Question

I’d like to add a DFSR file replication filter but I have hundreds of RFs and don’t want to click around Dfsmgmt.msc for days. Is there a way to set this globally for entire replication groups?

Answer

Not per se; DFSR file filters are set on each replicated folder in Active Directory.

But setting it via a Windows PowerShell loop is not hard. For example, in Win2008 R2, where I imported the activedirectory module - here I am (destructively!) setting a filter that matches the defaults plus a new extension on all RFs in this domain:

image
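
The screenshot doesn't reproduce here, so below is a hedged sketch of that kind of loop. It assumes the filter lives in the msDFSR-FileFilter attribute on each msDFSR-ContentSet object and, like the original, it destructively overwrites whatever filter is already set:

# Sketch: stamp the default filter plus *.foo onto every replicated folder in the domain
Import-Module ActiveDirectory
Get-ADObject -LDAPFilter '(objectClass=msDFSR-ContentSet)' -SearchBase (Get-ADDomain).DistinguishedName |
    Set-ADObject -Replace @{ 'msDFSR-FileFilter' = '~*, *.bak, *.tmp, *.foo' }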

Question

Is there a way to export and import the DFS Replication configuration the way we do for DFSN? It seems like no but I want to make sure I am not missing anything.

Answer

DFSRADMIN LIST shows the configuration and there are a couple of export/import commands for scheduling. But overall this is going to be a semi-manual process for you unless you write your own tool or scripts. Ultimately, it's all just LDAP data, after all – this is how frs2dfsr.exe works.

Once you list and inventory everything, the DFSRADMIN BULK command is useful to recreate things accurately.

Question

Does USMT migrate Internet Explorer Autocomplete Settings?

image

Answer

I really should make you figure this out for yourself… but I am feeling pleasant today. These settings are all here:

image
Hint hint – Process Monitor is always your friend with custom USMT coding

Looking at the USMT 5.0 replacement manifest:

  • MICROSOFT-WINDOWS-IE-INTERNETEXPLORER-REPL.MAN (from Windows 8)

I see that we do get the \Internet Explorer\ and all sub-data (including Main and DomainSuggestion) for those specific registry values with no exclusions. We also get the Explorer\Autocomplete in that same manifest, likewise without exclusion.

  • MICROSOFT-WINDOWS-IE-INTERNETEXPLORER-DL.MAN (from XP)

Ditto. We grab all this as well.

Question

I have read that Windows Server 2008 R2 has the following documented and supported DFSR limits:

The following list provides a set of scalability guidelines that have been tested by Microsoft on Windows Server 2008 R2 and Windows Server 2008:

  • Size of all replicated files on a server: 10 terabytes.
  • Number of replicated files on a volume: 8 million.
  • Maximum file size: 64 gigabytes.

Source: http://technet.microsoft.com/en-us/library/f9b98a0f-c1ae-4a9f-9724-80c679596e6b(v=ws.10)#BKMK_00

What happens if I exceed these limits? Should I ever consider exceeding these limits? I want to use much more than these limits!

(Asked by half a zillion customers in the past few weeks)

Answer

With more than 10TB or 8 million files, the support will only be best effort (i.e. you can open a support case and we will attempt to assist, but we may reach a point where we have to say “this configuration is not supported” and we cannot assist further). If you need us to fully support more end-to-end, you need a solution different from Win2008 R2 DFSR.

To exceed the 10TB limit – which again, is not supported nor recommended – seriously consider:

  1. High reliability fabric to high reliability storage– i.e. do not use iSCSI. Do not use cheap disk arrays. Dedicated fiber or similar networks only with redundant paths, to a properly redundant storage array that costs a poop-load of money.
  2. Store no more than 2TB per volume– There is one DFSR database per volume, which means if there is a dirty shutdown, recovery affects all replicated data on that volume. 1TB max would be better.
  3. Latest DFSR hotfixes at all times – http://support.microsoft.com/kb/968429. This especially includes using http://support.microsoft.com/kb/2663685, combined with read-only replication when possible.

Actually, just read Warren’s common DFSR mistakes post 10 times. Then read it 10 more times.

Hmm… I recommend all these even when under 10TB…

Other stuff

RSAT for Windows 8 RTM is… RTM. Grab it here.

I mentioned mall hair in last month’s mail sack. When that sort of thing happens in MS Support, colleagues provide helpful references:

clip_image001
I hate you, Justin

Speaking of the ridiculous group I work with, this is what you get when Steve Taylor wants to boost team morale on a Friday:


Couldn’t they just have the bass player record one looped note?

Canada, what the heck happened?!

clip_image002[5]

Still going…

clip_image003[5]

I mean… Norway? NORWAY IN THE SUMMER GAMES? They eat pickled herring and go sledding in June! I’ll grant that if you switch to medal count, you’re a respectable 13th. Good work, America’s Hat.

In other news bound to depress canucks, the NHL is about to close up shop yet again. Check out this hilarious article courtesy of Mark.

 

Finally

I am heading out to Redmond next week to teach a couple days of Certified DS Master, then on to San Francisco and Sydney to vacate and yammer even more. I’ll be back in a few weeks; Jonathan will answer your questions in the meantime and I think Mike has posts aplenty to share. When I return – and maybe before – I will have some interesting news to share.

See you in a few weeks.

- Ned “don’t make me take off my shoe” Pyle

Windows Server 2012 Shell game


Here's the scenario: you just downloaded the RTM ISO for Windows Server 2012 using your handy, dandy, "wondermus" Microsoft TechNet subscription. Using Hyper-V, you create a new virtual machine, mount the ISO, and breeze through the setup screen until you are mesmerized by the Newton's cradle-like experience of the circular progress indicator.

clip_image002

Click…click…click…click-- installation complete; the computer reboots.

You provide Windows Server with a new administrator password. Bam: done! Windows Server 2012 presents the credential provider screen and you logon using the newly created administrator account, and then…

Holy Shell, Batman! I don't have a desktop!

clip_image004

Hey everyone, Mike here again to bestow some Windows Server 2012 lovin'. The previously described scenario is not hypothetical-- many have experienced it when they installed the pre-release versions of Windows Server 2012. And it is likely to resurface as we move past Windows Server 2012 general availability on September 4. If you are new to Windows Server 2012, then you're likely one of those people staring at a command prompt window on your fresh installation. The reason you are staring at command prompt is that Windows Server 2012's installation defaults to Server Core and in your haste to try out our latest bits, you breezed right past the option to change it.

This may be old news for some of you, but it is likely that one or more of your colleagues is going to perform the very actions that I describe here. This is actually a fortunate circumstance as it enables me to introduce a new Windows Server 2012 feature.

clip_image006

There were two server installation types prior to Windows Server 2012: full and core. Core servers provide a low attack surface by removing the Windows Shell and Internet Explorer completely. However, Server Core presented quite a challenge for many Windows administrators, as Windows PowerShell and command-line utilities were the only methods available to manage the server and its roles locally (you could use most management consoles remotely).

Those same two server installation types return in Windows Server 2012; however, we have added a third installation type: Minimal Server Interface. Minimal Server Interface enables most local graphical user interface management tasks without requiring you to install the server's user interface or Internet Explorer. Minimal Server Interface is a full installation of Windows that excludes:

  • Internet Explorer
  • The Desktop
  • Windows Explorer
  • Windows 8-style application support
  • Multimedia support
  • Desktop Experience

Minimal Server Interface gives Windows administrators - who are not comfortable using Windows PowerShell as their only option - the benefit of a reduced attack surface and fewer reboot requirements (think Patch Tuesday), while still allowing GUI management as they ramp up their Windows PowerShell skills.

clip_image008

"Okay, Minimal Server Interface seems cool Mike, but I'm stuck at the command prompt and I want graphical tools. Now what?" If you were running an earlier version of Windows Server, my answer would be reinstall. However, you're running Windows Server 2012; therefore, my answer is "Install the Server Graphical Shell or Install Minimal Server Interface."

Windows Server 2012 enables you to change the shell installation option after you've completed the installation. This solves the problem if you are staring at a command prompt. However, it also solves the problem if you want to keep your attack surface low, but are simply a Windows PowerShell guru in waiting. You can choose Minimal Server Interface, or you can decide to add the Server Graphical Shell for a specific task and then remove it when you have completed that management task (understand, however, that switching shell options requires you to restart the server).

Another scenario solved by the ability to add the Server Graphical Shell is that not all server-based applications work correctly on Server Core, or you cannot manage them on Server Core. Windows Server 2012 enables you to try the application on Minimal Server Interface and, if that does not work, change the server installation to include the Graphical Shell, which is the equivalent of the Server GUI installation option during setup (the one you breezed by during the initial setup).

Removing the Server Graphical Shell and Graphical Management Tools and Infrastructure

Removing the Server shell from a GUI installation of Windows is amazingly easy. Start Server Manager, click Manage, and click Remove Roles and Features. Select the target server and then click Features. Expand User Interfaces and Infrastructure.

To reduce a Windows Server 2012 GUI installation to a Minimal Server Interface installation, clear the Server Graphical Shell checkbox and complete the wizard. To reduce a Windows Server GUI installation to a Server Core installation, clear the Server Graphical Shell and Graphical Management Tools and Infrastructure check boxes and complete the wizard.

clip_image010

Alternatively, you can perform these same actions using the Server Manager module for Windows PowerShell, and it is probably a good idea to learn how to do this. I'll give you two reasons why: It's wicked fast to install and remove features and roles using Windows PowerShell and you need to learn it in order to add the Server Shell on a Windows Core or Minimal Server Interface installation.

Use the following command to view a list of the Server GUI components

clip_image011

Get-WindowsFeature server-gui*

Give your attention to the Name column. You use this value with the Remove-WindowsFeature and Install-WindowsFeature PowerShell cmdlets.

To remove the server graphical shell, which reduces the GUI server installation to a Minimal Server Interface installation, run:

Remove-WindowsFeature Server-Gui-Shell

To remove the Graphical Management Tools and Infrastructure, which further reduces a Minimal Server Interface installation to a Server Core installation, run:

Remove-WindowsFeature Server-Gui-Mgmt-Infra

To remove the Graphical Management Tools and Infrastructure and the Server Graphical Shell, run:

Remove-WindowsFeature Server-Gui-Shell,Server-Gui-Mgmt-Infra

Adding Server Graphical Shell and Graphical Management Tools and Infrastructure

Adding Server Shell components to a Windows Server 2012 Core installation is a tad more involved than removing them. The first thing to understand with a Server Core installation is that the actual binaries for the Server Shell do not reside on the computer. This is how a Server Core installation achieves a smaller footprint. You can determine if the binaries are present by using the Get-WindowsFeature Windows PowerShell cmdlet and viewing the Install State column. The Removed value indicates the binaries that represent the feature do not reside on the hard drive. Therefore, you need to add the binaries to the installation before you can install the feature. Another indicator that the binaries do not exist in the installation is the error you receive when you try to install a feature that is removed. The Install-WindowsFeature cmdlet will proceed along as if it is working and then spend a lot of time around 63-68 percent before returning an error stating that it could not add the feature.

clip_image015

To stage Server Shell features to a Windows Core Installation

You need to get out your handy, dandy media (or ISO) to stage the binaries into the installation. Windows installation files are stored in WIM files that are located in the \sources folder of your media. There are two .WIM files on the media. The WIM you want to use for this process is INSTALL.WIM.

clip_image017

You use DISM.EXE to display the installation images and their indexes that are included in the WIM file. There are four images in the INSTALL.WIM file. Images with indexes 1 and 3 are Server Core installation images for Standard and Datacenter, respectively. Images with indexes 2 and 4 are GUI installations of Standard and Datacenter, respectively. Two of these images contain the GUI binaries and two do not. To stage these binaries to the current installation, you need to use index 2 or 4 because these images contain the Server GUI binaries. An attempt to stage the binaries using index 1 or 3 will fail.
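
The classic command is dism /Get-WimInfo /WimFile:d:\sources\install.wim; if you prefer to stay in Windows PowerShell, a quick sketch using the DISM module included in Windows 8 and Windows Server 2012 gives the same list (D: is assumed to be your mounted media):

# Sketch: list the images and their indexes inside install.wim
Get-WindowsImage -ImagePath 'D:\sources\install.wim' | Select-Object ImageIndex, ImageName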

You still use the Install-WindowsFeature cmdlet to stage the binaries to the computer; however, we are going to use the -source argument to tell Install-WindowsFeature which image and index it should use to stage the Server Shell binaries. To do this, we use a special path syntax that indicates the binaries reside in a WIM file. The Windows PowerShell command should look like:

Install-WindowsFeature server-gui-mgmt-infra,server-gui-shell -source:wim:d:\sources\install.wim:4

Pay particular attention to the path supplied to the -source argument. You need to prefix the path to your installation media's install.wim file with the keyword wim:. You need to suffix the path with a :4, which represents the image index to use for the installation. You must always use an index of 2 or 4 to install the Server Shell components. The command should exhibit the same behavior as the previous one and proceeds up to about 68 percent, at which point it will stay at 68 percent for quite a bit (if it is working). Typically, if there is a problem with the syntax or the command, it will error within two minutes of spinning at 68 percent. This process stages all the graphical user interface binaries that were not installed during the initial setup; so give it a bit of time. When the command completes successfully, it should instruct you to restart the server. You can do this using Windows PowerShell by typing the Restart-Computer cmdlet.

clip_image019

Give the next reboot more time. It is actually updating the current Windows installation, making all the other components aware the GUI is available. The server should reboot and inform you that it is configuring Windows features and is likely to spend some time at 15 percent. Be patient and give it time to complete. Windows should reach about 30 percent and then will restart.

clip_image021

It should return to the Configuring Windows feature screen with the progress around 45 to 50 percent (these are estimates). The process should continue until 100 percent and then should show you the Press Ctrl+Alt+Delete to sign in screen

clip_image023

Done

That's it. Consider yourself informed. The next time one of your colleagues gazes at their accidental Windows Server 2012 Server Core installation with that deer-in-the-headlights look, you can whip out your mad Windows PowerShell skills and turn that Server Core installation into a Minimal Server Interface or Server GUI installation in no time.

Mike

"Voilà! In view, a humble vaudevillian veteran, cast vicariously as both victim and villain by the vicissitudes of Fate. This visage, no mere veneer of vanity, is a vestige of the vox populi, now vacant, vanished. However, this valorous visitation of a by-gone vexation, stands vivified and has vowed to vanquish these venal and virulent vermin van-guarding vice and vouchsafing the violently vicious and voracious violation of volition. The only verdict is vengeance; a vendetta, held as a votive, not in vain, for the value and veracity of such shall one day vindicate the vigilant and the virtuous. Verily, this vichyssoise of verbiage veers most verbose, so let me simply add that it's my very good honor to meet you and you may call me V."

Stephens

AD FS 2.0 RelayState


Hi guys, Joji Oshima here again with some great news! AD FS 2.0 Rollup 2 adds the capability to send RelayState when using IDP initiated sign on. I imagine some people are ecstatic to hear this while others are asking “What is this and why should I care?”

What is RelayState and why should I care?

There are two protocol standards for federation (SAML and WS-Federation). RelayState is a parameter of the SAML protocol that is used to identify the specific resource the user will access after they are signed in and directed to the relying party’s federation server.
Note:

If the relying party is the application itself, you can use the loginToRp parameter instead.
Example:
https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx?loginToRp=rpidentifier

Without the use of any parameters, a user would need to go to the IDP initiated sign on page, log in to the server, choose the relying party, and then be directed to the application. Using RelayState can automate this process by generating a single URL for the user to click and be logged in to the target application without any intervention. It should be noted that when using RelayState, any parameters outside of it will be dropped.

When can I use RelayState?

We can pass RelayState when working with a relying party that has a SAML endpoint. It does not work when the direct relying party is using WS-Federation.

The following IDP initiated flows are supported when using Rollup 2 for AD FS 2.0:

  • Identity provider security token server (STS) -> relying party STS (configured as a SAML-P endpoint) -> SAML relying party App
  • Identity provider STS -> relying party STS (configured as a SAML-P endpoint) -> WIF (WS-Fed) relying party App
  • Identity provider STS -> SAML relying party App

The following initiated flow is not supported:

  • Identity provider STS -> WIF (WS-Fed) relying party App

Manually Generating the RelayState URL

There are two pieces of information you need to generate the RelayState URL. The first is the relying party’s identifier. This can be found in the AD FS 2.0 Management Console. View the Identifiers tab on the relying party’s property page.

image

The second part is the actual RelayState value that you wish to send to the Relying Party. It could be the identifier of the application, but the administrator for the Relying Party should have this information. In this example, we will use the Relying Party identifier of https://sso.adatum.com and the RelayState of https://webapp.adatum.com

Starting values:
RPID: https://sso.adatum.com
RelayState: https://webapp.adatum.com

Step 1: The first step is to URL Encode each value.

RPID: https%3a%2f%2fsso.adatum.com
RelayState: https%3a%2f%2fwebapp.adatum.com

Step 2: The second step is to take these URL encoded values, merge them with the string below, and URL encode the entire string.

String:
RPID=<URL encoded RPID>&RelayState=<URL encoded RelayState>

String with values:
RPID=https%3a%2f%2fsso.adatum.com&RelayState=https%3a%2f%2fwebapp.adatum.com

URL Encoded string:
RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

Step 3: The third step is to take the URL Encoded string and add it to the end of the string below.

String:
?RelayState=

String with value:
?RelayState=RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

Step 4: The final step is to take the final string and append it to the IDP initiated sign on URL.

IDP initiated sign on URL:
https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx

Final URL:
https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx?RelayState=RPID%3dhttps%253a%252f%252fsso.adatum.com%26RelayState%3dhttps%253a%252f%252fwebapp.adatum.com

The result is an IDP initiated sign on URL that tells AD FS which relying party STS the login is for, and also gives that relying party information that it can use to direct the user to the correct application.

image

Is there an easier way?

The multi-step process and manual manipulation of the strings are prone to human error which can cause confusion and frustration. Using a simple HTML file, we can fill out the starting information into a form and click the Generate URL button.

image

The code sample for this HTML file has been posted to CodePlex.
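
If you prefer Windows PowerShell to an HTML page, here is a hedged sketch that walks the same four steps using the example values from above:

# Sketch: build the IDP initiated sign on URL with RelayState
Add-Type -AssemblyName System.Web

$rpid       = 'https://sso.adatum.com'
$relayState = 'https://webapp.adatum.com'

# Steps 1 and 2: URL encode each value, build the inner string, then encode that string
$inner   = 'RPID=' + [System.Web.HttpUtility]::UrlEncode($rpid) + '&RelayState=' + [System.Web.HttpUtility]::UrlEncode($relayState)
$encoded = [System.Web.HttpUtility]::UrlEncode($inner)

# Steps 3 and 4: append the result to the IDP initiated sign on URL
'https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.aspx?RelayState=' + $encoded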

Conclusion and Links

I hope this post has helped demystify RelayState and will have everyone up and running quickly.

AD FS 2.0 RelayState Generator
http://social.technet.microsoft.com/wiki/contents/articles/13172.ad-fs-2-0-relaystate-generator.aspx
HTML Download
https://adfsrelaystate.codeplex.com/

AD FS 2.0 Rollup 2
http://support.microsoft.com/kb/2681584

Supporting Identity Provider Initiated RelayState
http://technet.microsoft.com/en-us/library/jj127245(WS.10).aspx

Joji "Halt! Who goes there!" Oshima

So long and thanks for all the fish


My time is up.

It’s been eight years since a friend suggested I join him on a contract at Microsoft Support (thanks Pete). Eight years since I sat sweating in an interview with Steve Taylor, trying desperately to recall the KDC’s listening port (his hint: “German anti-tank gun”). Eight years since I joined 35 new colleagues in a training room and found that despite my opinion, I knew nothing about Active Directory (“Replication of Absent Linked Object References– what the hell have I gotten myself into?”).

Eight years later, I’m a Senior Support Escalation Engineer, a blogger of some repute, and a seasoned world traveler who instructs other ‘softies about Windows releases. I’ve created thousands of pages of content and been involved in countless support cases and customer conversations. I am the last of those 35 colleagues still here, but there is proof of my existence even so. It’s been the most satisfactory work of my career.

Just the thought of leaving was scary enough to give me pause – it’s been so long since I knew anything but supporting Windows. It’s a once in a lifetime opportunity though and sometimes you need to reset your career. Now I’ll help create the next generations of Windows Server and the buck will finally stop with me: I’ve been hired as a Program Manager and am on my way to Seattle next week. I’m not leaving Microsoft, just starting a new phase. A phase with a lot more product development, design responsibility, and… meetings. Soooo many meetings.

There are two types of folks I am going to miss: the first are workmates. Many are support engineers, but also PFEs, Consultants, and TAMs. Even foreigners! Interesting and funny people fill Premier and Commercial Technical Support and make every day here enjoyable, even after the occasional customer assault. There’s nothing like a work environment where you really like your colleagues. I’ve sat next to Dave Fisher since 2004 and he’s made me laugh every single day. He is a brilliant weirdo, like so many other great people here. You all know who you are.

The other folks are… you. Your comments stayed thought provoking and fresh for five years and 700 posts. Your emails kept me knee deep in mail sacks and articles (I had to learn in order to answer many of them). Your readership has made AskDS into one of the most popular blogs in Microsoft. You unknowingly played an immense part in my career, forcing me to improve my communication; there’s nothing like a few hundred thousand readers to make you learn your craft.

My time as the so-called “editor in chief” of AskDS is over, but I imagine you will still find me on the Internet in my new role, yammering about things that I think you’ll find interesting. I also have a few posts in the chamber that Jonathan or Mike will unload after I’m gone, and they will keep the site going. AskDS will continue to be a place for unvarnished support information about Windows technologies, where your questions will get answers.

Thanks for everything, and see you again soon.

image
We are looking forward to Seattle’s famous mud puddles

 

- Ned “42” Pyle

Digging a little deeper into Windows 8 Primary Computer


[This is a ghost of Ned past article – Editor]

Hi folks, Ned here again to talk more about the Primary Computer feature introduced in Windows 8. Sharp-eyed readers may have noticed this lonely beta blog post and if you just want a step-by-step guide to enabling this feature, TechNet does it best. Today I am going to fill in some blanks and make sure the feature's architecture and usefulness is clear. At least, I'm going to try.

Onward!

Backgrounder and Requirements

Businesses using Roaming User Profiles, Offline Files, and Folder Redirection have historically been limited in controlling which computers cache user data. For instance, while there are group policies to assign roaming profiles on a per-computer basis, they affect all users of that computer and are useless if you assign roaming profiles through legacy user attributes.

Windows 8 introduces a pair of new per-user AD DS attributes to specify a "primary computer." The primary computer is the one directly assigned to a user - such as their laptop, or a desktop in their cubicle - and therefore unlikely to change frequently. We refer to this as "User-Device Affinity". That computer will allow them to store roaming user data or access redirected folder data, as well as allow caching of redirected data through offline files. There are three main benefits to using Primary Computer:

  1. When a user is at a kiosk, using a conference room PC, or connecting to the network from a home computer, there is no risk that confidential user data will cache locally and be accessible offline. This adds a measure of security.
  2. Unlike previous operating systems, an administrator now has the ability to control computers that will not cache data, regardless of the user's AD DS profile configuration settings.
  3. The initial download of a profile has a noticeable impact on logon performance; a brand new Windows 8 user profile is ~68MB in size, and that's before it's filled with "Winter is coming" meme pics. Since a roaming profile and folder redirection no longer synchronously cache data on the computer during logon, a user connecting from a temporary or home machine logs on considerably faster.

By assigning computer(s) to a user then applying some group policies, you ensure data only roams or caches where you want it.


Yoink, stolen screenshot from a much better artist

Primary Computer has the following requirements:

  • Windows 8 or Windows Server 2012 computers used for interactive logon
  • Windows Server 2012 AD DS Schema (but not necessarily Win2012 DCs)
  • Group Policy managed from Windows 8 or Windows Server 2012 GPMC
  • Some mechanism to determine each user's primary computer(s)

Determining Primary Computers

There is no attribute in Active Directory that tracks which computers a user logs on to, much less the computers they log on to the most frequently. There are a number of out of band options to determine computer usage though:

  • System Center Configuration Manager - SCCM has built in functionality to determine the primary users of computers, as part of its "Asset Intelligence" reporting. You can read more about this feature in SCCM 2012 and 2007 R2. This is the recommended method as it's the most comprehensive and because I like money.
  • Collecting 4624 events - the Security event log Logon Event 4624 with a Logon Type 2 delineates where a user logged on interactively. By collecting these events using some type of audit collection service or event forwarding, you can build up a picture of which users are logging on to which computers repeatedly.

     

     

  • Logon Script– If you're the fancy type, you can create a logon script that writes a user's computer to a centralized location, such as on their own AD object. If you grant inherited access for SELF to update (for instance) the Comment attribute on all the user objects, each user could use that attribute as storage. Then you can collect the results for a few weeks and create a list of computer usage by user.

    For example, this rather hokey illustration VBS runs as a logon script and updates a user's own Comment attribute with their computer's distinguished name, only if it has changed from the previous value:

    ' Bind to the logged-on user and the local computer through ADSystemInfo
    Set objSysInfo = CreateObject("ADSystemInfo")
    Set objUser = GetObject("LDAP://" & objSysInfo.UserName)
    Set objComputer = GetObject("LDAP://" & objSysInfo.ComputerName)

    ' Quit without writing anything if Comment already holds this computer's DN
    strMessage = objComputer.distinguishedName
    If objUser.Comment = strMessage Then WScript.Quit

    ' Otherwise record the computer's DN in the user's own Comment attribute
    objUser.Comment = strMessage
    objUser.SetInfo

    

A user may have more than one computer they log on to regularly, though, and if that's the case, an AD attribute-based storage solution is probably not the right answer unless the script builds a circular list with a restricted number of entries and logic to ensure it does not update with redundant data. Otherwise, there could be excessive AD replication. Remember, this is just a simple example to get the creative juices flowing.

  • PsLoggedOn - you can script and run PsLoggedOn.exe (a Windows Sysinternals tool) periodically during the day for all computers over the course of several weeks. That would build, over time, a list of which users frequent which computers. This requires remote registry access through the Windows Firewall.
  • Third parties - there are SCCM/SCOM-like vendors providing this functionality. I don't have details but I'm sure they have a salesman who wants a new German sports sedan and will be happy to bend your ear.

Setting the Primary Computer

As I mentioned before, look at TechNet for some DSAC step-by-step for setting the msDS-PrimaryComputer attribute and the necessary group policies. However, if you want to use native Windows PowerShell instead of our interesting out of band module, here are some more juice-flow inducing samples.

The ActiveDirectory Windows PowerShell module's Get-ADComputer and Set-ADUser cmdlets allow you to easily retrieve a computer's distinguished name and assign it to the user's primary computer attribute. You can use assigned variables for readability, or nested calls for simplicity.

Variable

<$variable> = get-adcomputer <computer name>

Set-aduser <user name> -add @{'msDS-PrimaryComputer'="<$variable>"}

For example, with a computer named cli1 and a user name stduser:
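
The original screenshot isn't reproduced here, so this is a sketch of what those two lines look like with real names:

$computer = Get-ADComputer cli1
Set-ADUser stduser -Add @{ 'msDS-PrimaryComputer' = "$computer" }   # "$computer" expands to the computer's DN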

Nested

Set-aduser <user name> -add @{'msDS-PrimaryComputer'=(get-adcomputer <computer name>).distinguishedname}

For example, with that same user and computer:
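
And the single-line equivalent, again a sketch with those names:

Set-ADUser stduser -Add @{ 'msDS-PrimaryComputer' = (Get-ADComputer cli1).DistinguishedName }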

Other techniques

If you use AD DS to store the user's last computer in their Comment attribute as part of a logon script - like described in the earlier section - here is an example that reads the stduser attribute Comment and assigns primary computer based on the contents:
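
A hedged sketch of such a command, assuming the logon script stored a computer DN in the Comment attribute:

# Sketch: copy the DN stored in stduser's Comment attribute into msDS-PrimaryComputer
$comment = (Get-ADUser stduser -Properties comment).comment
Set-ADUser stduser -Add @{ 'msDS-PrimaryComputer' = $comment }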

If you wanted to assign primary computers to all of the users within the Foo OU based on their comment attributes, you could use this example:
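
One possible sketch, with a hypothetical OU path:

# Sketch: assign a primary computer to every Foo OU user that has a Comment value
Get-ADUser -SearchBase 'OU=Foo,DC=contoso,DC=com' -LDAPFilter '(comment=*)' -Properties comment |
    ForEach-Object { Set-ADUser $_ -Add @{ 'msDS-PrimaryComputer' = $_.comment } }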

If you have a CSV file that contains the user accounts and their assigned computers as DNs, you can use the import-csv cmdlet to update the users. For example:
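
A sketch assuming a hypothetical users.csv with "user" and "computerdn" columns:

# Sketch: bulk-assign primary computers from a CSV inventory
Import-Csv C:\temp\users.csv |
    ForEach-Object { Set-ADUser $_.user -Add @{ 'msDS-PrimaryComputer' = $_.computerdn } }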

This is particularly useful when you have some asset history and assign certain users specific computers. Certainly a good idea for insurance and theft prevention purposes, regardless.

Cached Data Clearing GP

Enabling Primary Computer does not remove any data already cached on other computers that a user does not access again. I.e. if a user was already using Roaming User Profiles or Folder Redirection (which, by default, automatically adds all redirected shell folders to the Offline Files cache), enabling Primary Computer means only that further data is not copied locally to non-approved computers.

In the case of Roaming User Profiles, several policies can clear data from computers at logoff or restart:

  • Delete user profiles older than a specified number of days on system restart - this deletes unused profiles after N days when a computer reboots
  • Delete cached copies of roaming profiles - this removes locally saved roaming profiles once a user logs off. This policy would also apply to Primary Computers and should be used with caution

In the case of Folder Redirection and Offline Files, there is no specific policy to clear out stale data or delete cached data at logoff like there is for RUP, but that's immaterial:

  • When a computer needs to remove FR after becoming "non-primary" - due to the primary computer feature either being enabled or the machine being removed from the primary computer list for the user - the removal behavior will depend on how the FR policy is configured to behave on removal. It can be configured to either:
    • Redirect the folder back to the local profile– the folder location is set back to the default location in the user's profile (e.g., c:\users\%USERNAME%\Documents), the data copies from the file server to the local profile, and the file server location is unpinned from the computer's Offline Files cache
    • Leave the folder pointing to the file server–the folder location still points to the file server location, but the contents are unpinned from the computer's Offline Files cache. The folder configuration is no longer controlled through policy

In both cases, once the data is unpinned from the Offline Files cache, it is evicted from the computer in the background after 15 minutes.

Logging Primary Computer Usage

To see that the Download roaming profiles on primary computers only policy took effect and the behavior at each user logon, examine the User Profile Service operational event log for Event 63. This will state either "This computer is a primary computer for this user" or "This computer is not a primary computer for this user":
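
A hedged sketch for pulling those events with Windows PowerShell (the log name below is the User Profile Service operational channel; verify it in Event Viewer on your build):

# Sketch: list the primary computer determination events
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-User Profile Service/Operational'; Id = 63 } |
    Select-Object TimeCreated, Message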

The new User Profile Service events for Primary Computer are all in the Operational event log:

Event ID: 62
Severity: Warning
Message: Windows was unable to successfully evaluate whether this computer is a primary computer for this user. This may be due to failing to access the Active Directory server at this time. The user's roaming profile will be applied as configured. Contact the Administrator for more assistance. Error: %1
Notes and resolution: Indicates an issue contacting LDAP on a domain controller. Examine the extended error, examine the System and Application event logs for further details, and consider getting a network capture if still unclear.

Event ID: 63
Severity: Informational
Message: This computer %1 a primary computer for this user
Notes and resolution: This event's variable will change from "IS" to "IS NOT" depending on circumstances. It is not an error condition unless this is unexpected to the administrator. A customer should interrogate the rest of the IT staff on the network if not expecting to see these events.

Event ID: 64
Severity: Informational
Message: The primary computer relationship for this computer and this user was not evaluated due to %1
Notes and resolution: Examine the extended error for details.

 

To see that the Redirect folders on primary computers only policy took effect and the behavior at each user logon, examine the Folder Redirection operational event log for Event 1010. This will state "This computer is not a primary computer for this user", or that it is, as appropriate (good catch, Johan from Comments).

Architecture

Windows 8 implements Primary Computer through two new AD DS attributes in the Windows Server 2012 (version 56) Schema.

Primary Computer is a client-side feature; no matter what you configure in Active Directory or Group Policy on domain controllers, Windows 7, Windows Server 2008 R2, and older operating systems will not obey the settings.

AD DS Schema

Attribute: msDS-PrimaryComputer
Explanation: The primary computers assigned to a user or to a security group containing users. Contains multi-valued, linked-value distinguished names that reference the msDS-isPrimaryComputerFor backlink on computer objects.

Attribute: msDS-isPrimaryComputerFor
Explanation: The users assigned to a computer account. Contains multi-valued, linked-value distinguished names that reference the msDS-PrimaryComputer forward link on user objects.
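If you want to script the assignment rather than click through ADSIEdit, a plain LDIF import works. This is only a sketch with hypothetical user and computer DNs; substitute your own:

dn: CN=Kim Nichols,OU=Staff,DC=contoso,DC=com
changetype: modify
add: msDS-PrimaryComputer
msDS-PrimaryComputer: CN=KIMN-PC,OU=Workstations,DC=contoso,DC=com
-

Save that as setprimary.ldf and import it with:

ldifde -i -f setprimary.ldf

Because msDS-isPrimaryComputerFor is a backlink, you only ever write the msDS-PrimaryComputer forward link; AD maintains the other side for you.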

 

Processing

The processing of this new functionality is:

  1. Look at the Group Policy settings to determine whether the msDS-PrimaryComputer attribute in Active Directory should influence the decision to roam the user's profile or apply Folder Redirection.
  2. If step 1 is TRUE, initialize an LDAP connection and bind to a domain controller
  3. Check for the required schema version
  4. Query for the "msDS-IsPrimaryComputerFor" attribute on the AD object representing the current computer
  5. Check to see if the current user is in the list returned by this attribute or in the group returned by this attribute and if so, return TRUE for IsPrimaryComputerForUser. If no match is found, return FALSE for IsPrimaryComputerForUser
  6. If step 5 is FALSE:
    1. For RUP, an existing cached local profile should be used if present. If there is no local profile for the user, a new local profile should be created
    2. For FR, if Folder Redirection previously applied, the Folder Redirection configuration is removed according to the removal action specified by the previously applied policy (this is retained in the local FR configuration). If there is no current FR configuration, there is no work to be done

Troubleshooting

Because this feature is both new and simple, most troubleshooting is likely to follow this basic workflow when Primary Computer is not working as expected (a few commands to help run through these checks follow the list):

  1. The user is assigned the correct computer distinguished name (or is a member of the security group assigned the computer DN)
  2. AD DS replication has converged for the user and computer objects
  3. AD DS and SYSVOL replication has converged for the Primary Computer group policies
  4. The Primary Computer group policies are applying to the computer
  5. The user has logged off and on since the Primary Computer policies applied
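A few commands can knock out most of these checks quickly. The DNs and file names below are hypothetical, so substitute your own:

rem 1. Confirm the assignment on the user object
ldifde -f primary.ldf -d "CN=Kim Nichols,OU=Staff,DC=contoso,DC=com" -p base -l msDS-PrimaryComputer

rem 2 and 3. Get a quick view of AD DS replication health (SYSVOL replication needs its own check)
repadmin /replsummary

rem 4. Confirm the Primary Computer policies applied to this computer and user
gpresult /h primarycomputer.html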

The logs of note for troubleshooting Primary Computer are:

Log: Gpresult/GPMC RSoP report
Notes and explanation: Validates that Primary Computer policy is applying to the computer or user

Log: Group Policy operational event log
Notes and explanation: Validates that group policy in general is applying to the computer or user, with specific details

Log: System event log
Notes and explanation: Validates that group policy in general is applying to the computer or user, with generalities

Log: Application event log
Notes and explanation: Validates that Folder Redirection and Roaming User Profiles are working, with generalities and specific details

Log: Folder Redirection operational event log
Notes and explanation: Validates that Folder Redirection is working, with specific details

Log: User Profile Service operational event log
Notes and explanation: Validates that Roaming User Profiles are working, with specific details

Log: Fdeploy.log
Notes and explanation: Validates that Folder Redirection is working, with specific details

Cases reported by your users or help desk as Primary Computer processing issues are more likely to be AD DS replication, SYSVOL replication, group policy, folder redirection, or roaming user profile issues. Determine immediately if Primary Computer is at all to blame, then move on to the more likely historical culprits. Watch for red herrings!

Likewise, your company may not be internally aware of Primary Computer deployments and may send you down a rat hole troubleshooting expected behavior. Always ensure that a "problem" with folder redirection or roaming user profiles isn't just another group within the customer's company configuring Primary Computer and not telling you (this applies to you too; send a memo, dangit!).

Have fun.

Ned "shouldn't we have called it 'Primary Computers?'" Pyle

....And knowing is half the battle!

ADAMSync 101


Hi Everyone, Kim Nichols here again, and this time I have an introduction to ADAMSync. I take a lot of cases on ADAM and AD LDS and have seen a number of problems arise from less than optimally configured ADAMSync XML files. There are many sources of information on ADAM/AD LDS and ADAMSync (I'll include links at the end), but I still receive lots of questions and cases on configuring ADAM/AD LDS for ADAMSync.

We'll start at the beginning and talk about what ADAM/AD LDS is, what ADAMSync is and then finally how you can get AD LDS and ADAMSync working in your environment.

What is ADAM/AD LDS?

ADAM (Active Directory Application Mode) is the 2003 name for AD LDS (Active Directory Lightweight Directory Services). AD LDS is, as the name describes, a lightweight version of Active Directory. It gives you the capabilities of a multi-master LDAP directory that supports replication without some of the extraneous features of an Active Directory domain controller (domains and forests, Kerberos, trusts, etc.). AD LDS is used in situations where you need an LDAP directory but don't want the administration overhead of AD. Usually it's used with web applications or SQL databases for authentication. Its schema can also be fully customized without impacting the AD schema.

AD LDS uses the concept of instances, similar to that of instances in SQL. What this means is one AD LDS server can run multiple AD LDS instances (databases). This is another differentiator from Active Directory: a domain controller can only be a domain controller for one domain. In AD LDS, each instance runs on a different set of ports. The default instance of AD LDS listens on 389 (similar to AD).
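If you're ever unsure which instances are installed on a server and which ports they use, dsdbutil can list them. A minimal sketch, run from an elevated command prompt on the AD LDS server:

dsdbutil "list instances" quit

The output lists each instance along with the ports it is listening on.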

Here's some more information on AD LDS if you're new to it:

What is ADAMSync?

In many scenarios, you may want to store user data in AD LDS that you can't or don't want to store in AD. Your application will point to the AD LDS instance for this data, but you probably don't want to manually create all of these users in AD LDS when they already exist in AD. If you have Forefront Identity Manager (FIM), you can use it to synchronize the users from AD into AD LDS and then manually populate the AD LDS specific attributes through LDP, ADSIEdit, or a custom or 3rd party application. If you don't have FIM, however, you can use ADAMSync to synchronize data from your Active Directory to AD LDS.

It is important to remember that ADAMSync DOES NOT synchronize user passwords! If you want the AD LDS user account to use the same password as the AD user, then userproxy transformation is what you need. (That's a topic for another day, though. I'll include links at the end for userproxy.)

ADAMSync uses an XML file that defines which data will synchronize from AD to AD LDS. The XML file includes the AD partition from which to synchronize, the object types (classes or categories), and attributes to synchronize. This file is loaded into the AD LDS database and used during ADAMSync synchronization. Every time you make changes to the XML file, you must reload the XML file into the database.

In order for ADAMSync to work:

  1. The MS-AdamSyncMetadata.LDF file must be imported into the schema of the AD LDS instance prior to attempting to install the XML file (a sample import command follows this list). This LDF creates the classes and attributes for storing the ADAMSync.xml file.
  2. The schema of the AD LDS instance must already contain all of the object classes and attributes that you will be syncing from AD to AD LDS. In other words, you can't sync a user object from AD to AD LDS unless the AD LDS schema contains the User class and all of the attributes that you specify in the ADAMSync XML (we'll talk more about this next). There is a blog post on using ADSchemaAnalyzer to compare the AD schema to the AD LDS schema and export the differences to an LDF file that can be imported into AD LDS.
  3. Unless you plan on modifying the schema of the AD LDS instance, your instance should be named DC=<partition name>, DC=<com or local or whatever> and not CN=<partition name>. Unfortunately, the example in the AD LDS setup wizard uses CN= for the partition name.  If you are going to be using ADAMSync, you should disregard that example and use DC= instead.  The reason behind this change is that the default schema does not allow an organizationalUnit (OU) object to have a parent object of the Container (CN) class. Since you will be synchronizing OUs from AD to AD LDS and they will need to be child objects of your application partition head, you will run into problems if your application partition is named CN=.




    Obviously, this limitation is something you can change in the AD LDS schema, but simply naming your partition with DC= name component will eliminate the need to make such a change. In addition, you won't have to remember that you made a change to the schema in the future.
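The step 1 import is typically just an ldifde run from the %windir%\ADAM directory. The command below is a sketch that assumes the instance is local on port 389; the -c switch replaces the configuration naming context placeholder used inside the LDF file:

ldifde -i -f MS-AdamSyncMetadata.LDF -s localhost:389 -j . -c "cn=Configuration,dc=X" #configurationNamingContext

The schema differences exported by ADSchemaAnalyzer for step 2 are imported the same way, substituting whatever placeholder constant the generated LDF tells you to use.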

The best advice I can give regarding ADAMSync is to keep it as simple as possible to start off with. The goal should be to get a basic XML file that you know will work, gradually add attributes to it, and troubleshoot issues one at a time. If you try to do too much (too wide of object filter or too many attributes) in the XML from the beginning, you will likely run into multiple issues and not know where to begin in troubleshooting.

KEEP IT SIMPLE!!!

MS-AdamSyncConf.xml

Let's take a look at the default XML file that Microsoft provides and go through some recommendations to make it more efficient and less prone to issues. The file is named MS-AdamSyncConf.XML and is typically located in the %windir%\ADAM directory.

<?xml version="1.0"?>
<doc>
<configuration>
<description>sample Adamsync configuration file</description>
<security-mode>object</security-mode>
<source-ad-name>fabrikam.com</source-ad-name> <------ 1
<source-ad-partition>dc=fabrikam,dc=com</source-ad-partition> <------ 2
<source-ad-account></source-ad-account> <------ 3
<account-domain></account-domain> <------ 4
<target-dn>dc=fabrikam,dc=com</target-dn> <------ 5
<query>
<base-dn>dc=fabrikam,dc=com</base-dn> <------ 6
<object-filter>(objectClass=*)</object-filter> <------ 7
<attributes> <------ 8
<include></include>
<exclude>extensionName</exclude>
<exclude>displayNamePrintable</exclude>
<exclude>flags</exclude>
<exclude>isPrivelegeHolder</exclude>
<exclude>msCom-UserLink</exclude>
<exclude>msCom-PartitionSetLink</exclude>
<exclude>reports</exclude>
<exclude>serviceprincipalname</exclude>
<exclude>accountExpires</exclude>
<exclude>adminCount</exclude>
<exclude>primarygroupid</exclude>
<exclude>userAccountControl</exclude>
<exclude>codePage</exclude>
<exclude>countryCode</exclude>
<exclude>logonhours</exclude>
<exclude>lockoutTime</exclude>
</attributes>
</query>
<schedule>
<aging>
<frequency>0</frequency>
<num-objects>0</num-objects>
</aging>
<schtasks-cmd></schtasks-cmd>
</schedule> <------ 9
</configuration>
<synchronizer-state>
<dirsync-cookie></dirsync-cookie>
<status></status>
<authoritative-adam-instance></authoritative-adam-instance>
<configuration-file-guid></configuration-file-guid>
<last-sync-attempt-time></last-sync-attempt-time>
<last-sync-success-time></last-sync-success-time>
<last-sync-error-time></last-sync-error-time>
<last-sync-error-string></last-sync-error-string>
<consecutive-sync-failures></consecutive-sync-failures>
<user-credentials></user-credentials>
<runs-since-last-object-update></runs-since-last-object-update>
<runs-since-last-full-sync></runs-since-last-full-sync>
</synchronizer-state>
</doc>

Let's go through the default XML file by number and talk about what each section does, why the defaults are what they are, and what I typically recommend when working with customers.

  1. <source-ad-name>fabrikam.com</source-ad-name> 

    Replace fabrikam.com with the FQDN of the domain/forest that will be your synchronization source

  2. <source-ad-partition>dc=fabrikam,dc=com</source-ad-partition> 

    Replace dc=fabrikam,dc=com with the DN of the AD partition that will be the source for the synchronization

  3. <source-ad-account></source-ad-account> 

    Contains the account that will be used to authenticate to the source forest/domain. If left empty, the credentials of the logged on user will be used

  4. <account-domain></account-domain> 

    Contains the domain name to use for authentication to the source domain/forest. This element combined with <source-ad-account> make up the domain\username that will be used to authenticate to the source domain/forest. If left empty, the domain of the logged on user will be used.

  5. <target-dn>dc=fabrikam,dc=com</target-dn>

    Replace dc=fabrikam,dc=com with the DN of the AD LDS partition you will be synchronizing to.

    NOTE: In 2003 ADAM, you were able to specify a sub-OU or container of the ADAM partition, for instance OU=accounts,dc=fabrikam,dc=com. This is not possible in 2008+ AD LDS. You must specify the head of the partition, dc=fabrikam,dc=com. This is publicly documented here.

  6. <base-dn>dc=fabrikam,dc=com</base-dn>

    Replace dc=fabrikam,dc=com with the base DN of the container in AD that you want to synchronize objects from.

    NOTE: You can specify multiple base DNs in the XML file, but it is important to note that due to the way the dirsync engine works, the entire directory will still be scanned during synchronization. This can lead to unexpectedly long synchronization times and output in the adamsync.log file that is confusing. The short of this is that even though you are limiting where to synchronize objects from, it doesn't reduce your synchronization time, and you will see entries in the adamsync.log file that indicate objects being processed but not written. This can make it appear as though ADAMSync is not working correctly if your directory is large but you are only syncing a small percentage of it. Also, the log will grow and grow, but it may take a long time for objects to begin to appear in AD LDS. This is because the entire directory is being enumerated, but only a portion is being synchronized.

  7. <object-filter>(objectClass=*)</object-filter>

    The object filter determines which objects will be synchronized from AD to AD LDS. While objectClass=* will get you everything, do you really want or need EVERYTHING? Consider the amount of data you will be syncing and the security implications of having everything duplicated in AD LDS. If you only care about user objects, then don't sync computers and groups.

    The filter that I generally recommend as a starting point is:

    (&#124;(objectCategory=Person)(objectCategory=OrganizationalUnit))

    Rather than objectClass=User, I recommend objectCategory=Person. But, why, you ask? I'll tell you :-) If you've ever looked at the class of a computer object, you'll notice that it contains an objectClass of user.



    What this means to ADAMSync is that if I specify an object filter of objectClass=user, ADAMSync will synchronize users and computers (and contact objects and anything else that inherits from the User class). However, if I use objectCategory=Person, I only get actual user objects. Pretty neat, eh?

    So, what does this &#124; mean and why include objectCategory=OrganizationalUnit? The literal &#124; is the XML representation of the | (pipe) character which represents a logical OR. True, I've seen customers just use the | character in the XML file and not have issues, but I always use the XML rather than the | just to be certain that it gets translated properly when loaded into the AD LDS instance. If you need to use an AND rather than an OR, the XML for & is &amp;.

    You need objectCategory=OrganizationalUnit so that objects that are moved within AD get synchronized properly to AD LDS. If you don't specify this, the OUs that contain objects within scope of the object filter will be created on the initial creation of the object in AD LDS. But, if that object is ever MOVED in the source AD, ADAMSync won't be able to synchronize that object to the new location. Moving an object changes the full DN of the object. Since we aren't syncing the OUs the object just "disappears" from an ADAMSync perspective and never gets updated/moved.

    If you need groups to be synchronized as well you can add (objectclass=group) inside the outer parentheses and groups will also be synced.

    (&#124;(objectCategory=Person)(objectCategory=OrganizationalUnit)(objectClass=Group))

  8. <attributes>

    The attributes section is where you define which attributes to synchronize for the object types defined in the <object-filter>.

    You can either use the <include></include> or <exclude></exclude> tags, but you cannot use both.

    The default XML file provided by Microsoft takes the high ground and uses the <exclude></exclude> tags which really means include all attributes except the ones that are explicitly defined within the <exclude></exclude> element. While this approach guarantees that you don't miss anything important, it can also lead to a lot of headaches in troubleshooting.

    If you've ever looked at an AD user account in ADSIEdit (especially in an environment with Exchange), you'll notice there are hundreds of attributes defined. Keeping to my earlier advice of "keep it simple", every attribute you sync adds to the complexity.

    When you use the <exclude></exclude> tags you don't know what you are syncing; you only know what you are not syncing. If your application isn't going to use the attribute, then there is no reason to copy that data to AD LDS. Additionally, there are some attributes and classes that just won't sync due to how the dirsync engine works. I'll include the list as I know it at the end of the article. Every environment is different in terms of which schema updates have been made and which attributes are being used. Also, as I mentioned earlier, if your AD LDS schema does not contain the object classes and attributes that you have defined in your ADAMSync XML file, your synchronization will die in a big blazing ball of flame.


    Whoosh!!

    A typical attributes section to start out with is something like this:

    <include>objectSID</include> <----- only needed for userproxy
    <include>userPrincipalName</include> <----- must be unique in AD LDS instance
    <include>displayName</include>
    <include>givenName</include>
    <include>sn</include>
    <include>physicalDeliveryOfficeName</include>
    <include>telephoneNumber</include>
    <include>mail</include>
    <include>title</include>
    <include>department</include>
    <include>manager</include>
    <include>mobile</include>
    <include>ipPhone</include>
    <exclude></exclude>

    Initially, you may even want to remove userPrincipalName, just to verify that you can get a sync to complete successfully. Synchronization issues caused by the userPrincipalName attribute are among the most common ADAMSync issues I see. Active Directory allows multiple accounts to have the same userPrincipalName, but ADAMSync will not sync an object if it has the same userPrincipalName of an object that already exists in the AD LDS database.

    If you want to be a superhero and find duplicate UPNs in your AD before you attempt ADAMSync, here's a nifty csvde command that will generate a comma-delimited file that you can run through Excel's "Highlight duplicates" formatting options (or a script if you are a SUPER-SUPERHERO) to find the duplicates.

    csvde -f upn.csv -s localhost:389 -p subtree -d "DC=fabrikam,DC=com" -r "(objectClass=user)" -l sAMAccountName,userPrincipalName

    Remember, you are targeting your AD with this command, so the localhost:389 implies that the command is being run on the DC. You'll need to replace "DC=fabrikam, DC=com" with your AD domain's DN.

  9. </schedule>

    After </schedule> is where you would insert the elements to do user proxy transformation. In the References section, I've included links that explain the purpose and configuration of userproxy. The short version is that you can use this section of code to create userproxy objects rather than AD LDS user class objects. Userproxy objects are a special class of user that links back to an Active Directory domain account to allow the AD LDS user to utilize the password of their corresponding user account in AD. It is NOT a way to log on to AD from an external network. It is a way to allow an application that utilizes AD LDS as its LDAP directory to authenticate a user via the same password they have in AD. Communication between AD and AD LDS is required for this to work, and the application that is requesting the authentication does not receive a Kerberos ticket for the user.

    Here is an example of what you would put after </schedule> and before </configuration>

    <user-proxy>
    <source-object-class>user</source-object-class>
    <target-object-class>userProxyFull</target-object-class>
    </user-proxy>

Installing the XML file

OK! That was fun, wasn't it? Now that we have an XML file, how do we use it? This is covered in a lot of different materials, but the short version is we have to install it into the AD LDS instance. To install the file, run the following command from the ADAM installation directory (%windir%\ADAM):

Adamsync /install localhost:389 CustomAdamsync.xml

The command above assumes you are running it on the AD LDS server, that the instance is running on port 389 and that the XML file is located in the path of the adamsync command.

What does this do exactly, you ask? The adamsync install command copies the XML file contents into the configurationFile attribute on the AD LDS application partition head. You can view the attribute by connecting to the application partition via LDP or through ADSIEdit. This is a handy thing to know. You can use this to verify for certain exactly what is configured in the instance. Often there are several versions of the XML file in the ADAM directory and it can be difficult to know which one is being used. Checking the configurationFile attribute will tell you exactly what is configured. It won't tell you which XML file was used, but at least you will know the configuration.
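If you want to check that attribute without clicking through LDP or ADSIEdit, a base-scoped ldifde export will dump it to a file for you. This sketch assumes the instance is local on port 389 and uses the partition name from our example:

ldifde -f configcheck.ldf -s localhost:389 -d "dc=fabrikam,dc=com" -p base -l configurationFile

Open configcheck.ldf in Notepad and you'll see exactly which XML is currently installed in the instance.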

The implication of this is that anytime you update the XML file you must reinstall it using the adamsync /install command otherwise the version in the instance is not updated. I've made this mistake a number of times during troubleshooting!

Synchronizing with AD

Finally, we are ready to synchronize! Running the synchronization is the "easy" part assuming we've created a valid XML file, our AD LDS schema has all the necessary classes and attributes, and the source AD data is without issue (duplicate UPN is an example of a known issue).

From the ADAM directory (typically %windir%\ADAM), run the following command:

Adamsync /sync localhost:389 "DC=fabrikam,DC=com" /log adamsync.log

Again, we're assuming you are running the command on the AD LDS server and that the instance is running on port 389. The DN referenced in the command is the DN of your AD LDS application partition. /log is very important (you can name the log anything you want). You will need this log if there are any issues during the synchronization. The log will tell you which object failed and give you a cryptic "detailed" reason as to why. Below is an example of an error due to a duplicate UPN. This is one of the easier ones to understand.

====================================================
Processing Entry: Page 67, Frame 1, Entry 64, Count 1, USN 0
Processing source entry <guid=fe36238b9dd27a45b96304ea820c82d8>
Processing in-scope entry fe36238b9dd27a45b96304ea820c82d8.

Adding target object CN=BillyJoeBob,OU=User Accounts,dc=fabrikam,dc=com. Adding attributes: sourceobjectguid, objectClass, sn, description, givenName, instanceType, displayName, department, sAMAccountName, userPrincipalName, Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:
0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)

. Ldap error occurred. ldap_add_sW: Attribute Or Value Exists. Extended Info: 0000217B: AtrErr: DSID-03050758, #1:
0: 0000217B: DSID-03050758, problem 1006 (ATT_OR_VALUE_EXISTS), data 0, Att 90290 (userPrincipalName)
===============================================

During the sync, if you are syncing from the Active Directory domain head rather than an OU or container, your objects should begin showing up in the AD LDS instance almost immediately. The objects don't synchronize in any order that makes sense to the human brain, so don't worry if objects are appearing in a random order. There is no progress bar or indication of how the sync is going other than the fact that the log file is growing. When the sync completes you will be returned to the command prompt and your log file will stop growing.

Did it work?

As you can see there is nothing on the command line nor are there any events in any Windows event log that indicate that the synchronization was successful. In this context, successful means completed without errors and all objects in scope, as defined in the XML file, were synchronized. The only way to determine if the synchronization was successful is to check the log file. This highlights the importance of generating the log. Additionally, it's a good idea to keep a reasonable number of past logs so if the sync starts failing at some point you can determine approximately when it started occurring. Management likes to know things like this.

Since you'll probably be automating the synchronization (easy to do with a scheduled task) and not running it manually, it's a good idea to set up a reminder to periodically check the logs for issues. If you've never looked at a log before, it can be a little intimidating if there are a lot of objects being synchronized. The important thing to know is that if the sync was successful, the bottom of the log will contain a section similar to the one below:

Updating the configuration file DirSync cookie with a new value.

Beginning processing of deferred dn references.
Finished processing of deferred dn references.

Finished (successful) synchronization run.
Number of entries processed via dirSync: 16
Number of entries processed via ldap: 0
Processing took 4 seconds (0, 0).
Number of object additions: 3
Number of object modifications: 13
Number of object deletions: 0
Number of object renames: 2
Number of references processed / dropped: 0, 0
Maximum number of attributes seen on a single object: 9
Maximum number of values retrieved via range syntax: 0

Beginning aging run.
Aging requested every 0 runs. We last aged 2 runs ago.
Saving Configuration File on DC=instance1,DC=local
Saved configuration file.

If your log just stops without a section similar to the one above, then the last entry will indicate an error similar to the one above for the duplicate UPN.
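If you do automate the sync with a scheduled task as suggested above, something along these lines will do it. This is only a sketch: the task name, schedule, and log name are made up, and only a single log name is used, so add your own log rotation if you want to keep history as recommended above:

schtasks /create /tn "ADAMSync nightly" /sc daily /st 02:00 /ru SYSTEM /tr "%windir%\ADAM\adamsync.exe /sync localhost:389 DC=fabrikam,DC=com /log %windir%\ADAM\adamsync.log"

Run the task under an account that has the rights ADAMSync needs to read from AD and write to the AD LDS instance; SYSTEM is shown here only to keep the example short.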

Conclusion and other References

That covers the basics of setting up ADAMSync! I hope this information makes the process more straightforward and gives you some tips for getting it to work the first time! The most important point I can make is to start very simple with the XML file and get something to work. You can always add more attributes to the file later, but if you start from broken it can be difficult to troubleshoot. Also, I highly recommend using <include> over <exclude> when specifying attributes to synchronize. This may be more work for your application team since they will have to know what their application requires, but it will make setting up the XML file and getting a successful synchronization much easier!

ADAMSync excluded objects

As I mentioned earlier, there are some attributes, classes and object types that ADAMSync will not synchronize. The items listed below are hard-coded not to sync. There is no way around this using ADAMSync. If you need any of these items to sync, then you will need to use LDIFDE exports, FIM, or some other method to synchronize them from AD to AD LDS. The scenarios where you would require any of these items are very limited and some of them are dealt with within ADAMSync by converting the attribute to a new attribute name (objectGUID to sourceObjectGUID).

Attributes

cn, currentValue, dBCSPwd, fSMORoleOwner, initialAuthIncoming, initialAuthOutgoing, isCriticalSystemObject, isDeleted, lastLogonTimeStamp, lmPwdHistory, msDS-ExecuteScriptPassword, ntPwdHistory, nTSecurityDescriptor, objectCategory, objectSid (except when being converted to proxy), parentGUID, priorValue, pwdLastSet, sAMAccountType, sIDHistory, supplementalCredentials, systemFlags, trustAuthIncoming, trustAuthOutgoing, unicodePwd, whenChanged

Classes

crossRef, secret, trustedDomain, foreignSecurityPrincipal, rIDSet, rIDManager

Other

Naming Context heads, deleted objects, empty attributes, attributes we do not have permissions to read, objectGUIDs (gets transferred to sourceObjectGUID), objects with del-mangled distinguished names (DEL:\)

Additional Goodies

ADAMSync

AD LDS Replication

Misc Blogs

GOOD LUCK and ENJOY!

Kim "Sync or swim" Nichols


Windows Server 2012 R2 - Preview available for download


Just in case you missed the announcement, the preview build of Windows Server 2012 R2 is now available for download.  If you want to see the latest and greatest, head on over there and take a gander at the new features.  All of us here in support have skin in this game, but Directory Services (us) has several new features that we'll be talking about over the coming months.  Including a lot of this stuff named in the announcement:

"Empowering employee productivity– Windows Server Work Folders, Web App Proxy, improvements to Active Directory Federation Services and other technologies will help companies give their employees consistent access to company resources on the device of their choice."

Obviously this is still a beta release.  Things can change before RTM.  Don't go doing anything silly like deploying this in production - it's officially unsupported at this stage, and for testing purposes only.  But with all that in mind, give it a whirl, and hit the TechNet forums to provide feedback and ask questions.  You will also want to keep an eye on some of our server and tools blogs in the near future.  For your convenience, a bunch of those are linked in the bar up top for you.

Happy previewing!

--David "Town Crier" Beach

Interesting findings on SETSPN -x -f


Hello folks, this is Herbert from the Directory Services support team in Europe!

Kerberos is becoming increasingly mandatory for really cool features such as Protocol Transition.  Moreover, as you might be painfully aware, managing Service Principal Names (SPNs) for the use of Kerberos by applications can be daunting at times.

In this blog, we will not be going into the gory details of SPNs and how applications use them. In fact, I'm assuming you already have some basic knowledge about SPNs and how they are used.

Instead, we're going to talk about an interesting behavior that can occur when an administrator is doing their due diligence managing SPNs.  This behavior can arise when you are checking the status of the account the SPN is planned for, or when you are checking to see if the SPN that must be registered is already registered in the domain or forest.

As we all know, the KDCs cannot issue tickets for a particular service if there are duplicate SPNs, and authentication does not work if the SPN is on the wrong account.

Experienced administrators learn to use the SETSPN utility to validate SPNs when authentication problems occur.  In the Windows Server 2008 version of SETSPN, we provide several options useful to identifying duplicate SPNs:

-      If you want to look for a duplicate of a particular SPN: SETSPN /q <SPN>

-      If you want to search for any duplicate in the domain: SETSPN /x

You can also use the “/f” option to extend the duplicate search to the whole Forest. Many Active Directory Admins use this as a proactive check of the forest for duplicate SPNs.

So far, so good…

The Problem

Sometimes, you’ll get an error running SETSPN -x -f:

c:\>SETSPN -X -F -P
Checking forest DC=contoso,DC=com
Operation will be performed forestwide, it might take a while.
Ldap Error(0x55 -- Timeout): ldap_get_next_page_s

“-P” just tells the tool not to clutter the output with progress indications, but you can see from that error message that we are not talking only about Kerberos anymore. There is a new problem.

 

What are we seeing in the diagnostic data?

In a network trace of the above you will see a query against the GC (port 3268) with an empty base DN and the filter (servicePrincipalName=*). SETSPN uses paged queries with a page size of 100 objects. In a large Active Directory environment this yields quite a number of pages.

If you look closely at network capture data, you’ll often find that Domain Controller response times slowly increase towards the end of the query. If the command completes, you’ll sometimes see that the delay is longest on the last page returned. For example, when we reviewed data for a recent customer case, we noted:

"Customer also noticed that it usually hangs on record 84."

 

Troubleshooting LDAP performance and building custom queries calls for the use of the STATS Control. Here is how you use it in LDP.exe:

Once connected to port 3268 and logged on as an admin, you can build the query in the same manner as SETSPN does.

1. Launch LDP as an administrator.

2. Open the Search Window using Browse\Search or Ctrl-S.

3. Enter the empty base DN and the filter, specify “Subtree” as the scope. The list of attributes does not matter here. 

4. Go to Options:

 

5. Specify an “Extended” query as we want to use controls. Note I have specified a page size of 100 elements, but that is not important, as we will see later. Let’s move on to “Controls”:

 


6. From the List of Controls select "Search Stats". When you select it, it automatically checks it in.

7. Now "OK" your way out of the "Controls" and "Options" dialogs.

8. Hit "Run" on the "Search" dialog.

 

You should get a large list of results, but also the STATS a bit like this one:

 

Call Time: 62198 (ms)

Entries Returned: 8508

Entries Visited: 43076

Used Filter: (servicePrincipalName=*)

Used Indices: idx_servicePrincipalName:13561:N

Pages Referenced          : 801521

Pages Read From Disk      : 259

Pages Pre-read From Disk  : 1578

Pages Dirtied             : 0

Pages Re-Dirtied          : 0

Log Records Generated     : 0

Log Record Bytes Generated: 0

 

What are these stats telling us?

We have a total of 8508 objects in the "Entries Returned" result set, but we have visited 43076 objects. That sounds odd, because we used an index, "idx_servicePrincipalName". This does not really look as if the query is using the index.

 

So what is happening here?

At this point, we experience the special behavior of multi-valued non-linked attributes and how they are represented in the index. To illustrate this, let me explain a few data points:

 

1. A typical workstation or member server has these SPNs:

servicePrincipalName: WSMAN/herbertm5
servicePrincipalName: WSMAN/herbertm5.europe.contoso.com
servicePrincipalName: TERMSRV/herbertm5.europe.contoso.com
servicePrincipalName: TERMSRV/HERBERTM5
servicePrincipalName: RestrictedKrbHost/HERBERTM5
servicePrincipalName: HOST/HERBERTM5
servicePrincipalName: RestrictedKrbHost/HERBERTM5.europe.contoso.com
servicePrincipalName: HOST/HERBERTM5.europe.contoso.com

 

2. When you look at the result set from running setspn, you notice that you’re not getting all of the SPNs you’d expect:

dn:CN=HQSCCM2K3TEST,OU=SCCM,OU=Test Infrastructure,OU=Domain Management,DC=contoso,DC=com
servicePrincipalName: WSMAN/sccm2k3test
servicePrincipalName: WSMAN/sccm2k3test.contoso.com

If you look at it closely, you notice all the SPNs start with characters very much at the end of the alphabet, which also happens to be the end of the index. These entries do not have a prefix like "HOST".

 

So how does this happen?

In the resultant set of LDAP queries, an object may only appear once, but it is possible for an object to be in the index multiple times, because of the way the index is built. Each time the object is found in the index, the LDAP Server has to check the other values of the indexed attribute of the object to see whether it also matches the filter and thus was already added to the result set.  The LDAP server is doing its diligence to avoid returning duplicates.

For example, the first hit in the index for the above workstation example is "HOST/HERBERTM5".

The second hit, "HOST/HERBERTM5.europe.contoso.com", kicks off the algorithm.

The object has already been read, so the IO and CPU hit has happened.

Now the query keeps walking the index, and once it arrives at the prefix "WSMAN", the rate of objects it needs to skip approaches 100%. It therefore looks at many objects while adding few additional objects to the result set.

On the last page of the query, things get even worse. There is an almost 100% rate of duplicates, so the clock of 60 seconds SETSPN allows for the query is ticking, and there are only 8 objects to be found. If the Domain Controller has a slow CPU or the objects need to be read from the disk because of memory pressure, the SETSPN query will probably not finish within a minute for a large forest. This results in the error "Ldap Error(0x55 -- Timeout): ldap_get_next_page_s". The larger the index (meaning, the more computers and users you have in your forest), the greater the likelihood that this can occur.

If you run the query with LDIFDE, LDP or ADFIND you will have a better chance the query will be successful. This is because by default these tools do not specify a time-out and thus use the values of the Domain Controller LDAP Policy. The Domain Controller LDAP policy is 120 seconds (by default) instead of 60 seconds.
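You can confirm the LDAP policy values on a given domain controller with ntdsutil; a quick sketch using a hypothetical DC name:

ntdsutil "ldap policies" connections "connect to server dc01.contoso.com" quit "show values" quit quit

Look for MaxQueryDuration in the output; 120 seconds is the default.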

The problem with the results generated by these tools is that you have to correlate the results from the different outputs yourself – the tools won’t do it for you.

 

So what can you do about it?

Typically you’ll have to do further troubleshooting, but here are some common causes/resolutions that I’ve seen:

  1. A domain controller that is short on memory encounters many cache misses and thus substantial IO. You can diagnose this using the NTDS performance counters in Performance Monitor.  You can add memory to reduce the IO rate and speed things up.
  2. If you are not experiencing memory pressure, the limiting factor could be the "Single-Thread-Performance" of the server. This is important as every LDAP query gets a worker thread and runs no faster than one logical CPU core can manage.  If you have a low number of logical cores in a system with a high amount of CPU activity, this can cause the threads to be delayed long enough for us to see an inconsistent query return.  In this situation your best bet is to look for ways to reduce overall processor load on the domain controller, for example by moving other services off of the machine.
  3. There is an update for Windows Server 2012 which helps to avoid the problem:

2799960          Time-out error when you run SETSPN.exe in Windows 8 or Windows Server 2012

http://support.microsoft.com/kb/2799960/EN-US

The last customer I helped had a combination of issues 1 and 2 and once he chose a beefier DC with more current hardware, the command always succeeded.  Another customer had a much bigger environment and ended up using the update I listed above to overcome the issue.

I hope you have enjoyed this journey explaining what is happening on such a SETSPN query.

Cheers,

Herbert "The Thread Master" Mauerer

Because TechNet didn't have enough Active Directory awesomeness already


Time for a quick lesson in blog history.  There'll be a quiz at the end!  Ok not really, but some history all the same.

Back a few years ago when we here at Microsoft were just starting to get savvy to this whole blog thing, one of our support escalation engineers, Tim Springston, decided to start up a blog about Active Directory.  You might have seen it in the past.  Over the years he's posted some really great insights and posts there that are definitely worth reading if you have the time.

Of course, the rest of us decided to do something completely different and started up AskDS a little later.  Rumor has it that it had something to do with a high-stakes poker game (Tim *is* from Texas, after all), but no one is really sure why we wound up with two support blogs to be honest - it's one of those things that just sort of happened.

Anyway, all this time while we've been partying it up over here on TechNet, our AD product team has been marooned over on MSDN with an audience of mostly developers.  Not that developers are bad folks - after all, they make the apps that power pretty much everything - but the truth is that a lot of what we do in Active Directory in terms of feature development is also targeted at Administrators and Architects and IT Pros.  You know, the people who read blogs on TechNet and may not think to also check MSDN.

After a lot of debate and discussion internally, the AD product team came to the conclusion that they really should have a presence on TechNet so that they could talk to everyone here about the cool features they're working on.

The problem?  Well, we sort of had a monopoly over here in support on AD-related blog names. :)

Meetings were convened.  Conferences were held.  Email flew back and forth.  There might even have been some shady dealings involving gifts of sugary pastries.  In the end though, Tim graciously agreed to move his blogging efforts over to AskDS and cede control of http://blogs.technet.com/ad to the Active Directory Product team.

The result?  Everyone wins.  Tim's now helping us write cool stuff for AskDS (you'll see plenty of that in the near future, I'm sure), and the product team has already started posting a bunch of things that you might have missed when they were on MSDN.

If you haven't seen what they're up to over there, go and take a look.  And as we get out of summer and get our people back from vacation, and, you know, roll a whole new server OS out the door, keep an eye on both blogs for updates, tips, explanations, and all manner of yummy AD-related goodness.

 

--David "Wait, we get another writer for AskDS??" Beach

Roaming Profile Compatibility - The Windows 7 to Windows 8 Challenge


[Editor's note:  Everything Mark mentions for Windows 8 clients here is also true for Windows 8.1 clients.  Windows 8 and Windows 8.1 clients use the same (v3) profile version, so the 8.1 upgrade will not prevent this from happening if you have roaming profiles in your environment.  Something to be aware of if you're planning to migrate users over to the new OS version. -David]

 

Hi. It’s Mark Renoden, Senior Premier Field Engineer in Sydney, Australia here again. Today I’ll offer a workaround for an issue that’s causing a number of customers around the world a degree of trouble. It turns out to be reasonably easy to fix, perhaps just not so obvious.

The Problem

The knowledge base article "Unpredictable behavior if you migrate a roaming user profile from Windows 8 to Windows 7" - http://support.microsoft.com/kb/2748329 states:

Windows 7 and Windows 8 use similar user profile formats, which do not support interoperability when they roam between computers that are running different versions of Windows. When a user who has a Windows 7 profile signs in to a Windows 8-based computer for the first time, the user profile is updated to the new Windows 8 format. After this occurs, the user profile is no longer compatible with Windows 7-based computers. See the "More information" section for detailed information about how this issue affects roaming and mandatory profiles.

This sort of problem existed between Windows XP and Windows Vista/7 but was mitigated by Windows Vista/7 using a profile that used a .v2 extension.  The OS would handle storing the separate profiles automatically for you when roaming between those OS versions.  With Windows 7 and Windows 8, both operating systems use roaming profiles with a .v2 extension, even though Windows 8 is actually writing the profile in a newer format.

Mark’s Workaround

The solution is to use separate roaming profiles for each operating system by utilizing an environment variable in the profile path.

Configuration

File server for profiles:

  1. Create profile share “\\Server\ProfilesShare” with permissions configured so that users have write access (a rough sketch follows this list)
  2. In ProfilesShare, create folders “Win7” and “Win8”
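A very rough sketch of that file server setup is below. The drive letter and the permissions granted are assumptions for illustration only; real roaming profile shares normally need tighter NTFS and share permissions than this, so review the roaming profile security guidance before using anything like it in production:

md E:\ProfilesShare\Win7
md E:\ProfilesShare\Win8
net share ProfilesShare=E:\ProfilesShare /grant:"Authenticated Users",CHANGE
icacls E:\ProfilesShare /grant "Authenticated Users":(OI)(CI)(M)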


 

Active Directory:

  1. Create OU for Windows 7 Clients (say “Win7OU”) and create/link a GPO here (say “Win7GPO”)
  2. Create OU for Windows 8 Clients (say “Win8OU”) and create/link a GPO here (say “Win8GPO”)

Note: As an alternative to separate OUs, a WMI filter may be used to filter according to operating system:

Windows 7 - SELECT version FROM Win32_OperatingSystem WHERE Version LIKE "6.1%" and ProductType = "1"

Windows 8 - SELECT version FROM Win32_OperatingSystem WHERE Version LIKE "6.2%" and ProductType = "1"

3. Edit Win7GPO

    1. Expand Computer Configuration -> Preferences -> Windows Settings
    2. Under Environment create an environment variable with
      1. Action: Create
      2. System Variable
      3. Name: OSVer
      4. Value: Win7

4. Edit Win8GPO

    1. Expand Computer Configuration -> Preferences -> Windows Settings
    2. Under Environment create an environment variable with
      1. Action: Create
      2. System Variable
      3. Name: OSVer
      4. Value: Win8

5. Set user profile paths to \\Server\ProfilesShare\%OSVer%\%username%\ (the resulting folder layout is sketched below)
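Because both operating systems append .V2 to the configured profile path, a user named kim (a hypothetical account) ends up with two independent roaming profiles, roughly like this:

\\Server\ProfilesShare\Win7\kim.V2
\\Server\ProfilesShare\Win8\kim.V2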

 

Clients:

  1. Log on with administrative accounts first to confirm creation of the OSVer environment variable (a quick check is sketched after this list)

2. Log in as users and you’ll observe that different user profiles are created in the appropriate folder in the profiles share depending on client OS
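A quick way to do the check in step 1 once policy has had a chance to apply (note that a newly created system variable won't show up in command prompts that were already open, so start a fresh one):

gpupdate /force
set OSVer

If the variable was created, you'll see OSVer=Win7 or OSVer=Win8 in the output.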

Conclusion

I haven't run into any issues in testing but this might be one of those cases where it's important to use "wait for network". My testing suggests that using "create" as the action on the environment variable mitigates any timing issues.  This is because after the environment variable is created for the machine, this variable persists across boots and doesn't depend on GPP re-application.

You may also wish to consider the use (and testing) of a folder redirection policy to provide users with their data as they cross between Windows 7 and Windows 8 clients. While I have tested this to work with “My Documents”, there may be varying degrees of success here depending on how Windows 8’s modern apps fiddle with things.

 - Mark “Square Peg in a Round Hole” Renoden

 

 

DFS Replication in Windows Server 2012 R2 and other goodies, now available on the Filecab blog!


Over at the Filecab blog, AskDS alum and all-around nice guy Ned Pyle has posted the first of several blogs about new features coming your way in Windows Server 2012 R2.  If you're a DFS administrator or just curious, go take a look!

Ned promises more posts in the near future, and Filecab is near and dear to our hearts here in DS (they make a bunch of things we support), so if you don't already have it on your RSS feed list, it might be a good time to add it.
