How Core Group Policy Works

Updated: March 28, 2003


In this section

• Core Group Policy Architecture

• Core Group Policy Physical Structure

• Core Group Policy Processes and Interactions

• Network Ports Used by Group Policy

• Related Information
Core Group Policy, or the Group Policy engine, is the infrastructure that processes Group Policy
components, including server-side snap-in extensions and client-side extensions. You use
administrative tools such as Group Policy Object Editor and Group Policy Management Console to
configure and manage policy settings.
At a minimum, Group Policy requires Windows 2000 Server with Active Directory installed and
Windows 2000 clients. Fully implementing Group Policy to take advantage of all available
functionality and the latest policy settings depends on a number of factors including:
• Windows Server 2003 with Active Directory installed and with DNS properly configured.

• Windows XP client computers.

• Group Policy Management Console (GPMC) for administration.


Top of page
Core Group Policy Architecture
The Group Policy engine is a framework that handles client-side extension (CSE) processing and
interacts with other elements of Group Policy, as shown in the following figure:
Core Group Policy Architecture

The following table describes the components that interact with the Group Policy engine.

Core Group Policy Components

Server (domain controller). In an Active Directory forest, the domain controller is a server that contains a writable copy of the Active Directory database, participates in Active Directory replication, and controls access to network resources.

Active Directory. Active Directory, the Windows-based directory service, stores information about objects on a network and makes this information available to users and network administrators. Administrators link Group Policy objects (GPOs) to Active Directory containers such as sites, domains, and organizational units (OUs) that include user and computer objects. In this way, policy settings can be targeted to users and computers throughout the organization.

Sysvol. The Sysvol is a set of folders containing important domain information that is stored in the file system rather than in the directory. The Sysvol folder is, by default, stored in a subfolder of the systemroot folder (%systemroot%\sysvol\sysvol) and is automatically created when a server is promoted to a domain controller. The Sysvol contains the largest part of a GPO: the Group Policy template, which includes Administrative Template-based policy settings, security settings, script files, and information regarding applications that are available for software installation. It is replicated through the File Replication Service (FRS) between all domain controllers in a domain.

Group Policy object (GPO). A GPO is a collection of Group Policy settings, stored at the domain level as a virtual object consisting of a Group Policy container and a Group Policy template. The Group Policy container, which contains information about the properties of a GPO, is stored in Active Directory on each domain controller in the domain. The Group Policy template contains the data in a GPO and is stored in the Sysvol in the /Policies subdirectory. GPOs affect users and computers that are contained in sites, domains, and OUs.

Local Group Policy object. The Local Group Policy object (Local GPO) is stored on each individual computer, in the hidden Windows\System32\GroupPolicy directory. Each computer running Windows 2000, Windows XP Professional, Windows XP 64-Bit Edition, Windows XP Media Center Edition, or Windows Server 2003 has exactly one Local GPO, regardless of whether the computer is part of an Active Directory environment. Local GPOs are always processed, but are the least influential GPOs in an Active Directory environment, because Active Directory-based GPOs have precedence.

Winlogon. A component of the Windows operating system that provides interactive logon support, Winlogon is the service in which the Group Policy engine runs.

Group Policy engine. The Group Policy engine is the framework that handles common functionality across registry-based settings and client-side extensions (CSEs).

Client-side extensions. CSEs run within dynamic-link libraries (DLLs) and are responsible for implementing Group Policy at the client computer. The CSEs are loaded on an as-needed basis when a client computer is processing policy.

File system. The NTFS file system on client computers.

Registry. A database repository for information about a computer's configuration, the registry contains information that Windows continually references during operation, such as:

• Profiles for each user.

• The programs installed on the computer and the types of documents that each can create.

• Property settings for folders and program icons.

• The hardware on the system.

• Which ports are being used.

The registry is organized hierarchically as a tree, and it is made up of keys and their subkeys, hives, and entries. Registry settings can be controlled through Group Policy, specifically, Administrative Templates (.adm files). Windows Server 2003 comes with a predefined set of Administrative Template files, which are implemented as text files (with an .adm extension), that define the registry settings that can be configured in a GPO. These .adm files are stored in two locations by default: inside GPOs in the Sysvol folder and in the Windows\inf directory on the local computer.

Event log. The Event log is a service, located in Event Viewer, which records events in the system, security, and application logs.

Help and Support Center. The Help and Support Center is a component on each computer that provides HTML reports on the policy settings currently in effect on the computer.

Resultant Set of Policy (RSoP) infrastructure. All Group Policy processing information is collected and stored in a Common Information Model Object Management (CIMOM) database on the local computer. This information, such as the list, content, and logging of processing details for each GPO, can then be accessed by tools using Windows Management Instrumentation (WMI).

WMI. WMI is a management infrastructure that supports monitoring and controlling of system resources through a common set of interfaces and provides a logically organized, consistent model of Windows operation, configuration, and status. WMI makes data about a target computer available for administrative use. Such data can include hardware and software inventory, settings, and configuration information. For example, WMI exposes hardware configuration data such as CPU, memory, disk space, and manufacturer, as well as software configuration data from the registry, drivers, file system, Active Directory, the Windows Installer service, networking configuration, and application data. WMI filtering in Windows Server 2003 allows you to create queries based on this data. These queries (WMI filters) determine which users and computers receive all of the policy settings configured in the GPO where you create the filter.
Top of page
Group Policy Engine Architecture
The primary purpose of Group Policy is to apply policy settings to computers and users in an Active
Directory domain. GPOs can be targeted through Active Directory containers, such as sites,
domains, and OUs, containing user or computer objects. The Group Policy engine is in userenv.dll,
which runs inside the Winlogon service. This is shown in the following figure.
Group Policy Engine Architecture and CSE Components

Group Policy Engine Architecture and CSE Components

Group Policy engine. The framework that handles common functionality across CSEs; the Group Policy engine runs inside userenv.dll.

Winlogon.exe. A component of the Windows operating system that provides interactive logon support, Winlogon is the service in which the Group Policy engine runs. Winlogon is the only system component that actively interacts with the Group Policy engine.

Userenv.dll. Userenv.dll runs inside Winlogon and contains the Group Policy engine and the Administrative Templates extension.

gptext.dll. Used to configure Scripts, IP Security, QoS Packet Scheduler, and Wireless settings.

fdeploy.dll. Used to configure Folder Redirection settings.

scecli.dll. Used to configure security settings.

iedkcs32.dll. Used to manage various Internet Explorer settings.

appmgmts.dll. Used to configure software installation settings.

dskquota.dll. Used for setting disk quotas.

Top of page
RSoP Architecture
Resultant Set of Policy (RSoP) uses WMI to determine how policy settings are applied to users and
computers. RSoP has two modes: logging mode and planning mode. Logging mode determines the
resultant effect of policy settings that have been applied to an existing user and computer based on
a site, domain, and OU. Logging mode is available on Windows XP and later operating systems.
Planning mode simulates the resultant effect of policy settings that are applied to a user and
computer. Planning mode requires a Windows Server 2003 computer as a domain controller. For
RSoP functionality, using GPMC is recommended because it includes RSoP features integrated with the
rest of the console. In GPMC, RSoP logging mode is referred to as Group Policy Results; planning mode is
referred to as Group Policy Modeling.
The following figure shows the high-level architecture of RSoP for Group Policy Results and Group
Policy Modeling:
RSoP Architecture

Windows Server 2003 collects Group Policy processing information and stores it in a WMI database
on the local computer. (The WMI database is also known as the CIMOM database.) This information,
such as the list, content, and logging of processing details for each GPO, can then be accessed by
tools using WMI.
In Group Policy Results, RSoP queries the WMI database on the target computer, receives
information about the policies and displays it in GPMC. In Group Policy Modeling, RSoP simulates the
application of policy using the Resultant Set of Policy Provider on a domain controller. Resultant Set
of Policy Provider simulates the application of GPOs and passes them to virtual CSEs on the domain
controller. The results of this simulation are stored to a local WMI database on the domain controller
before the information is passed back and displayed in GPMC (or the RSoP snap-in). This is
explained in greater detail in the following section.
Top of page
WMI and CIMOM
WMI provides a common scriptable interface to retrieve, and in some cases set, a wide variety of
system and application information. WMI is implemented through the winmgmt.exe service. The
WMI information hierarchy is modeled as a hierarchy of objects following the Common Information
Model (CIM) standards.
This information hierarchy is extensible, which allows different applications and services to expose
configuration information by supplying a WMI provider. WMI providers are the interface between the
WMI service and the application's data in its native format.
WMI data can be dynamic (generated on demand when required by a management application) or
static. Static data is stored in the CIMOM database. This data can be accessed at any time (security
controls permitting) by management applications. RSoP uses WMI and the CIMOM to write, store
and query RSoP settings information.
Top of page
Resultant Set of Policy Provider
RSoP in planning mode has the special requirement that no settings information is actually applied
to the client system during the RSoP data generation. In fact, in many planning scenarios, there
might not be a computer or user object to apply the settings to. To meet this requirement, the RSoP
provider runs on domain controllers and performs some of the functions of a client system for GPO
application.
The RSoP provider is actually a WMI provider that performs the role of Winlogon in invoking CSEs to
log RSoP information to the CIM repository. It takes parameters supplied by the RSoP wizard to
select GPOs from the directory. It uses the following parameters:
• Scope of management (SOM). This is the combination of user object OU and computer object OU and site (although the latter is optional). Either the User or Computer SOM can be omitted but one must be specified. This can be specified as either an existing computer, user, or both, or as existing OUs for the computer, user, or both.

• Security group memberships for the computer and user objects. By default, these are the existing security group memberships if actual user or computer objects are chosen as the SOM. Security filtering can be ignored entirely or a new or a modified set of groups can be chosen.

• WMI filter. New or modified WMI filters can be applied to the GPOs during the RSoP generation.
The RSoP provider is a service on a domain controller and runs in system context. There are two
ramifications to this design. The service manually evaluates the security descriptor of each GPO in
the SOM against the user object and computer object security identifiers (SIDs) and their security
group membership. If Active Directory is locked down, some security group membership analyses
might fail until the user is provided the correct access.

In addition, because the RSoP provider has Domain Admin-equivalent access rights, some control
needs to be placed on who can generate RSoP information in the directory. This control is achieved
by an extended access right — GenerateRSoPData. To execute an RSoP session for a particular
container, the user must have the Generate RSoP (planning) access right for that container.
Top of page
Planning Mode (Group Policy Modeling)
In planning mode, the RSoP provider performs the RSoP data generation in the following steps:
1. The RSoP tool gets the user, computer, and domain controller name from the wizard.
2. RSoP connects to the WMI database on the specified domain controller.
3. The WMI service in turn calls the WMI provider (an out-of-process service provider) on the same computer to create the RSoP data.
4. The RSoP provider gets the list of GPOs for the user and computer from Active Directory.
5. The RSoP provider populates the WMI database with instances of the GPOs for user and computer.
6. The list of registered CSEs is retrieved. Each of the policy extensions is dynamically loaded in succession by the RSoP provider, and the list of computer and user GPOs is passed to each extension. Each policy extension takes the list of GPOs and, instead of applying the policy, populates the WMI database with instances of policy objects that describe the effective policy.
7. After the WMI database population is complete, the RSoP provider returns the namespace under which the RSoP data was created to the WMI service, which returns the namespace to the RSoP tool.
8. The RSoP tool connects to the namespace on the WMI database on the domain controller. RSoP navigates or iterates through the populated data in the WMI database using WMI enumeration APIs to retrieve the policy data.
9. The RSoP data is displayed to the user. When the user is done looking at the RSoP data, the RSoP tool calls RsopDeleteSession on the WMI service to delete the data in WMI's database that was previously created.
Top of page
Logging Mode (Group Policy Results)
In logging mode, the RSoP data generation is controlled by Winlogon and is part of the normal GPO
processing operation.
• The Winlogon process retrieves the list of GPOs from the Active Directory using the security context of the user or computer.

• The Winlogon process populates the WMI database with instances of the GPOs.

• The list of registered policy extensions is retrieved. Each of the policy extensions is dynamically loaded in succession by the Winlogon process and the list of GPOs is passed to each policy extension. Each extension takes the list of GPOs and, in addition to applying the policy, it populates the WMI database with the policies set and which GPOs applied them.
Top of page
Core Group Policy Physical Structure
Understanding where GPOs are stored and how they are structured can help you troubleshoot
problems you might encounter when you implement Group Policy. Although GPOs can be linked to
sites, domains, and OUs, they are stored only in the domain. As explained earlier, a GPO is a virtual
object that stores its data in two locations: a Group Policy container and a Group Policy template.
Top of page
Group Policy Container
A Group Policy container is a location in Active Directory that stores GPOs and their properties. The
properties of a GPO include both computer and user Group Policy information. The Policies container
is the default location of GPOs. The path to the Policies container, in Lightweight Directory Access
Protocol (LDAP) syntax, is CN=Policies,CN=System,DC=Domain_Name,DC=Domain_Name,
where the Domain_Name values specify a fully qualified domain name (FQDN).

The Active Directory store contains the Group Policy container of each GPO in the domain. The
Group Policy container contains attributes that are used to deploy GPOs to the domain, to OUs, and
to sites within the domain. The Group Policy container also contains a link to the file system
component of a GPO — the Group Policy template. Some of the information in a Group Policy
container includes:
• Version information. Ensures that the information is synchronized with the Group Policy template information.

• Status information. Indicates whether the user or computer portion of the GPO is enabled or disabled.

• List of components. Lists the components (extensions) that have settings in the GPO. These attributes are gPCMachineExtensionNames and gPCUserExtensionNames.

• File system path. Specifies the Universal Naming Convention (UNC) path to the Sysvol folder. This attribute is gPCFileSysPath.

• Functionality version. Gives the version of the tool that created the GPO. Currently, this is version 2. This attribute is gPCFunctionalityVersion.

• WMI filter. Contains the distinguished name of the WMI filter. This attribute is gPCWQLFilter.
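To make these attributes concrete, here is a minimal Python sketch (illustrative only, not an actual API) that models a Group Policy container record using the attribute names listed above; the class, field defaults, and sample values are assumptions for the example.

# Minimal sketch (hypothetical): a Group Policy container record using the
# Active Directory attribute names described above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GroupPolicyContainer:
    cn: str                              # GUID-style common name of the GPO
    gPCFileSysPath: str                  # UNC path to the Group Policy template in Sysvol
    gPCFunctionalityVersion: int = 2     # version of the tool that created the GPO
    gPCMachineExtensionNames: List[str] = field(default_factory=list)  # CSE GUIDs with computer settings
    gPCUserExtensionNames: List[str] = field(default_factory=list)     # CSE GUIDs with user settings
    gPCWQLFilter: Optional[str] = None   # distinguished name of the linked WMI filter, if any
    versionNumber: int = 0               # combined user/computer version counter
    flags: int = 0                       # status: user and/or computer portion disabled

# Example (values are made up): a GPO whose template lives under the domain Sysvol share.
gpo = GroupPolicyContainer(
    cn="{923B9E2F-9757-4DCF-B88A-1136720B5AF2}",
    gPCFileSysPath=r"\\contoso.com\Sysvol\contoso.com\Policies\{923B9E2F-9757-4DCF-B88A-1136720B5AF2}",
)
print(gpo.cn, gpo.gPCFileSysPath)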
Top of page
System Container
Each Windows Server 2003 domain contains a System container. The System container stores per-
domain configuration settings, including GPO property settings, Group Policy container settings, IP
Security settings, and WMI policy settings. IP Security and WMI policy are deployed to client
computers through the GPO infrastructure.
The following subcontainers of the System container hold GPO-related settings:
• Policies. This object contains groupPolicyContainer objects listed by their unique name. Each groupPolicyContainer object holds subcontainers for selected computer and user policy settings.

• Domain, OUs and Sites. These objects contain two GPO property settings, gPLink and gPOptions.

• Default Domain Policy. This object contains the AppCategories container, which is part of the Group Policy Software installation extension.

• IP Security. This object contains IP Security policy settings that are linked to a GPO. The linked IP Security policy is applied to the recipients (user or computer) of the GPO.

• WMIPolicy. This object contains WMI filters that can be applied to GPOs. WMI filters contain one or more Windows Query Language (WQL) statements.
Top of page
System\Policies Container
The System container is a top level container found in each domain naming context. It is normally
hidden from view in the Active Directory Users and Computers snap-in but can be made visible by
selecting “Advanced Features” from the snap-in View menu inside MMC. (Objects appear hidden in
the Active Directory Users and Computers snap-in when they have the property
showInAdvancedViewOnly = TRUE.) Group Policy information is stored in the Policies subcontainer
of this container. Each GPO is identified by a GroupPolicyContainer object stored within the Policies
container.
The Group Policy container is located in the Domain_Name/System/Policies container. Each
Group Policy container is given a common name (CN) and this name is also assigned as the
container name. For example, the name attribute of a Group Policy container might be
{923B9E2F-9757-4DCF-B88A-1136720B5AF2}, which is also assigned to the Group Policy
container's CN attribute.

The default GPOs are assigned the same Group Policy container CN on all domains. All other GPOs
are assigned a unique CN. The default GPOs and their Group Policy container common names are:

• Default Domain Policy: {31B2F340-016D-11D2-945F-00C04FB984F9}.

• Default Domain Controllers Policy: {6AC1786C-016F-11D2-945F-00C04fB984F9}.


Knowing the common names of the default GPOs will help you distinguish them from non-default
GPOs.
Top of page
How a Group Policy Container is Named
Group Policy containers are named automatically when they are created. The CN of each Group
Policy container is a GUID (Globally Unique Identifier). This is distinct from and unrelated to the
Object GUID given to each Active Directory object. The CN is the name of the Group Policy
container used to ensure uniqueness of Group Policy container names within the Policies container.
There is no requirement for these GUIDs to be unique between domains (the Default Domain Policy
and the Default Domain Controllers Policy GPOs each have identical GUIDs in all Active Directory
installations). However, an Object GUID is always unique across all installations of the Active
Directory store.
The following table shows permissions on Group Policy container:
Default Group Policy Container Permissions

Trustee Access

Authenticated Users Read, Apply Group Policy

Domain Admins Read, Write

System Read, Write


Top of page
GPO Attributes in the Policies CN
GPOs are created by instantiating the groupPolicyContainer class in the Active Directory schema and
storing the resulting GPO in the System/Policies container of the Active Directory store. After
creating a GPO, you can review its CN from the Active Directory Users and Computers snap-in by
enabling the Advanced view and then expanding the Policies CN. You can review all GPO attributes
and their values from the Active Directory Service Interfaces Editor snap-in, ADSI Edit.
Object attributes are either mandatory or optional, as defined in the Active Directory schema. The
CN attribute is mandatory for the Container class, the Group Policy container's parent class. Three
attributes — instanceType, objectCategory, and objectClass — are mandatory for the Top class, the
Container class's parent class. Thus the Group Policy container class inherits all four mandatory attributes. The
following table describes the mandatory attributes:
Mandatory Attributes of the groupPolicyContainer Class

CN. The common name of the GPO. This is in the form of a GUID to avoid GPO naming conflicts within the Policies container.

instanceType. An attribute that dictates how an object is instantiated from its class on a particular server. In this case, it describes how the groupPolicyContainer class is created into a GPO in the Active Directory. A GPO is assigned the instanceType value of 4.

objectCategory. An object class name, including the object's path, used to group objects of the instantiated class. For example, the objectCategory of a GPO in the contoso.com domain is: CN=Group-Policy-Container,CN=Schema,CN=Configuration,DC=contoso,DC=com.

objectClass. The list of classes from which this class is derived. For a GPO, the objectClass is Container, groupPolicyContainer, and top.
There are also a number of optional attributes inherited from the top class, and others that are
assigned directly to the Group Policy container. Many optional attributes are required in order for
the Group Policy container to function properly. For example, the GPCFileSysPath optional attribute
must be present or the Group Policy container will not be linked to its corresponding Group Policy
template.
Top of page
GroupPolicyContainer Subcontainers
Within the GroupPolicyContainer there are a series of subcontainers. The first level of subcontainers
— User and Machine — belong to the class Container. These two containers are used to separate
some User-specific and Computer-specific Group Policy components.
Top of page
Group Policy Container-Related Attributes of Domain, Site, and OU
Containers
Windows Server 2003 uses the domainDNS, site, and organizationalUnit classes to create domain,
site, and OU container objects, respectively. These objects contain two optional Group Policy
container-related attributes, gPLink and gPOptions. The gPLink property contains the prioritized
list of GPOs and the gPOptions property contains the Block Policy Inheritance setting.
The gPLink attribute holds a list of all Group Policy containers linked to the container, along with a number for each listed Group Policy container that represents the Enforced (previously known as No Override) and Disabled option settings. The list appears in priority order from lowest to highest priority GPO.
The gPOptions attribute holds an integer value that indicates whether the Block Policy Inheritance option of a domain or OU is enabled (1) or disabled (0).
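As an illustration of how these two attributes encode link state, the following Python sketch parses a gPLink-style value. It assumes the commonly documented format of bracketed LDAP paths, each followed by an options integer in which bit 0x1 marks a disabled link and bit 0x2 marks an enforced link; verify these details against your own directory before relying on them.

import re

# Hedged sketch: parse a gPLink-style attribute value of the assumed form
# "[LDAP://<GPO DN>;<options>]...", where bit 0x1 means the link is disabled
# and bit 0x2 means the link is enforced.
GPLINK_ENTRY = re.compile(r"\[LDAP://([^;]+);(\d+)\]", re.IGNORECASE)

def parse_gplink(gplink: str):
    """Return a list of (gpo_dn, disabled, enforced) tuples, lowest priority first."""
    links = []
    for dn, options in GPLINK_ENTRY.findall(gplink):
        opts = int(options)
        links.append((dn, bool(opts & 0x1), bool(opts & 0x2)))
    return links

# Example value (made up for illustration):
sample = ("[LDAP://cn={31B2F340-016D-11D2-945F-00C04FB984F9},"
          "cn=policies,cn=system,DC=contoso,DC=com;0]")
for dn, disabled, enforced in parse_gplink(sample):
    print(dn, "disabled:", disabled, "enforced:", enforced)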
Top of page
Managing Group Policy Links for a Site, Domain, or OU
To manage GPO links to a site, domain, or OU, you must have read and write access to the gPLink
and gPOptions properties. By default, Domain Admins have this permission for domains and
organizational units, and only Enterprise Admins and Domain Admins of the forest root domain can
manage links to sites. Active Directory supports security settings on a per-property basis. This
means that a non-administrator can be delegated read and write access to specific properties. In
this case, if non-administrators have read and write access to the gPLink and gPOptions properties,
they can manage the list of GPOs linked to that site, domain, or OU.
Top of page
How WMIPolicy Objects are Stored and Associated with Group
Policy Container Objects

A single WMI filter can be assigned to a Group Policy container. The Group Policy container stores
the distinguished name of the filter in the gPCWQLFilter attribute. The Group Policy container locates
the assigned filter in the System/WMIPolicy/SOM container. Each Windows Server 2003 domain
stores its WMI filters in this Active Directory container. Each WMI filter stored in the SOM container
lists the rules that define the WMI filter. Each rule is listed separately. For example, consider a WMI
filter containing the following three WQL queries:
SELECT * FROM Win32_Product WHERE IdentifyingNumber = "{5E076CF2-EFED-43A2-A623-13E0D62EC7E0}"

SELECT * FROM Win32_Product WHERE IdentifyingNumber = "{242365CD-80F2-11D2-989A-00C04F7978A9}"

SELECT * FROM Win32_Product WHERE IdentifyingNumber = "{00000409-78E1-11D2-B60F-006097C998E7}"

Three WMI rules are defined in the details of the filter. Each rule contains a number of attributes,
including the query language (WQL) and the WMI namespace queried by the rule.
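For illustration, a query like the ones above can be evaluated from a script. The following sketch assumes the third-party pywin32 package and a Windows computer; it is not part of the Group Policy engine, only a way to see what a WQL statement returns.

# Hedged sketch: evaluate a WQL query like the ones above from Python.
# Assumes the pywin32 package (win32com) is installed and the script runs on Windows.
import win32com.client

def product_installed(identifying_number: str) -> bool:
    """Return True if Win32_Product reports a product with this IdentifyingNumber."""
    wmi = win32com.client.GetObject(r"winmgmts:\\.\root\cimv2")
    query = ('SELECT * FROM Win32_Product WHERE IdentifyingNumber = "%s"'
             % identifying_number)
    results = wmi.ExecQuery(query)
    return len(list(results)) > 0

if __name__ == "__main__":
    print(product_installed("{5E076CF2-EFED-43A2-A623-13E0D62EC7E0}"))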
Top of page
Group Policy Template
The majority of Group Policy settings are stored in the file system of the domain controllers. This
part of each GPO is known as the Group Policy template. The GroupPolicyContainer object for each
GPO has a property, GPCFileSysPath, which contains the UNC path to its related Group Policy
template.
All Group Policy templates in a domain are stored in the
\\domain_name\Sysvol\domain_name\Policies folder, where domain_name is the FQDN of
the domain. The Group Policy template for the most part stores the actual data for the policy
extensions: for example, the Security Settings .inf file, Administrative Template-based policy settings
(.adm and .pol files), applications available for the Group Policy Software installation extension, and
potentially scripts.
Top of page
The Gpt.ini File
The Gpt.ini file is located at the root of each Group Policy template. Each Gpt.ini file contains GPO
version information. Except for the Gpt.ini files created for the default GPOs, a display name value
is also written to the file.
Each Gpt.ini file contains the GPO version number of the Group Policy template.
[General]

Version=65539

Normally, this is identical to the version-number property of the corresponding GroupPolicyContainer object. It is encoded in the same way — as the decimal representation of a 4-byte number, the upper two bytes of which contain the GPO user settings version and the lower two bytes of which contain the computer settings version. In this example, the version is equal to 0x00010003, giving a user settings version of 1 and a computer settings version of 3.
Storing this version number in the Gpt.ini file allows the CSEs to check whether the currently applied
policy settings (cached policies) are up to date with the last processing of policy settings. If the
cached version is different from the version in the Group Policy template or Group Policy container,
then policy settings will be reprocessed.
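The version arithmetic described above can be illustrated with a short Python sketch that splits the Gpt.ini Version value into its user and computer parts.

# Minimal sketch: decode the Version value stored in Gpt.ini as described above.
# The value packs the user settings version into the upper two bytes and the
# computer settings version into the lower two bytes.
def decode_gpt_version(version: int):
    user_version = (version >> 16) & 0xFFFF
    computer_version = version & 0xFFFF
    return user_version, computer_version

# Example from the text: Version=65539 (0x00010003).
user_ver, computer_ver = decode_gpt_version(65539)
print(user_ver, computer_ver)  # -> 1 3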
Top of page

Group Policy Template Subfolders
The Group Policy template folder contains the following subfolders:
Machine. Includes a Registry.pol file that contains the registry settings to be applied to computers. When a computer initializes, this Registry.pol file is downloaded and applied to the HKEY_LOCAL_MACHINE portion of the registry. The Machine folder can contain the following subfolders (depending on the contents of the GPO):

• Scripts\Startup. Contains the scripts that are to run when the computer starts up.

• Scripts\Shutdown. Contains the scripts that are to run when the computer shuts down.

• Applications. Contains the advertisement files (.aas files) used by the Windows Installer. These are applied to computers.

• Microsoft\Windows NT\Secedit. Contains the Gpttmpl.inf file, which includes the default security configuration settings for a Windows Server 2003 domain controller.

• Adm. Contains all of the .adm files for the GPO.

User. Includes a Registry.pol file that contains the registry settings to be applied to users. When a user logs on to a computer, this Registry.pol file is downloaded and applied to the HKEY_CURRENT_USER portion of the registry. The User folder can contain the following subfolders (depending on the contents of the GPO):

• Applications. Contains the advertisement files (.aas files) used by the Windows Installer. These are applied to users.

• Documents and Settings. Contains the Fdeploy.ini file, which includes status information about the Folder Redirection options for the current user's special folders.

• Microsoft\RemoteInstall. Contains the OSCfilter.ini file, which holds user options for operating system installation through Remote Installation Services.

• Microsoft\IEAK. Contains settings for the Internet Explorer Maintenance snap-in.

• Scripts\Logon. Contains all the user logon scripts and related files for this GPO.

• Scripts\Logoff. Contains all the user logoff scripts and related files for this GPO.

The User and Machine folders are created at install time, and the other folders are created as needed when policy is set.
The permissions of each Group Policy template reflect the read and write permissions applied to the
GroupPolicyContainer through the Group Policy Object Editor. These permissions are automatically
maintained and are shown in the following table.
Default Group Policy Template Permissions

Trustee Access

Authenticated Users Read and Execute

Administrators Full Control

Group Policy Creator Owners Read and Execute

Creator Owner Full Control (Subfolders and Files only)

System Full Control


Top of page
Group Policy Object Editor use of Sysvol
Each policy setting changed in a GPO causes at least two files to be rewritten — the GPT.ini and the
file holding the changed setting. Making many changes to a GPO can cause a lot of network traffic
as Sysvol replicates these changes. This congestion should only occur on a local area network where
Sysvol replication occurs frequently. Across wide area network links, the inter-site replication
schedule will cause these changes to be amalgamated into a smaller amount of traffic (for example,
four changes to the Registry.pol file will result in only a single file replication).
Top of page
The Local Group Policy Object
The Local GPO has no Active Directory component. Information stored in the Group Policy container
of an Active Directory GPO is instead stored in the Group Policy template of a Local GPO. The Group
Policy template of a Local GPO is located in the Windows\system32\GroupPolicy folder. The
Gpt.ini file in this GroupPolicy folder must hold more management information than its counterpart
in a domain-based GPO because there is no Active Directory component to hold this information.
The following table shows the attributes stored in the Local GPO's Gpt.ini file.
Local GPO GPT.INI Attributes

gPCUserExtensionNames. Includes a list of GUIDs that tells the client-side engine which CSEs have User data in the GPO. The format is: [{GUID of CSE}{GUID of MMC extension}{GUID of second MMC extension if appropriate}][repeat first section as appropriate].

gPCMachineExtensionNames. Includes a list of GUIDs that tells the client-side engine which CSEs have computer data in the GPO.

Options. Refers to GPO options such as User portion disabled or Computer portion disabled.

The following extensions are disabled in a Local GPO:

• Group Policy Software installation extension

• Folder Redirection

The following extensions have reduced functionality in a Local GPO:

• Public Key policies. EFS only; there are not any options for trust lists or auto enrollment.

• Security Settings. There are not any options for Restricted Groups or File System, Registry, or Service Access Control Lists (ACLs).
Top of page
Core Group Policy Processes and Interactions
Application of GPOs to targeted users and computers relies on many interactive processes. This
section explains how GPOs are applied and filtered to Active Directory containers such as sites,
domains, and OUs. It includes information about how the Group Policy engine processes GPOs in
conjunction with CSEs. In addition, it explains how Group Policy is replicated among domain
controllers.
Top of page
Group Policy Processing Rules
GPOs that apply to a user or computer do not all have the same precedence. Settings that are
applied later can override settings that are applied earlier. Group Policy settings are processed in
the following order:
• Local Group Policy object. Each computer has exactly one Group Policy object that is stored locally. It is processed for both computer and user Group Policy processing.

• Site. Any GPOs that have been linked to the site that the computer belongs to are processed next. Processing is in the order that is specified by the administrator, on the Linked Group Policy Objects tab for the site in GPMC. The GPO with the lowest link order is processed last, and therefore has the highest precedence.

• Domain. Processing of multiple domain-linked GPOs is in the order specified by the administrator, on the Linked Group Policy Objects tab for the domain in GPMC. The GPO with the lowest link order is processed last, and therefore has the highest precedence.

• Organizational units. GPOs that are linked to the organizational unit that is highest in the Active Directory hierarchy are processed first, then GPOs that are linked to its child organizational unit are processed, and so on. Finally, the GPOs that are linked to the organizational unit that contains the user or computer are processed.
To summarize, the Local GPO is processed first, and the organizational unit to which the computer
or user belongs (the one that it is a direct member of) is processed last. All of this processing is
subject to the following conditions:

• WMI or security filtering that has been applied to GPOs.

• Any domain-based GPO (not Local GPO) can be enforced by using the Enforce option so that its policies cannot be overwritten. Because an Enforced GPO is processed last, no other settings can write over the settings in that GPO. If you have more than one Enforced GPO, it's possible to set the same setting in each GPO to a different value, in which case the link order of the GPOs determines which one contains the final settings.

• At any domain or organizational unit, Group Policy inheritance can be selectively designated as Block Inheritance. However, because Enforced GPOs are always applied and cannot be blocked, blocking inheritance does not prevent policy in Enforced GPOs from applying.
Every computer has a single Local GPO that is always processed regardless of whether the computer
is part of a domain or is a stand-alone computer. The Local GPO can't be blocked by domain-based
GPOs. However, settings in domain GPOs always take precedence since they are processed after the
Local GPO.
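The precedence rules above amount to building one ordered list and letting later GPOs overwrite earlier ones. The following Python sketch illustrates that ordering for a single hypothetical setting; it deliberately ignores Enforced links and Block Inheritance, and the GPO names and setting are invented for the example.

# Hedged sketch: LSDOU processing order and "last writer wins" precedence.
# This is not how the Group Policy engine is implemented, only the ordering rule.
def effective_setting(local_gpo, site_gpos, domain_gpos, ou_gpos_by_level, setting):
    """ou_gpos_by_level: list of GPO lists, from topmost OU down to the user's or computer's OU."""
    processing_order = [local_gpo] + site_gpos + domain_gpos
    for level in ou_gpos_by_level:
        processing_order.extend(level)

    value = None
    for gpo in processing_order:          # later GPOs override earlier ones
        if setting in gpo["settings"]:
            value = gpo["settings"][setting]
    return value

local = {"name": "Local GPO", "settings": {"RemoveRunMenu": False}}
site = [{"name": "Site GPO", "settings": {}}]
domain = [{"name": "Default Domain Policy", "settings": {"RemoveRunMenu": True}}]
ous = [[{"name": "Marketing OU GPO", "settings": {"RemoveRunMenu": False}}]]

print(effective_setting(local, site, domain, ous, "RemoveRunMenu"))  # -> False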
Top of page
Targeting GPOs
The site, domain, and OU links from a GPO are used as the primary targeting principle for defining
which computers and users should receive a GPO. Security filtering and WMI filtering can be used to
further reduce the set of computers and users to which the GPO will apply. The Group Policy engine
uses the following logic in processing GPOs: If a GPO is linked to a domain, site, or OU that applies
to the user or computer, the Group Policy engine must then determine whether the GPO should be
added to its GPO list for processing. A GPO is blocked from processing in the following
circumstances:
• The GPO is disabled. You can disable either or both the computer and user components of a GPO from its Policy Properties dialog box.

• The computer or user does not have permission to read and apply the GPO. You control permission to a GPO through security filtering, as explained in the following section.

• A WMI filter applied to a GPO evaluates to false on the client computer. A WMI filter must evaluate to true before the Group Policy engine will allow the GPO to be processed, as explained in the following section.
Top of page
Security Filtering
Security filtering is a way of refining which users and computers will receive and apply the settings
in a GPO. By using security filtering to specify that only certain security principals within a container
where the GPO is linked apply the GPO, you can narrow the scope of a GPO so that it applies only to
a single group, user, or computer. Security filtering determines whether the GPO as a whole applies
to groups, users, or computers; it cannot be used selectively on different settings within a GPO.
In order for the GPO to apply to a given user or computer, that user or computer must have both
Read and Apply Group Policy (AGP) permissions on the GPO, either explicitly or effectively
through group membership.
By default, all GPOs have Read and AGP both Allowed for the Authenticated Users group. The
Authenticated Users group includes both users and computers. This is how all authenticated users
receive the settings of a new GPO when it is applied to an organizational unit, domain or site.
Therefore, the default behavior is for every GPO to apply to every Authenticated User. By default,
Domain Admins, Enterprise Admins, and the local system have full control permissions, without the
Apply Group Policy access-control entry (ACE). However, administrators are members of
Authenticated Users, which means that they will receive the settings in the GPO by default.
These permissions can be changed to limit the scope to a specific set of users, groups, or computers
within the organizational unit, domain, or site. The Group Policy Management Console manages
these permissions as a single unit, and displays the security filtering for the GPO on the GPO Scope
tab. In GPMC, groups, users, and computers can be added or removed as security filters for each
GPO.
Top of page
How Security Filtering is Processed
Before processing a GPO, the Group Policy engine checks the access control list (ACL) associated
with the GPO. If an access control entry (ACE) on the GPO denies either the Apply Group Policy or
Read permission to a security principal to which the computer or user belongs, the Group Policy
engine does not add the GPO to its list of GPOs to process. Additionally, an ACE on a GPO must allow the appropriate security
principal both Apply Group Policy and Read permissions in order for the Group Policy engine to add
the GPO to the GPO processing list.
If appropriate permissions are granted to the GPO, it is added to the list of GPOs to download.
In general, Deny ACEs should be avoided because you can achieve the same results by granting or
not granting Allow permissions.
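The check described above can be illustrated with a small Python sketch. The ACE dictionaries and identity strings are simplified stand-ins for the real Windows security descriptor structures, not an actual API.

# Hedged sketch: the security-filtering decision described above, reduced to a
# pure-Python check over simplified ACE records.
READ = "Read"
APPLY_GROUP_POLICY = "Apply Group Policy"

def gpo_applies(aces, token_identities):
    """aces: list of dicts like {"trustee": name, "type": "allow"/"deny", "rights": set(...)}.
    token_identities: the user or computer and the groups it belongs to."""
    allowed, denied = set(), set()
    for ace in aces:
        if ace["trustee"] in token_identities:
            (denied if ace["type"] == "deny" else allowed).update(ace["rights"])
    required = {READ, APPLY_GROUP_POLICY}
    # A deny on either right blocks the GPO; otherwise both rights must be allowed.
    return not (denied & required) and required <= allowed

aces = [{"trustee": "Authenticated Users", "type": "allow",
         "rights": {READ, APPLY_GROUP_POLICY}}]
print(gpo_applies(aces, {"Authenticated Users", "CONTOSO\\jeff"}))  # -> True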
Top of page
WMI Filtering
WMI makes data about a target computer available for administrative use. Such data can include
hardware and software inventory, settings, and configuration information. For example, WMI
exposes hardware configuration data such as CPU, memory, disk space, and manufacturer, as well
as software configuration data from the registry, drivers, file system, Active Directory, the Windows
Installer service, networking configuration, and application data.
WMI filtering allows you to filter the application of a GPO by attaching a Windows Query Language
query to a GPO. The queries can be written to query WMI for multiple items. If the query returns
true for all queried items, then the GPO will be applied to the target user or computer.
When a GPO that is linked to a WMI filter is applied on the target computer, the filter is evaluated
on the target computer. If the WMI filter evaluates to false, the GPO is not applied (except if the
client computer is running Windows 2000, in which case the filter is ignored and the GPO is always
applied). If the WMI filter evaluates to true, the GPO is applied.
The WMI filter is a separate object from the GPO in the directory. A WMI filter must be linked to a
GPO in order to apply. Each GPO can have only one WMI filter; however the same WMI filter can be
linked to multiple GPOs. WMI filters, like GPOs, are stored only in domains. A WMI filter and the
GPO it is linked to must be in the same domain.
Top of page
How WMI Filtering is Processed
If, after security filtering, appropriate permissions are granted to the GPO, it is added to the list of
GPOs to download. Upon download, the Group Policy engine reads the gPCWQLFilter attribute in
the Group Policy container to determine if a WMI filter is applied to the GPO. If so, the WMI filter,
which contains one or more WQL statements, is evaluated. If the statement evaluates to true, then
the GPO is processed. There are tradeoffs in using WMI filters because they can increase the
amount of time it takes to process policy, especially if the filter to be evaluated takes a long time to
process.
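The following Python sketch illustrates that decision flow. The evaluate_filter callable stands in for actually running the WQL statements, the Windows 2000 rule follows the text above, and the data structures are invented for the example.

# Hedged sketch: the WMI-filtering decision described above.
def gpo_passes_wmi_filter(gpo, os_supports_wmi_filters, evaluate_filter):
    if gpo.get("wmi_filter") is None:
        return True                       # no filter linked: GPO is processed
    if not os_supports_wmi_filters:
        return True                       # Windows 2000 clients ignore the filter
    # Every WQL statement in the filter must evaluate to true.
    return all(evaluate_filter(query) for query in gpo["wmi_filter"])

gpo = {"name": "Marketing GPO",
       "wmi_filter": ['SELECT * FROM Win32_OperatingSystem WHERE Caption LIKE "%XP%"']}
print(gpo_passes_wmi_filter(gpo, True, lambda q: True))    # filter true -> applied
print(gpo_passes_wmi_filter(gpo, False, lambda q: False))  # Windows 2000 -> applied anyway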
Top of page
WMI Filtering Scenarios
Sample uses of WMI filters include:

• Services. Computers where DHCP is turned on.

• Registry. Computers that have this registry key populated.

• Hardware inventory. Computers with a Pentium III processor.

• Software inventory. Computers with Visual Studio .NET installed.

• Hardware configuration. Computers with network interface cards (NICs) on interrupt level 3.

• Software configuration. Computers with multi-casting turned on.


• Associations. Computers that have any services dependent on Systems Network Architecture (SNA) service.
Client support for WMI filters exists only on Windows XP, Windows Server 2003, and later operating
systems. Windows 2000 clients will ignore any WMI filter and the GPO is always applied, regardless
of the WMI filter. WMI filters are only available in domains that have at least one Windows Server
2003 domain controller.
Top of page
Application of Group Policy
Application of Group Policy involves a series of processes, beginning with user and computer logon.
Top of page
Initial Processing of Group Policy
Group Policy for computers is applied at computer startup. For users, Group Policy is applied when
they log on. In Windows 2000, the processing of Group Policy is synchronous, which means that
computer Group Policy is completed before the logon dialog box is presented, and user Group Policy
is completed before the shell is active and available for the user to interact with it. As explained in
the following section, Windows XP with Fast Logon enabled (which is the default setting) allows
users to log on while Group Policy is processed in the background.
Top of page
Synchronous and Asynchronous Processing
Synchronous processes can be described as a series of processes where one process must finish
running before the next one begins. Asynchronous processes, on the other hand, can run on
different threads simultaneously because their outcome is independent of other processes.
You can change the default processing behavior by using a policy setting for each GPO so that
processing is asynchronous instead of synchronous. For example, if the policy has been set to
remove the Run command from the Start menu, it is possible under asynchronous processing that
a user could log on before this policy setting takes effect, so the user would initially have access to this
functionality.
Top of page
Fast Logon in Windows XP Professional
By default in Windows XP Professional, the Fast Logon Optimization feature is enabled for both
domain and workgroup members. This means that policy settings apply asynchronously when the
computer starts and when the user logs on. This process of applying policies is similar to a
background refresh process. As a result, users can log on and begin using the Windows shell faster
than they would with synchronous processing. Fast Logon Optimization is always off during logon
under the following conditions:

• When a user first logs on to a computer.

• When a user has a roaming user profile or a home directory for logon purposes.

• When a user has synchronous logon scripts.
Note that under the preceding conditions, computer startup can still be asynchronous. However,
because logon is synchronous under these conditions, logon does not exhibit optimization. The
following table compares policy processing of Windows 2000 and Windows XP client computers.
Default Policy Processing for Client Computers

Client Application at startup/log on Application at refresh

Windows 2000 Synchronous Asynchronous

Windows XP Professional Asynchronous Asynchronous


Windows XP clients support Fast Logon Optimization in any domain environment. Fast Logon
Optimization can be disabled with the following policy setting:
Computer Configuration\Administrative Templates\System\Logon\Always wait for the network at computer startup and logon.
Note that Fast Logon Optimization is not a feature of Windows Server 2003.
Top of page
Folder Redirection and Software Installation Policies
Note that when Fast Logon Optimization is on, a user might need to log on to a computer twice
before folder redirection policies and software installation policies are applied. This occurs because
the application of these types of policies requires synchronous policy application. During a policy
refresh (which is asynchronous), the system sets a flag indicating that the application of folder
redirection or a software installation policy is required. The flag forces synchronous application of
the policy at the user's next logon.
Top of page
Time Limit for Processing of Group Policy
Under synchronous processing, there is a time limit of 60 minutes for all of Group Policy to finish
processing on the client computer. Any CSEs that are not finished after 60 minutes are signaled to
stop, in which case the associated policy settings might not be fully applied.
Top of page
Background Refresh of Group Policy
In addition to the initial processing of Group Policy at startup and logon, Group Policy is applied
subsequently in the background on a periodic basis. During a background refresh, a CSE will only
reapply the settings if it detects that a change was made on the server in any of its GPOs or its list
of GPOs.
In addition, software installation and folder redirection processing occurs only during computer
startup or user logon. This is because background processing could cause undesirable results. For
example, in software installation, if an application is no longer assigned, it is removed. If a user is
using the application while Group Policy tries to uninstall it or if an assigned application upgrade
takes place while someone is using it, errors would occur. Although the Scripts CSE is processed
during background refresh, the scripts themselves only run at startup, shutdown, logon, and logoff,
as appropriate.
Top of page
Periodic Refresh Processing
By default, Group Policy is processed every 90 minutes with a randomized delay of up to 30 minutes
— for a total maximum refresh interval of up to 120 minutes.
Group Policy can be configured on a per-extension basis so that a particular extension is always
processed during processing of policy even if the GPOs haven't changed. Policy settings for each
extension are located in Computer Configuration\Administrative Templates\System\Group
Policy.
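For illustration, the default refresh interval can be expressed as a short Python sketch; the 90-minute base and 30-minute random offset come from the paragraph above.

# Minimal sketch: compute the next background refresh time using the default
# interval described above (90 minutes plus a random offset of up to 30 minutes).
import random
from datetime import datetime, timedelta

def next_refresh(now: datetime,
                 base_minutes: int = 90,
                 max_random_offset_minutes: int = 30) -> datetime:
    offset = random.uniform(0, max_random_offset_minutes)
    return now + timedelta(minutes=base_minutes + offset)

print(next_refresh(datetime.now()))  # somewhere between 90 and 120 minutes from now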
Top of page
On-Demand Processing
You can also trigger a background refresh of Group Policy on demand from the client. However, the
application of Group Policy cannot be pushed to clients on demand from the server.
Top of page
Messages and Events
When Group Policy is applied, a WM_SETTINGCHANGE message is sent, and an event is signaled.
Applications that can receive window messages can use them to respond to a Group Policy change.
Those applications that do not have a window to receive the message (as with most services) can
wait for the event.
Top of page
Refreshing Policy from the Command Line
You can update or refresh Group Policy settings manually through a command line tool. On Windows
2000, you can use Secedit with the /refreshpolicy option; on Windows XP and Windows Server
2003, you can use Gpupdate.
Top of page
Group Policy and Slow Links
When Group Policy detects a slow link, it sets a flag to indicate to CSEs that a policy setting is being
applied across a slow link. Individual CSEs can determine whether or not to apply a policy setting
over the slow link. The default settings are as follows:
Default Slow Link Settings

Extension Default Setting

Security Settings On (and cannot be turned off)

Administrative Templates On (and cannot be turned off)

Software Installation Off

Scripts Off


Folder Redirection Off


Top of page
Group Policy Loopback Support
Group Policy is applied to the user or computer, based on where the user or computer object is
located in Active Directory. However, in some cases, users might need policy applied to them, based
on the location of the computer object, not the location of the user object. The Group Policy
loopback feature gives you the ability to apply User Group Policy, based on the computer that the
user is logging onto. The following figure shows a sample site, domain, and OU structure and is
followed by a description of the changes that can occur with loopback processing.
Sample Active Directory Structure

Normal user Group Policy processing specifies that computers located in the Servers organizational
unit have the GPOs A3, A1, A2, A4, and A6 applied (in that order) during computer startup. Users of
the Marketing organizational unit have GPOs A3, A1, A2, and A5 applied (in that order), regardless
of which computer they log on to.
In some cases this processing order might not be what you want. An example is when you do not
want applications that have been assigned or published to the users of the Marketing organizational
unit to be installed while they are logged on to the computers in the Servers organizational unit.
With the Group Policy loopback feature, you can specify two other ways to retrieve the list of GPOs
for any user of the computers in the Servers organizational unit:
• Merge mode. In this mode, the computer's GPOs have higher precedence than the user's GPOs. In this example, the list of GPOs for the computer is A3, A1, A2, A4, and A6, which is added to the user's list of A3, A1, A2, and A5, resulting in A3, A1, A2, A5, A3, A1, A2, A4, and A6 (listed in lowest to highest priority).

• Replace mode. In this mode, the user's list of GPOs is not gathered. Only the list of GPOs based upon the computer object is used. In this example, the list is A3, A1, A2, A4, and A6.
The loopback feature can be enabled by using the User Group Policy loopback processing mode
policy setting under Computer Configuration\Administrative Templates\System\Group Policy.
The processing of the loopback feature is implemented in the Group Policy engine. When the Group
Policy engine is about to apply user policy, it looks in the registry for a computer policy, which
specifies which mode user policy should be applied in.
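The merge and replace behavior can be illustrated with a short Python sketch that reuses the example GPO lists above; it models only the list construction, not the rest of loopback processing.

# Hedged sketch: the loopback merge and replace behavior described above,
# expressed over plain Python lists (lowest to highest priority).
def loopback_gpo_list(user_gpos, computer_gpos, mode):
    if mode == "replace":
        return list(computer_gpos)                    # only the computer's list is used
    if mode == "merge":
        return list(user_gpos) + list(computer_gpos)  # computer's GPOs win conflicts
    return list(user_gpos)                            # loopback disabled

user_gpos = ["A3", "A1", "A2", "A5"]
computer_gpos = ["A3", "A1", "A2", "A4", "A6"]
print(loopback_gpo_list(user_gpos, computer_gpos, "merge"))
# -> ['A3', 'A1', 'A2', 'A5', 'A3', 'A1', 'A2', 'A4', 'A6']
print(loopback_gpo_list(user_gpos, computer_gpos, "replace"))
# -> ['A3', 'A1', 'A2', 'A4', 'A6']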

Top of page
How the Group Policy Engine Processes Client-Side Extensions
Client-side extensions are the components running on the client system that process and apply the
Group Policy settings to that system. There are a number of extensions that are pre-installed in
Windows Server 2003. Other Microsoft applications and third party application vendors can also
write and install additional extensions to implement Group Policy management of these applications.
The default Windows Server 2003 CSEs are listed in the following table:
Default Windows Server 2003 CSEs

• Software Installation. Active Directory component: PackageRegistration objects. Sysvol component: .aas files.

• Security Settings. Sysvol component: Gptmpl.inf.

• Folder Redirection. Sysvol component: fdeploy.ini.

• Scripts. Sysvol component: Scripts.ini.

• IP Security. Active Directory component: IPSec Policy objects.

• Internet Explorer Maintenance. Sysvol component: .ins and branding .inf files.

• Administrative Templates. Sysvol component: Registry.pol and .adm files.

• Disk Quota. Sysvol component: Registry.pol.

• EFS Recovery. Sysvol component: Registry.pol.

• Remote Installation. Sysvol component: Oscfilter.ini.

• Wireless Network Policies. Sysvol component: Registry.pol and .adm files.

• QoS Packet Scheduler. Sysvol component: Registry.pol and .adm files.

Top of page
Client-Side Extension Operation
CSEs are called by the Winlogon process at computer startup, user logon and at the Group Policy
refresh interval. CSEs are registered with Winlogon in the registry. This registration information
includes a DLL and a DLL entry point (function call) by which the CSE processing can be initiated.
The Winlogon process uses these to trigger Group Policy processing.
Each extension can opt not to perform processing at any of these points (for example, avoid
processing during background refresh).
Top of page
Client-Side Extensions Registered with WinLogon
Each of the CSEs is registered under the following key:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions
Each extension is identified by a key named after the GUID of the extension. These extensions are
shown in the following table.
CSE Extensions

Extension GUID Extension Name

25537BA6-77A8-11D2-9B6C-0000F8080861 Folder Redirection


35378EAC-683F-11D2-A89A-00C04FBBCFA2 Administrative Templates Extension

3610EDA5-77EF-11D2-8DC5-00C04FA31A66 Disk Quotas

426031c0-0b47-4852-b0ca-ac3d37bfcb39 QoS Packet Scheduler

42B5FAAE-6536-11D2-AE5A-0000F87571E3 Scripts

827D319E-6EAC-11D2-A4EA-00C04F79F83A Security

A2E30F80-D7DE-11d2-BBDE-00C04F86AE3B Internet Explorer Maintenance

B1BE8D72-6EAC-11D2-A4EA-00C04F79F83A EFS Recovery

C6DC5466-785A-11D2-84D0-00C04FB169F7 Software Installation

E437BC1C-AA7D-11D2-A382-00C04F991E27 IP Security
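For illustration, the registered CSEs can be enumerated from this key with the Python standard-library winreg module. The sketch below assumes it runs on Windows and that each extension subkey carries a DllName value, which is how CSE registration is conventionally documented; treat both as assumptions to verify.

# Hedged sketch: enumerate the CSEs registered with Winlogon by reading the
# GPExtensions key named above. Windows only; the DllName value read for each
# extension GUID is an assumption based on conventional CSE registration.
import winreg

GPEXT_KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions"

def list_client_side_extensions():
    extensions = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, GPEXT_KEY) as key:
        index = 0
        while True:
            try:
                guid = winreg.EnumKey(key, index)
            except OSError:
                break                     # no more subkeys
            with winreg.OpenKey(key, guid) as subkey:
                try:
                    dll_name, _ = winreg.QueryValueEx(subkey, "DllName")
                except OSError:
                    dll_name = None
                extensions.append((guid, dll_name))
            index += 1
    return extensions

if __name__ == "__main__":
    for guid, dll in list_client_side_extensions():
        print(guid, dll)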
Top of page
How Group Policy Processing History Is Maintained on the Client
Computer
Each time GPOs are processed, a record of all of the GPOs applied to the user or computer is written
to the registry. GPOs applied to the local computer are stored in the following registry path:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Group
Policy\History
GPOs applied to the currently logged on user are stored in the following registry path:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Group
Policy\History
Top of page
Preferences and Policy Configuration
Manipulating these registry values directly is not recommended. Most of the items in which you
might need to change the behavior of an extension (such as forcing a CSE to run over a slow link),
are available as Group Policy settings. These can be found in the Group Policy Object Editor in the
following location:
Computer Configuration\Administrative Templates\System\Group Policy
The behavior can be changed for the following CSEs:
• Administrative Templates (Registry-based policy)

• Internet Explorer Maintenance

• Software Installation

• Folder Redirection

• Scripts

• Security

• IP Security

• EFS recovery

• Disk Quotas
Top of page
Order of Extension Processing

Administrative Templates policy settings are always processed first. Other extensions are processed
in an indeterminate order.
Top of page
Policy Application Processes
There are two primary milestones that the Group Policy engine uses for GPO processing:

• Creating the list of GPOs targeted at the user or computer.

• Invoking the relevant CSEs to process the policy settings relevant to them within the GPO list.
The following figure shows the steps required to reach the first milestone in GPO processing, GPO
list creation.
GPO List Creation

Creating the GPO list involves the following steps:


1. Query the Active Directory for the gPLink and gPOptions properties in the Site and Domain
hierarchies to which the user or computer object belongs.
2. Query the Active Directory for the GroupPolicyContainer objects referenced in the gPLink
properties.
3. Evaluate security filtering to determine whether the user or computer has the Apply Group Policy
access permission on the GPO.
4. Evaluate the WMI query against the WMI repository on the client computer to determine if the
computer meets the query requirements.
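As a rough illustration of step 1 in the list above, the following Python sketch queries the gPLink and gPOptions attributes of a single container over LDAP. It assumes the third-party ldap3 package; the server name, bind account, password, and container distinguished name are placeholders, not values from this document.

from ldap3 import Server, Connection, ALL, BASE

server = Server("dc1.example.com", get_info=ALL)           # placeholder domain controller
conn = Connection(server, user="EXAMPLE\\gpadmin",
                  password="placeholder", auto_bind=True)  # placeholder credentials

# Read gPLink and gPOptions from one container in the hierarchy that scopes the user or computer.
conn.search(search_base="ou=Marketing,dc=example,dc=com",  # placeholder container
            search_filter="(objectClass=*)",
            search_scope=BASE,
            attributes=["gPLink", "gPOptions"])

for entry in conn.entries:
    print(entry.entry_dn)
    print(entry.gPLink)     # ordered GPO references in the form [LDAP://cn={GUID},...;options]
    print(entry.gPOptions)  # inheritance options for the container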
Once the GPO list is created, the Group Policy engine and the CSEs work together to process Group
Policy template components. The following figure shows the steps required to determine which CSEs
to call.

Determining CSEs to Call

Determining which CSEs to call involves the following steps:


1. Retrieve the list of CSEs registered with Winlogon.
2. Check to see whether it is appropriate to run a particular CSE (for example, whether
background processing or slow link processing is enabled for the extension).
3. Check the CSE history against the list of Applied GPOs. GPOs with new version numbers and GPOs
that have settings relevant to the CSE (that is, they have the CSE extension GUID in the Group
Policy container gpcUserExtension or gpcMachineExtension properties) are added to the
Changed GPO List. GPOs no longer in the Applied GPO List are added to the Deleted GPO List.
4. Check to see whether the appropriate CSE should be processing policy settings for the user or
the computer.
5. Check the version number listed in the GPO against its recorded version history in the registry
to determine whether the GPO needs reprocessing.
Even if all of the version numbers are unchanged, the MaxNoGPOListChanges interval might have
expired; if so, the CSE processes policy settings despite the unchanged version numbers.
Steps 3 through 5 are repeated by each CSE for all GPOs in the GPO list. After one CSE is done, the
next CSE that needs to run repeats the entire process.
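The following plain-Python sketch is conceptual only; it is not the Group Policy engine's actual code or data structure, but it illustrates how, in step 3 above, a CSE's Changed and Deleted GPO lists can be derived by comparing the current GPO list against the recorded history.

def plan_cse_work(current_gpos, history):
    """current_gpos: {gpo_guid: version} now in scope and relevant to this CSE.
    history:        {gpo_guid: version} recorded for this CSE on the previous run."""
    changed = {gpo: version for gpo, version in current_gpos.items()
               if history.get(gpo) != version}                     # new GPO or new version number
    deleted = [gpo for gpo in history if gpo not in current_gpos]  # GPO no longer applies
    return changed, deleted

changed, deleted = plan_cse_work(
    current_gpos={"{GPO-A}": 7, "{GPO-B}": 3},
    history={"{GPO-A}": 6, "{GPO-C}": 1})
print(changed)   # {'{GPO-A}': 7, '{GPO-B}': 3}
print(deleted)   # ['{GPO-C}']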

Group Policy updates are dynamic and occur at specific intervals. If no changes to Group Policy are
discovered, GPOs are generally not reprocessed. Security policy settings are the exception: a value
sets a maximum limit on how long a client can function without reapplying unchanged GPOs that
contain security settings. By default, security policy is reapplied every 16 hours, plus a randomized
delay of up to 30 minutes, even when the GPOs that contain the security policy settings have not
changed.
Top of page
Group Policy Replication
In a domain that contains more than one domain controller, Group Policy information takes time to
propagate, or replicate, from one domain controller to another. Low bandwidth network connections
between domain controllers slow replication. The Group Policy infrastructure has mechanisms to
manage these issues.
Each GPO is stored partly in the Sysvol on the domain controller and partly in Active Directory.
GPMC and Group Policy Object Editor present and manage the GPO as a single unit. For example,
when you set permissions on a GPO in GPMC, GPMC is actually setting permissions on objects in
both Active Directory and the Sysvol. It is not recommended that you manipulate these separate
objects independently outside of GPMC and the Group Policy Object Editor. As shown in the
following figure, it is important to understand that these two separate components of a GPO rely on
different replication mechanisms. The file system portion is replicated through FRS, independently
of the replication handled by Active Directory. Only the Sysvol subfolder (%systemroot%\SYSVOL\sysvol)
is shared and replicated. Sysvol was designed to allow the Sysvol folders of multiple domains to be
replicated in the same tree; each domain's Sysvol is contained under a subfolder of the Sysvol share.
For the current domain, a copy of the domain's Sysvol subtree is also stored directly under the
%systemroot%\SYSVOL\domain folder.
Group Policy Replication

FRS is a multi-master replication service that synchronizes folders between two or more Windows
Server 2003 or Windows 2000 systems. Modified files are queued for replication at the point the file
is closed. In the case of conflicting modifications between two copies of an FRS replica, the file with
the latest modification time will overwrite any other copies. This is referred to as a “last-writer-wins”
model.
FRS replication topology configuration is stored as a combination of child objects of each FRS replica
partner (in the FRS Subscriptions subcontainer) and objects within another hidden subcontainer of
the domain System container. Replication links between systems are maintained as FRS
subscription objects. These objects specify the replica partner and the replication schedule. It is
possible to view the schedule by browsing to an FRS subscription object and viewing the properties.
The replica partner is stored as the object GUID of the computer account of that partner.
The Sysvol folder is a special case of FRS replication. Active Directory automatically maintains the
subscription objects and their schedules as the directory replication is built and maintained. It is
possible, but not recommended, to modify the properties (for example, the schedule) of the Sysvol
subscription objects manually.
The FRS replication schedule only approximates to the directory replication schedule so it is possible
for the directory-based Group Policy information and the file-based information to get temporarily
out of synch. Since GPO version information is stored in both the Group Policy container object and
in the Group Policy template, any discrepancy can be viewed with tools such as Gpotool.exe and
Repadmin.exe.
For those Group Policy extensions that store data in only one data store (either Active Directory or
Sysvol), this is not an issue, and Group Policy is applied as it can be read. Such extensions include
Administrative Templates, Scripts, Folder Redirection, and most of the Security Settings.
For any Group Policy extension that stores data in both storage places (Active Directory and
Sysvol), the extension must properly handle the possibility that the data is unsynchronized. This is
also true for extensions that need multiple objects in a single store to be atomic in nature, since
neither storage location handles transactions.
An example of an extension that stores data in both Active Directory and the Sysvol is the Group
Policy Software Installation extension. The .aas files are stored in the Sysvol and the Windows Installer package
definition is in Active Directory. If the .aas file exists, but the corresponding Active Directory
components are not present, the software is not installed. If the .aas file is missing, but the package
is known in Active Directory, application installation fails gracefully and will be retried on the next
processing of Group Policy.
The tools used to manage Active Directory and Group Policy, such as GPMC, the Group Policy Object
Editor, and Active Directory Users and Computers all communicate with domain controllers. If there
are several domain controllers available, changes made to objects like users, computers,
organizational units, and GPOs might take time to appear on other domain controllers. The
administrator might see different data depending on the last domain controller on which changes
were made and which domain controller they are currently viewing the data from.
If multiple administrators manage a common GPO, it is recommended that all administrators use
the same domain controller when editing a particular GPO, to avoid collisions in FRS. Domain
Admins can use a policy to specify how Group Policy chooses a domain controller — that is, they can
specify which domain controller option should be used. The Group Policy domain controller selection
policy setting is available in the Administrative Templates node for User Configuration, in the
System\Group Policy subcontainer.
Top of page
Network Ports Used by Group Policy
Port Assignments for Group Policy

Service Name                            UDP                     TCP

Lightweight Directory Access Protocol   n/a                     389

SMB                                     n/a                     445

DCOM                                    Dynamically assigned    Dynamically assigned

RPC                                     Dynamically assigned    Dynamically assigned
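As a quick connectivity check, the following Python sketch (not part of the original documentation) tests whether the fixed TCP ports in this table are reachable on a domain controller. The host name is a placeholder, and the dynamically assigned RPC and DCOM ports are not covered.

import socket

def tcp_port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

DC = "dc1.example.com"   # placeholder domain controller name
for port, service in [(389, "LDAP"), (445, "SMB")]:
    state = "reachable" if tcp_port_open(DC, port) else "unreachable"
    print(f"{service} (TCP {port}): {state}")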


Top of page

How Domains and Forests Work


Updated: March 28, 2003
How Domains and Forests Work
In this section

• Domain and Forest Structure

• Active Directory Objects

• Domain and Forest Protocols and Programming Interfaces

• Domains and Forests Processes and Interactions

• Network Ports used by Domains and Forests

• Related Information
Active Directory stores data for an entire forest. A forest is a distributed database, which is made up
of directory partitions spread across multiple computers. A domain is one partition of the database;
each domain contains Active Directory objects, such as security principal objects (users, computers,
and groups) to which you can grant or deny access to network resources. All domain data stored in
the domain directory partition is replicated to domain controllers in that domain only.
The directory partitions that store configuration and schema information are replicated to domain
controllers in all domains. In this way, Active Directory provides a data repository that is logically
centralized but physically distributed. Because all domain controllers store forest-wide configuration
and schema information, a domain controller in one domain can reference a domain controller in
any other domain if the information that it is requesting is not stored locally.
This section details how domains and forests work, discusses the various components of domains
and forests, describes several common processes that depend on domains and forests, and lists the
network ports related to domains and forests.
Top of page
Domain and Forest Structure
A forest consists of a hierarchical structure of domain containers that are used to categorically store
information about objects on the network. Domain containers are considered the core functional
units in the forest structure. This is because each domain container in a forest is used primarily to
store and manage Active Directory objects, most of which have a physical representation (such as
people, printers, or computers). Forests also provide the structure by which domain containers can
be segregated into one or more unique Domain Name System (DNS) namespace hierarchies known
as domain trees.

In addition, the domain tree hierarchy is based on trust relationships — that is, the domain
containers are linked by intra-forest trust relationships. When it is necessary for domain containers
in the same organization to have different namespaces, you can create a separate tree for each
namespace. In Active Directory, the roots of trees are linked automatically by two-way, transitive
trust relationships. Trees linked by trust relationships form a forest. A single tree that is related to
no other trees constitutes a forest of one tree. The domain and forest structure is made up of the
following components:

• Cross-References

• Trust Relationships

• Forest Root Domain

• Domain Trees and Child Domains

• Domain Names
For more information about Active Directory Domains and DNS, see “How DNS Support for Active
Directory Works.”
This section describes the structure and function of these components, and describes how this
structure helps administrators manage the network so that users can accomplish business
objectives.

Cross-References
Cross-references enable every domain controller to be aware not only of the partitions that it holds,
but of all directory partitions in the forest. The information contained within cross-references forms
the glue that holds the pieces of the domain and forest structure together. Because Active Directory
is logically partitioned, and directory partitions are the discrete components of the directory that
replicate between domain controllers, either all objects in a directory partition are present on a
particular domain controller or no objects in the directory partition are present on the domain
controller. For this reason, cross-references have the effect of linking the partitions together, which
allows operations such as searches to span multiple partitions.
Cross-references are stored as directory objects of the class crossRef that identify the existence
and location of all directory partitions, irrespective of location in the directory tree. In addition,
these objects contain information that Active Directory uses to construct the directory tree
hierarchy. Values for the following attributes are required for each cross-reference:
• nCName. The distinguished name of the directory partition that the crossRef object references. (The prefix nC stands for naming context, which is a synonym for directory partition.) The combination of all of the nCName properties in the forest defines the entire directory tree, including the subordinate and superior relationships between partitions.

• dNSRoot. The DNS name of the domain where servers that store the particular directory partition can be reached. This value can also be a DNS host name.
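For illustration, the following Python sketch enumerates these cross-reference objects from the Partitions container and prints their nCName and dNSRoot values. It assumes the third-party ldap3 package; the server, credentials, and forest root distinguished name are placeholders.

from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("dc1.example.com", get_info=ALL)           # placeholder domain controller
conn = Connection(server, user="EXAMPLE\\admin",
                  password="placeholder", auto_bind=True)  # placeholder credentials

# Enumerate the crossRef objects in the Partitions container of the Configuration partition.
conn.search(search_base="cn=Partitions,cn=Configuration,dc=example,dc=com",  # placeholder forest root
            search_filter="(objectClass=crossRef)",
            search_scope=SUBTREE,
            attributes=["nCName", "dNSRoot"])

for entry in conn.entries:
    print(entry.nCName, "->", entry.dNSRoot)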
How Cross-Reference Information is Propagated Throughout the Domain and Forest
Structure
For every directory partition in a forest, there is an internal cross-reference object stored in the
Partitions container (cn=Partitions,cn=Configuration,dc=ForestRootDomain). Because cross-
reference objects are located in the Configuration container, they are replicated to every domain
controller in the forest, and thus every domain controller has information about the name of every
partition in the forest. By virtue of this knowledge, any domain controller can generate referrals to
any other domain in the forest, as well as to the schema and configuration directory partitions.

When you create a new forest, the Active Directory Installation Wizard creates three directory
partitions: the first domain directory partition, the configuration directory partition, and the schema
directory partition. For each of these partitions, a cross-reference object is created automatically.
Thereafter, when a new domain is created in the forest, another directory partition is created and
the respective cross-reference object is created. When the configuration directory partition is
replicated to the new domain controller, a cross-reference object is created on the domain naming
master and is then replicated throughout the forest.
Note
The state of cross-reference information at any specific time is subject to the effects of replication latency.
For more information about cross-reference objects, see “How Active Directory Searches Work.”
Cross-reference objects can also be used to generate referrals to other directory partitions located
in another forest through external cross-references.
External Cross-References
An external cross-reference is a cross-reference object that can be created manually to provide the
location of an object that is not stored in the forest. If your Lightweight Directory Access Protocol
(LDAP) clients submit operations for an external portion of the global LDAP namespace against
servers in your forest, and you want servers in your forest to refer the client to the correct location,
you can create a cross-reference object for that directory in the Partitions container. There are two
ways that external cross-references are used:
• To reference external directories by their disjoint directory name (a name that is not contiguous with the name of this directory tree). In this case, when you create the cross-reference, you create a reference to a location that is not a child of any object in this directory.

• To reference external directories by a name that is within the Active Directory namespace (a name that is contiguous with the name of this directory tree). In this case, when you create the cross-reference, you create a reference to a location that is a child of a real object in this directory.
Because the domain component (dc=) portion of the distinguished names of all Active Directory
domains matches their DNS addresses, and because DNS is the worldwide namespace, all domain
controllers can generate external referrals to each other automatically.

Trust Relationships
Active Directory provides security across multiple domains through intra-forest trust relationships.
When there are trust relationships between domains in the same forest, the authentication
mechanism for each domain trusts the authentication mechanism for all other trusted domains. If a
user or application is authenticated by one domain, its authentication is accepted by all other
domains that trust the authenticating domain. Users in a trusted domain have access to resources
in the trusting domain, subject to the access controls that are applied in the trusting domain.
Note
Default intra-forest trust relationships are created at the time the domains are created. The
number of trust relationships required to connect n domains is n–1, whether the domains are
linked in a single, contiguous parent-child hierarchy or constitute two or more separate
contiguous parent-child hierarchies.
A trust relationship is a relationship established between two domains that allows users in one
domain to be recognized by a domain controller in the other domain. Trusts let users access
resources in the other domain, and also let administrators manage user rights for users in the other
domain. Account authentication between domains is enabled by two-way, transitive trust
relationships. All domain trusts in an Active Directory forest are two-way and transitive and have
the following attributes:
• Two-way. When you create a new child domain, the child domain automatically trusts the parent domain, and vice versa. At the practical level, this means that authentication requests can be passed between the two domains in both directions.

• Transitive. A transitive trust reaches beyond the two domains in the initial trust relationship. For example, if Domain A and Domain B (parent and child) trust each other, and if Domain B and Domain C (also parent and child) trust each other, Domain A and Domain C also trust each other (implicitly), even though no direct trust relationship between them exists.
At the level of the forest, a trust relationship is created automatically between the forest root
domain and the root domain of each domain tree added to the forest, with the result that complete
trust exists between all domains in an Active Directory forest. At the practical level, because trust
relationships are transitive, a single logon process lets the system authenticate a user (or
computer) in any domain in the forest. As shown in the following figure, this single logon process
lets the account access resources on any domain in the forest.
Transitive Trusts Facilitate Cross-Domain Access to Resources With a Single Logon

Note
Single logons enabled by trusts do not necessarily imply that the authenticated user has rights
and permissions in all domains in the forest. This is because in any discussion of trust
relationships, access to resources always assumes the limitations of access control.

Forest Root Domain


The first domain created in the forest is called the forest root domain. When you create a new tree,
you specify the root domain of the initial tree, and a trust relationship is established between the
root domain of the second tree and the forest root domain. If you create a third tree, a trust
relationship is established between the root domain of the third tree and the forest root domain.
Because all trust relationships created within a forest are transitive and two-way, the root domain of
the third tree also has a two-way trust relationship with the root domain of the second tree. In
Windows 2000 Active Directory, the forest root domain cannot be deleted, changed, or renamed. In
Windows Server 2003 Active Directory, the forest root domain cannot be deleted, but it can be
restructured or renamed.
The distinguished name of the forest root domain is used to locate the configuration and schema
directory partitions in the namespace. The distinguished names for the configuration and schema
containers in Active Directory always show these containers as child objects in the forest root
domain. For example, in the child domain Noam.wingtiptoys.com, the distinguished name of the
Configuration container is cn=configuration,dc=wingtiptoys,dc=com. The distinguished name of the
Schema container is cn=schema,cn=configuration,dc=wingtiptoys,dc=com. However, this naming
convention provides only a logical location for these containers.
The containers do not exist as child objects of the forest root domain, nor is the schema directory
partition actually a part of the configuration directory partition. They are separate directory
partitions. Every domain controller in a forest stores a copy of the configuration and schema
directory partitions, and every copy of these partitions has the same distinguished name on every
domain controller. The following operations occur when you create the forest root domain:

• The Schema container and the Configuration container are created.

• The Active Directory Installation Wizard assigns the PDC emulator, RID master, domain naming master, schema master, and infrastructure master roles to the domain controller.

• The Enterprise Admins and Schema Admins groups are located in this domain. By default, members of these two groups have forest-wide administrative credentials.

Domain Trees and Child Domains


A forest is a collection of one or more domain trees, organized as peers and connected by two-way,
transitive trust relationships. A single domain constitutes a tree of one domain, and a single tree
constitutes a forest of one tree. Thus, a forest is synonymous with Active Directory — that is, the
set of all directory partitions in a particular directory service instance (which includes all domains
and all configuration and schema information) makes up a forest.
Trees in the same forest do not form a contiguous namespace. They form a noncontiguous
namespace that is based on different DNS root domain names. However, trees in a forest share a
common directory schema, configuration, and global catalog. (The global catalog is a domain
controller that stores all objects of all domains in an Active Directory forest, which makes it possible
to search for objects at the forest level rather than at the tree level.) This sharing of common
schema and configuration data, in addition to trust relationships between their roots, distinguishes a
forest from a set of unrelated trees. Although the roots of the separate trees have names that are
not contiguous with each other, the trees share a single overall namespace because names of
objects can still be resolved by the same Active Directory instance.
Note

The directory schema and configuration data are shared because they are stored in separate
logical directory partitions that are replicated to domain controllers in every domain in the forest.
The data about a particular domain is replicated only to domain controllers in the same domain.
Domain Trees
A domain tree is a DNS namespace: it has a single root domain and is built as a strict hierarchy;
each domain below the root domain has exactly one superior, or parent, domain. The namespace
created by this hierarchy, therefore, is contiguous — each level of the hierarchy is directly related to
the level above it and to the level below it, if any. In Active Directory, the following rules determine
the way that trees function in the namespace:
• A tree has exactly one name. The name of the tree is the DNS name of the domain at the root of the tree.

• The names of domains created beneath the root domain (child domains) are always contiguous with the name of the tree root domain.

• The DNS names of the child domains of the tree root domain reflect this organization; therefore, the children of a tree root domain called Somedomain are always children of that domain in the DNS namespace (for example, Child1.somedomain, Child2.somedomain, and so forth).
Note
Tree and forest hierarchies are specific to Active Directory domains. A Windows NT 4.0 domain
that is configured to trust or to be trusted by an Active Directory domain is not part of the forest to
which the Active Directory domain belongs.
The forest structure provides organizations with the option of constructing their enterprise from
separate, distinct, noncontiguous namespaces. Having a separate namespace is desirable under
conditions where, for example, the namespace of an acquired company should remain intact. If you
have business units with distinct DNS names, you can create additional trees to accommodate the
names.
Note
Before creating a new domain tree when you want a different DNS namespace, consider creating
another forest. Multiple forests provide administrative autonomy, isolation of the schema and
configuration directory partitions, separate security boundaries, and the flexibility to use an
independent namespace design for each forest.
When you create a new domain tree, you specify the root domain of the initial tree, and a trust
relationship is established between the root domain of the new tree (the second tree) and the forest
root domain. If you create a third tree, a trust relationship is established between the root domain
of the third tree and the forest root domain. Because a trust relationship is transitive and two-way,
the root domain of the third tree also has a two-way trust relationship with the root domain of the
second tree. The following operations occur when you create a new tree root domain in an existing
forest:
• Location of a source domain controller in the forest root domain and synchronization of domain system time with the system time of the source domain controller.

• Creation of a tree-root trust relationship between the tree root domain and the forest root domain, and creation of the trusted domain object (TDO) in both domains. The tree-root trust relationship is two-way and transitive.
Child Domains
Child domains can represent geographical entities (for example, the United States and Europe),
administrative entities within the organization (for example, sales and marketing departments), or
other organization-specific boundaries, according to the needs of the organization. Domains are
created below the root domain to minimize Active Directory replication and to provide a means for
creating domain names that do not change. Changes in the overall domain architecture, such as
domain collapses and domain re-creation, create difficult and potentially IT-intensive support
requirements. A good namespace design should be capable of withstanding reorganizations without
the need to restructure the existing domain hierarchy.
Each time you create a new child domain, a two-way transitive trust relationship (known as the
parent-child trust) is automatically created between the parent and new child domain. In this way,
transitive trust relationships flow upward through the domain tree as it is formed, creating transitive
trusts between all domains in the domain tree. The parent-child relationship is a naming and trust
relationship only. Administrators in a parent domain are not automatically administrators of a child
domain. Likewise, policies set in a parent domain do not automatically apply to child domains.
The following operations occur when you create a child domain in an existing tree:

• Verification of the name that you provide as a valid child domain name.

• Location of a source domain controller in the parent domain and synchronization of the system time of the child domain with the system time of the source domain controller.

• Creation of parent-child TDOs in the System folder on both the parent domain and the child domain. The TDO objects identify two-way transitive trust relationships between the child domain and the parent domain.

• Replication of the Active Directory Schema container and the Configuration container from a domain controller in the parent domain.

Domain Names
Active Directory uses DNS naming standards for hierarchical naming of Active Directory domains
and computers. For this reason, domain and computer objects are part of both the DNS domain
hierarchy and the Active Directory domain hierarchy. Although these domain hierarchies have
identical names, they represent separate namespaces.
The domain hierarchy defines a namespace. A namespace is any bounded area in which
standardized names can be used to symbolically represent some type of information (such as an
object in a directory or an Internet Protocol [IP] address) and that can be resolved to the object
itself. In each namespace, specific rules determine how names can be created and used. Some
namespaces, such as the DNS namespace and the Active Directory namespace, are hierarchically
structured and provide rules that allow the namespace to be partitioned. Other namespaces, such
as the Network Basic Input/Output System (NetBIOS) namespace, are flat (unstructured) and
cannot be partitioned.
The main function of DNS is to map user-readable computer names to computer-readable IP
addresses. Thus, DNS defines a namespace for computer names that can be resolved to IP
addresses, or vice versa. In Windows NT 4.0 and earlier, DNS names were not required; domains
and computers used NetBIOS names, which were mapped to IP addresses by using the Windows
Internet Name Service (WINS). Although DNS names are required for Active Directory domains and
Windows 2000–, Windows XP–, and Windows Server 2003–based computers, NetBIOS names also
are supported in Windows Server 2003 for interoperability with Windows NT 4.0 domains and with
clients that are running Windows NT 4.0 or earlier, Windows for Workgroups, Windows 98, or
Windows 95.
Note
WINS and NetBIOS are not required in an environment where computers run only Windows 2000,
Windows XP or Windows Server 2003, but WINS is required for interoperability between
Windows 2000 Server– and Windows Server 2003–based domain controllers, computers that are
running earlier versions of Windows, and applications that depend on the NetBIOS namespace —
for example, applications that call NetServerEnum and other so-called Net* application
programming interfaces (APIs) that depend on NetBIOS.
Active Directory domains have both DNS names and NetBIOS names. In general, both names are
visible to end users. The DNS names of Active Directory domains include two parts, a prefix and a
suffix. The DNS prefix is the first label in the DNS name of the domain. The suffix is the name of the
Active Directory forest root domain. For example, the first label of the DNS name for the
Child1.wingtiptoys.com domain is Child1 and is referred to as the DNS prefix. The name of the
forest root domain in that same forest is wingtiptoys.com and is referred to as the DNS suffix.
Active Directory Domain Names and the Internet
Active Directory domain names can exist within the scope of the global Internet DNS namespace.
When an Internet presence is required by an individual or organization, the Active Directory
namespace is maintained as one or more hierarchical Windows 2000 domains beneath a root
domain that is registered as a DNS namespace. Registration of individual and organizational root
domain DNS names ensures the global uniqueness of all DNS names and provides for the
assignment of network addresses that are recorded in the global DNS database. Registration of the
DNS name for the root domain of the individual or organization also grants that individual or
organization the authority to manage its own hierarchy of child domains, zones, and hosts within
the root domain.
Note
An organization might or might not choose to be part of the global Internet DNS namespace.
However, even if the root domain of the organization is not registered as an Internet DNS
namespace, the DNS service is required to locate Windows 2000–, Windows XP– and Windows
Server 2003–based computers in general and Windows 2000 Server– and Windows Server 2003–
based domain controllers in particular.
For more information about domain names, see “How DNS Support for Active Directory Works.”
Top of page
Active Directory Objects
Active Directory objects represent the entities that make up a network. An object is a distinct,
named set of attributes that represents something concrete, such as a user, a printer, or an
application. When you create an Active Directory object, Active Directory generates values for some
of the attributes of the object; others you provide. For example, when you create a user object,
Active Directory assigns a globally unique identifier (GUID) and a security identifier (SID), and you
provide values for such attributes of the user as the given name, surname, logon identifier, and so
on. This section describes the key identifiers assigned to objects by Active Directory and their
associated naming schemes.

Object Uniqueness
Each object in Active Directory is associated with at least one identifier that identifies that object as
unique in an enterprise. This object identifier is referred to as a globally unique identifier (GUID).
Another identifier, referred to as a security ID (SID), is used on security-enabled objects so that
authentication and authorization services can determine its origin and validity within the domain or
forest. Active Directory objects that are security-enabled are referred to as security principals.
A security principal is an object managed by Active Directory that is automatically assigned a SID
for logon authentication and for access to resources. A security principal can be a user account,
computer account, or a group, and is unique within a single domain. A security principal object must
be authenticated by a domain controller in the domain in which the security principal object is
located, and it can be granted or denied access to network resources.
GUIDs
Every object in Active Directory has a GUID, a 128-bit number assigned by the Directory System
Agent when the object is created. The GUID, which cannot be altered or removed, is stored in an
attribute that is required for every object, not just security principal objects. The GUID of each
object is stored in its Object-GUID (objectGUID) property. When storing a reference to an Active
Directory object in an external store (for example, a Microsoft SQL Server database), the
objectGUID value should be used.
Active Directory uses GUIDs internally to identify objects. For example, the GUID is one of the
properties of an object that is published in the global catalog. Searching the global catalog for the
GUID of a User object will yield results if the user has an account somewhere in the enterprise. In
fact, searching for any object by Object-GUID might be the most reliable way of finding the object
you want to find. This is because the values of other object properties can change, but the Object-
GUID never changes. When an object is assigned a GUID, it always keeps that value.
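The following Python sketch illustrates such a search. It assumes the third-party ldap3 package, placeholder server, credentials, and search base, and the common convention of escaping the 16 bytes of the objectGUID (in GUID-structure byte order) into the LDAP filter; the GUID value shown is made up.

import uuid
from ldap3 import Server, Connection, ALL, SUBTREE

# Made-up GUID used only to show the escaping; substitute a real objectGUID value.
guid = uuid.UUID("01234567-89ab-cdef-0123-456789abcdef")
escaped = "".join("\\%02x" % b for b in guid.bytes_le)   # byte-wise LDAP filter escaping

conn = Connection(Server("dc1.example.com", get_info=ALL),        # placeholder server
                  user="EXAMPLE\\admin", password="placeholder",  # placeholder credentials
                  auto_bind=True)
conn.search("dc=example,dc=com",                                  # placeholder search base
            "(objectGUID=%s)" % escaped,
            search_scope=SUBTREE,
            attributes=["distinguishedName"])
print(conn.entries)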
Security IDs (SIDs)
When a new security principal is created, Active Directory stores the SID of the security principal in
the Object-SID (objectSID) property of the object and assigns the new object a GUID. Each security
principal (as well as the domain itself) has a SID, which is the property that authoritatively identifies
the object to the Windows security subsystem. The SID of a user, group, or computer is derived
from the SID of the domain to which the object belongs; this SID is made up of the value of the
domain SID and one additional 32-bit component called the relative identifier (RID).
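As a small illustration of that structure, the following Python sketch splits a textual SID into its domain SID and RID portions. The sample SID is a made-up value.

def split_sid(sid):
    """Split a textual SID into the domain SID and the relative identifier (RID)."""
    domain_sid, _, rid = sid.rpartition("-")
    return domain_sid, int(rid)

domain_sid, rid = split_sid("S-1-5-21-1004336348-1177238915-682003330-1104")  # sample value
print(domain_sid)   # S-1-5-21-1004336348-1177238915-682003330  (identifies the domain)
print(rid)          # 1104  (identifies the account within the domain)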
In Windows 2000, Windows XP and Windows Server 2003 operating systems, ACLs are used to
identify users and groups by SID, not by GUID — even ACLs on resources in Active Directory. A user
gains access to, for example, a Group Policy object, based on one of the SIDs belonging to the user,
not based on the GUID for the User object.
SIDs and Migrations
When an employee moves to a different location or to a new position, their user account might need
to be moved or migrated to a different domain within the organization. Migrating a user account
from one domain to another replaces the SID of the account with a new SID and new RID assigned
by the new domain. For example, if Nicolette moves from North America to Europe, but stays in the
same company, her account can be transferred with her. An administrator with the appropriate
credentials can simply move her User object from, say, the Noam.wingtiptoys.com domain to the
Euro.wingtiptoys.com domain. When the account is moved, the User object for her account needs a
new SID. The domain identifier portion of a SID issued in Noam is unique to Noam, so the SID for
her account in Euro has a different domain identifier. The RID portion of a SID is unique relative to
the domain, so if the domain changes, the RID also changes.
Thus when a User object moves from one domain to another, a new SID must be generated for the
user account and stored in the Object-SID property. Before the new value is written to the property,
the previous value is copied to another property of a User object, SID History (SIDHistory). This
property can hold multiple values. Each time a User object moves to another domain, a new SID is
generated and stored in the Object-SID property and another value is added to the list of old SIDs
in SIDHistory. When a user logs on and is successfully authenticated, the domain authentication
service queries Active Directory for all of the SIDs associated with the user — the current SID of the
user, any old SIDs of the user, and the SIDs for the groups to which the user belonged. All of these
SIDs are returned to the authentication client and are included in the access token of the user.
When the user tries to gain access to a resource, any one of the SIDs in the access token, including
one of the SIDs in SIDHistory, can be used to authorize the user to the resource.
For more information about SIDHistory, see “How Security Identifiers Work” and “Security
Considerations for Trusts.”

Object Names
Object names are used to identify accounts in an Active Directory network. Objects have both LDAP-
related names and logon names. Each object name represents a unique attribute for that object.
LDAP-Related Names
Active Directory is a Lightweight Directory Access Protocol (LDAP)-compliant directory service. In
the Windows 2000 Server and Windows Server 2003 operating systems, all access to Active
Directory objects occurs through LDAP. LDAP defines what operations can be performed in order to
query and modify information in a directory, and how information in a directory can be accessed in
compliance with established security requirements. Therefore, you can use LDAP to find or
enumerate directory objects and to query or administer Active Directory.
It is possible to query by LDAP distinguished name (which is itself an attribute of the object), but
because these names are difficult to remember, LDAP also supports querying by other attributes
(for example, by using the attribute “color” to find “color” printers). This lets you find an object
without having to know the distinguished name. If your organization has several domains, it is
possible to use the same user name or computer name in different domains.
Like some other types of object names, LDAP-related names can change. The SID, globally unique
ID, LDAP distinguished name, and canonical name generated by Active Directory uniquely identify
each user, computer, or group in the forest. If the security principal object is renamed or moved to
a different domain, the SID, LDAP relative distinguished name, LDAP distinguished name, and
canonical name change, but the globally unique ID generated by Active Directory does not change.
Top of page
Distinguished Name
Objects are located within Active Directory domains according to a hierarchical path, which includes
the labels of the Active Directory domain name and each level of container objects. The full path to
the object is defined by the distinguished name (also known as a DN). The name of the object itself,
separate from the path to the object, is defined by the relative distinguished name.
The distinguished name is unambiguous (identifies one object only) and unique (no other object in
the directory has this name). By using the full path to an object, including the object name and all
parent objects to the root of the domain, the distinguished name uniquely and unambiguously
identifies an object within a domain hierarchy. It contains sufficient information for an LDAP client to
retrieve information about the object from the directory.
For example, a user named Samantha Smith works in the marketing department of a company as a
promotions coordinator. Therefore, her user account is created in an organizational unit that stores
the accounts for marketing department employees who are engaged in promotional activities. The
user identifier for Samantha Smith is Ssmith, and she works in the North American branch of the
company. The root domain of the company is wingtiptoys.com, and the local domain is
noam.wingtiptoys.com. The following distinguished name is of the user object Ssmith in the
noam.wingtiptoys.com domain.
cn=Ssmith,ou=Promotions,ou=Marketing,dc=noam,dc=wingtiptoys,dc=com
Note

Active Directory tools do not display the LDAP abbreviations for the naming attributes domain
component (dc=), organizational unit (ou=), common name (cn=), and so forth. These
abbreviations are shown only to illustrate how LDAP recognizes the portions of the distinguished
name. Most Active Directory tools display object names in canonical form, as described later in
this section. Because distinguished names are difficult to remember, it is useful to have other
means for retrieving objects. Active Directory supports querying by attribute (for example, the
building number where you want to find a printer), so an object can be found without having to
know the distinguished name.
Top of page
Relative Distinguished Name
The relative distinguished name of an object is the part of the name that is an attribute of the
object itself — the part of the object name that identifies this object as unique from its siblings at its
current level in the naming hierarchy. Using the distinguished name mentioned earlier, the relative
distinguished name of the object is Ssmith. The relative distinguished name of the parent object is
Promotions. The maximum length allowed for a relative distinguished name is 255 characters, but
attributes have specific limits imposed by the directory schema. For example, in the case of the
common name (cn), which is the attribute type often used for naming the relative distinguished
name, the maximum number of characters allowed is 64.
Active Directory relative distinguished names are unique within a specific parent — that is, Active
Directory does not permit two objects with the same relative distinguished name under the same
parent container. However, two objects can have identical relative distinguished names but still be
unique in the directory because within their respective parent containers, their distinguished names
are not the same. (For example, the object cn=SSmith,dc=noam,dc=wingtiptoys,dc=com is
recognized by LDAP as being different from cn=SSmith,dc=wingtiptoys,dc=com.)
The relative distinguished name for each object is stored in the Active Directory database. Each
record contains a reference to the parent of the object. By following the references to the root, the
entire distinguished name is constructed during an LDAP operation.
Top of page
Canonical Name
By default, the user interface in Windows 2000, Windows XP, and Windows Server 2003 displays
object names that use the canonical name, which lists the relative distinguished names from the
root downward and without RFC 1779 naming attribute descriptors; it uses the DNS domain name
(the form of the name where domain labels are separated by periods). For the LDAP distinguished
name in the previous example, the respective canonical name would appear as follows:
noam.wingtiptoys.com/marketing/promotions/ssmith
Note
If the name of an organizational unit contains a forward slash character (/), Active Directory

requires an escape character in the form of a backslash (\) to distinguish between forward
slashes that separate elements of the canonical name and the forward slash that is part of the
organizational unit name. The canonical name that appears in Active Directory Users and
Computers properties pages displays the escape character immediately preceding the forward
slash in the name of the organizational unit. For example, if the name of an organizational unit is
Promotions/Northeast and the name of the domain is Wingtiptoys.com, the canonical name is
displayed as Wingtiptoys.com/Promotions\/Northeast.
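The following Python sketch illustrates the relationship between the two forms by converting the earlier example distinguished name into its canonical form. It is illustrative only and does not handle escaping, such as the forward-slash case described in the note above.

def dn_to_canonical(dn):
    """Convert a simple distinguished name into canonical form (no escaping handled)."""
    parts = [component.split("=", 1) for component in dn.split(",")]
    dns_name = ".".join(value for key, value in parts if key.lower() == "dc")
    path = [value for key, value in parts if key.lower() != "dc"]
    return dns_name + "/" + "/".join(reversed(path))

print(dn_to_canonical("cn=Ssmith,ou=Promotions,ou=Marketing,dc=noam,dc=wingtiptoys,dc=com"))
# noam.wingtiptoys.com/Marketing/Promotions/Ssmith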
Logon Names

A unique logon name is required for user security principals to gain access to a domain and its
resources. Security principals are objects to which Windows security is applied in the form of
authentication and authorization. Users are security principals, and they are authenticated (their
identity is verified) at the time they log on to the domain or local computer. They are authorized
(allowed or denied access) when they use resources.
In the Windows 2000, Windows XP and Windows Server 2003 operating systems, user security
principals require a unique logon name to gain access to a domain and its resources. Security
principal objects might be renamed, moved, or contained within a nested domain hierarchy. The
names of security principal objects must conform to the following guidelines:
• The name cannot be identical to any other user, computer, or group name in the domain. It can contain up to 20 uppercase or lowercase characters except for the following: " / \ [ ] : ; | = , + * ? <>

• A user name, computer name, or group name cannot consist solely of periods (.) or spaces.
The two types of logon names are user principal name and Security Account Manager account
names.
User Principal Name
In Active Directory, each user account has a user principal name (UPN) in the format user@DNS-
domain-name. A UPN is a friendly name assigned by an administrator that is shorter than the LDAP
distinguished name used by the system and easier to remember. The UPN is independent of the DN
of the user object, so a user object can be moved or renamed without affecting the user logon
name. When logging on using a UPN, users do not have to choose a domain from a list on the logon
dialog box.
The three parts of a UPN are the UPN prefix (user logon name), the @ character, and the UPN suffix
(usually a domain name). The default UPN suffix for a user account is the DNS name of the Active
Directory domain where the user account is located. For example, the UPN for user Frank Miller,
who has a user account in the Wingtiptoys.com domain (if Wingtiptoys.com is the only domain in
the tree), is FMiller@Wingtiptoys.com. The UPN is an attribute (userPrincipalName) of the security
principal object. If the userPrincipalName attribute of a User object has no value, the User object
has a default UPN of userName@DnsDomainName.
If your organization has many domains forming a deep domain tree organized by department and
region, default UPN names can become unwieldy. For example, the default UPN for a user might be
Sales.westcoast.microsoft.com. The logon name for a user in that domain is
user@sales.westcoast.microsoft.com. Instead of accepting the default DNS domain name as the
UPN suffix, you can simplify both administration and user logon processes by providing a single UPN
suffix for all users. (The UPN suffix is used only within the Active Directory domain and is not
required to be a valid DNS domain name.) You can choose to use your e-mail domain name as the
UPN suffix — userName@microsoft.com. This gives the user in the example the UPN name of
user@microsoft.com.
For a UPN–based logon, a global catalog might be necessary, depending on the user logging on and
the domain membership of the computer where the user logs on. A global catalog is needed if the
user logs on with a non-default UPN and the computer account of the user is in a different domain
than the user account of the user. For example, if, instead of accepting the default DNS domain
name as the UPN suffix (in the example user@sales.westcoast.microsoft.com), you provide a single
UPN suffix for all users (so that the user then becomes simply user@microsoft.com), a global
catalog is required for logon.

You use the Active Directory Domains and Trusts tool to manage UPN suffixes for a domain. UPNs
are assigned at the time a user is created. If you have created additional suffixes for the domain,
you can select from the list of available suffixes when you create the user or group account. The
suffixes appear in the list in the following order:
• Alternate suffixes (if any; last one created appears first)

• Root domain

• The current domain


SAM Account Name
A Security Accounts Manager (SAM) account name is required for compatibility with Windows NT
3.x and Windows NT 4.0 domains. The user interface in Windows 2000, Windows XP, and Windows
Server 2003 refers to the SAM account name as User logon name (pre-Windows 2000).
SAM account names are sometimes referred to as flat names because — unlike DNS names — SAM
account names do not use hierarchical naming. Because SAM names are flat, each one must be
unique in the domain.

Organizational Units
Active Directory allows administrators to create a hierarchy within a domain that meets the needs of
their organization. The object class of choice for building these hierarchies is the
organizationalUnit, a general-purpose container that can be used to group most other object
classes together for administrative purposes. An organizational unit in Active Directory is analogous
to a directory in the file system; it is a container that can hold other objects. You can use
organizational units for purposes such as creating an administrative hierarchy, applying Group
Policy, and delegating control of administration.
Administrative Hierarchy
Organizational units can be nested to create a hierarchy within a domain and form logical
administrative units for users, groups, and resource objects, such as printers, computers,
applications, and file shares. The organizational unit hierarchy within a domain is independent of the
structure of other domains; each domain can implement its own hierarchy. Likewise, domains that
are managed by a central authority can implement similar organizational unit hierarchies. The
structure is completely flexible, which allows organizations to create an environment that mirrors
the administrative model, whether it is centralized or decentralized.
Group Policy
Group Policy can be applied to organizational units to define the abilities of groups of computers and
users that are contained within the organizational units. Levels of control range from complete
desktop lockdown to a relatively autonomous user experience. Group Policy can affect functionality,
such as what applications are available to a group of users, what features within an application are
accessible on a particular computer, where documents are saved, and can affect access and user
permissions. Group Policy also affects where, when, and how application and operating system
updates or special scripts are applied.
Group Policy settings are stored as Group Policy objects in Active Directory. A Group Policy object
can be associated with one or more Active Directory containers, such as a site, domain, or
organizational unit.
Delegation of Control
The Active Directory object-based security model implements default access control that is
propagated down a subtree of container objects. You can use this technology to determine the
security for an entire group of objects according to the security that you set on the organizational
unit that contains the objects. By default, most Active Directory objects inherit ACEs from the
security descriptor on their parent container object. If necessary, you can change the inherited
permissions. However, as a best practice, you should avoid changing the default permissions or
inheritance settings on Active Directory objects unless you have additional security requirements.
Note
Because Active Directory is indexed, you can organize the tree to match your administrative
model, instead of having to organize it for ease of browsing.
Inheritance enables the access control information defined at a container object in Active Directory
to apply to the security descriptors of any subordinate objects, including other containers and their
objects. One benefit this provides is that it eliminates the need to apply permissions each time a
child object is created. You can apply or delegate administrative control over directory objects to
organizational units by setting access control.
This inheritance of access effectively delegates administrative control to individuals in the
organization. The best way to take full advantage of delegation and inherited control on directory
objects is to organize the hierarchy to match the way that the directory is administered.

User Accounts and Access Control


Active Directory authenticates and authorizes security principals such as user, inetorgperson, and
computer accounts to access shared resources on the network. The Local Security Authority (LSA) is
the security subsystem responsible for all interactive user authentication and authorization services
on a local computer. The LSA also processes authentication requests made through the Kerberos V5
protocol or the NTLM protocol in Active Directory.
Once the identity of a user is verified in Active Directory, the LSA on the authenticating domain
controller creates a security access token for that user. An access token contains the name of the
user, the SID for the user, any SIDs included in the SIDHistory property of the user, and the SIDs of
all of the groups to which the user belongs.
The information in the access token is used to determine the level of access a user has to resources
whenever the user attempts to access them. The SIDs in the access token are compared with the
list of SIDs that make up the discretionary access control list (DACL) on the resource to ensure that
the user has sufficient permission to access the resource. This is because the access control process
identifies user accounts by SID rather than by name.
Note
When a domain controller provides an access token to a user, the access token only contains
information about membership in domain local groups if the domain local groups are local to the
domain where the domain controller is located.
User Authorization
In addition to securing network access through user authentication, Active Directory protects shared
resources by facilitating user authorization. Once a user logon has been authenticated by Active
Directory, the user rights assigned to the user through security groups and the permissions
assigned on the shared resource determine if the user will be authorized to access that resource.
This authorization process protects shared resources from unauthorized access and permits access
to only authorized users or groups.
Administrators can use access control to manage user access to shared resources for security
purposes. In Active Directory, access control is administered at the object level by setting different
levels of access, or permissions, to objects, such as Full Control, Write, Read, Delete, or No Access.

Access control in Active Directory defines how users can use Active Directory objects. By default,
most permissions on objects in Windows Server 2003 Active Directory are at the most secure
setting.
The elements that define access control permissions on Active Directory objects include security
descriptors and the concept of object inheritance.
Security Descriptors
Access control permissions are assigned to securable objects and Active Directory objects to control
how different users can use each object. A securable object, or shared resource, is an object that is
intended to be used over a network by one or more users, and includes files, printers, folders, and
services. All securable objects and Active Directory objects store access control permissions in
security descriptors.
A security descriptor contains two access control lists (ACLs) used to assign and track security
information for each object: these are the discretionary access control list (DACL) and the system
access control list (SACL).
Discretionary access control lists (DACLs). DACLs identify the users and groups that are assigned or
denied access permissions on an object. If a DACL does not explicitly allow access to a user, or to
any group that a user is a member of, the user is implicitly denied access to that object. By default,
a DACL is controlled by the owner of an object or the person who created the object (In Windows,
the creator of an object is also the owner). The DACL contains access control entries (ACEs) that
determine user access to the object.
System access control lists (SACLs). SACLs identify the users and groups that you want to audit
when they successfully access or fail to access an object. Auditing is used to monitor events related
to system or network security, to identify security breaches, and to determine the extent and
location of any damage. By default, a SACL is controlled by the owner of an object or the person
who created the object. A SACL contains ACEs that determine whether to record a successful or
failed attempt by a user to access an object using a given permission, for example, Full Control or
Read.
Active Directory allows you to apply access control permissions to objects at very low levels,
including the ability to assign permissions on a per-attribute basis. To view DACLs and SACLs on
Active Directory objects using Active Directory Users and Computers, on the View menu, click
Advanced Features to access the Security tab for each object. You can also use the DSACLS support
tool to manage access control lists in Active Directory.
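
For example, the DACL and SACL entries on an object can be listed from a script by calling the DSACLS tool. The following hedged Python sketch assumes a domain-joined computer where dsacls.exe is available; the distinguished name is a placeholder:

import subprocess

target_dn = "OU=Engineering,DC=wingtiptoys,DC=com"  # placeholder object DN
result = subprocess.run(["dsacls", target_dn], capture_output=True, text=True)
print(result.stdout)  # prints the access control entries defined on the object
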
By default, DACLs and SACLs are associated with every Active Directory object, which helps reduce
attacks to your network by malicious users or any accidental mistakes made by domain users.

Computer Accounts
Each computer account created in Active Directory has a relative distinguished name, a pre-
Windows 2000 computer name (Security Accounts Manager account name), a primary DNS suffix, a
DNS host name, and a service principal name (SPN). The administrator enters the computer name
when creating the computer account. This computer name is used as the LDAP relative
distinguished name.
Active Directory suggests the pre-Windows 2000 name using the first 15 bytes of the relative
distinguished name. The administrator can change the pre-Windows 2000 name at any time.
The DNS name for a host is called a full computer name and is a DNS fully qualified domain name
(FQDN). The full computer name is a concatenation of the computer name (the first 15 bytes of the
SAM account name of the computer account without the $ character) and the primary DNS suffix
(the DNS domain name of the domain in which the computer account exists). It is listed on the
Computer Name tab in System Properties in Control Panel.
By default, the primary DNS suffix portion of the FQDN for a computer must be the same as the
name of the Active Directory domain where the computer is located. To allow different primary DNS
suffixes, a domain administrator might create a restricted list of allowed suffixes by creating the
msDS-AllowedDNSSuffixes attribute in the domain object container. This attribute is created and
managed by the domain administrator using Active Directory Service Interfaces (ADSI) or LDAP.
The SPN is a multivalue attribute. It is usually built from the DNS name of the host. The SPN is used
in the process of mutual authentication between the client and the server hosting a particular
service. The client finds a computer account based on the SPN of the service to which it is trying to
connect. The SPN can be modified by members of the Domain Admins group.
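
The naming rules above can be illustrated with a short sketch. The values below are placeholders; the sketch simply applies the 15-byte truncation and the name concatenation described in this section, plus one common HOST-style SPN form built from the DNS name (the SPN format shown is an assumption, not taken from this document):

computer_name = "ENGINEERING-WORKSTATION-07"      # relative distinguished name entered by the administrator
primary_dns_suffix = "wingtiptoys.com"

pre_windows_2000_name = computer_name[:15] + "$"   # suggested SAM account name
full_computer_name = computer_name[:15].lower() + "." + primary_dns_suffix
host_spn = "HOST/" + full_computer_name            # one common SPN form built from the DNS name

print(pre_windows_2000_name)   # ENGINEERING-WOR$
print(full_computer_name)      # engineering-wor.wingtiptoys.com
print(host_spn)                # HOST/engineering-wor.wingtiptoys.com
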
Top of page
Domain and Forest Protocols and Programming Interfaces
The primary protocol that is used by domain controllers in every domain throughout the forest is
LDAP, which runs on top of TCP/IP. LDAP is both a protocol and an API. In addition, the secured
communications between domain controllers must use the remote procedure call (RPC) protocol for
Messaging Application Programming Interface (MAPI), replication, domain controller management,
and SAM-related operations.
LDAP
LDAP is a directory service protocol that specifies directory communications. It runs directly over
TCP/IP, and it can also run over User Datagram Protocol (UDP) connectionless transports. Clients
can use LDAP to query, create, update, and delete information that is stored in a directory service
over a TCP connection through the TCP default port 389. Active Directory supports LDAP v2 (RFC
1777) and LDAP v3 (RFC 3377). LDAP v3 is an industry standard that can be used with any
directory service that implements the LDAP protocol. LDAP is the preferred and most common way
of interacting with Active Directory.
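
As a concrete illustration, the following sketch performs an LDAP v3 search over TCP port 389 using the third-party Python ldap3 package. The package choice, server name, credentials, and search base are assumptions; any LDAP client could be used:

from ldap3 import Server, Connection, SUBTREE

server = Server("dc01.wingtiptoys.com", port=389)
conn = Connection(server, user="jeff@wingtiptoys.com", password="placeholder", auto_bind=True)

# Query: find user objects and return two attributes for each one.
conn.search(search_base="DC=wingtiptoys,DC=com",
            search_filter="(objectClass=user)",
            search_scope=SUBTREE,
            attributes=["sAMAccountName", "distinguishedName"])
for entry in conn.entries:
    print(entry.sAMAccountName, entry.distinguishedName)
conn.unbind()
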
Historically, LDAP is a simplified (lightweight) version of Directory Access Protocol (DAP), which is
the original protocol that was used to interact with X.500 directories. X.500 defines an earlier set of
standards that was developed by the International Organization for Standardization (ISO). LDAP is
simpler than DAP in two key ways:
• Rather than using its own protocol stack according to the Open Systems Interconnection (OSI) networking model, LDAP communicates over Internet Protocol (IP) by using either UDP or TCP.
• LDAP syntax is easier to use than DAP syntax.
For these reasons, LDAP is widely used and accepted as the standard protocol for directory service
access. The following key aspects characterize LDAP:
• The protocol is carried directly over TCP for connection-oriented transport (receipt of data is acknowledged) and over UDP for connectionless transport (sent or received data is not acknowledged).
• Most protocol data elements can be encoded as ordinary strings, for example, as distinguished names.
• Referrals to other servers can be returned to the client.
• Simple Authentication and Security Layer (SASL) mechanisms can be used with LDAP to provide associated security services. SASL Digest is supported in Windows Server 2003 as the authentication standard for LDAP.
• Attribute values and distinguished names can be internationalized using the ISO 10646 character set.
• The protocol can be extended to support new operations, and controls can be used to extend existing operations.
• The schema is published through an attribute on the directory root object (rootDSE) for use by clients.
For more information about LDAP, see “How Active Directory Searches Work.”
RPC
Active Directory uses remote procedure call (RPC) for replication (REPL) and domain controller
management communications, MAPI communications, and SAM-related communications. RPC is a
powerful, robust, efficient, and secure interprocess communication (IPC) mechanism that enables
data exchange and invocation of functionality located in a different process. That different process
can be on the same computer, on the local area network (LAN), or across the Internet.

Authentication Protocols
Domain controllers authenticate users and applications by using one of two protocols: either the
Kerberos version 5 authentication protocol or the NTLM authentication protocol. When two Active
Directory domains or forests are connected by a trust, authentication requests made using these
protocols can be routed to provide access to resources in both forests.
NTLM
The NTLM protocol is the default protocol used for network authentication in the Windows NT 4.0
operating system. For compatibility reasons, it is used by Active Directory domains to process
network authentication requests that come from earlier Windows-based clients and servers.
Computers running Windows 2000, Windows XP or Windows Server 2003 use NTLM only when
authenticating to servers running Windows NT 4.0 and when accessing resources in Windows NT 4.0
domains.
When the NTLM protocol is used between a client and a server, the server must contact a domain
authentication service on a domain controller to verify the client credentials. The server
authenticates the client by forwarding the client credentials to a domain controller in the client
account domain.
Kerberos Version 5 Protocol
The Kerberos version 5 protocol is the default authentication protocol used by computers running
Windows 2000, Windows XP Professional, or Windows Server 2003. This protocol is specified in RFC
1510 and is fully integrated with Active Directory, server message block (SMB), HTTP, and RPC, as
well as the client and server applications that use these protocols. In Active Directory domains, the
Kerberos protocol is used to authenticate logons when any of the following conditions is true:

• The user who is logging on uses a security account in an Active Directory domain.
• The computer that is being logged on to is a Windows 2000–, Windows XP–, or Windows Server 2003–based computer.
• The computer that is being logged on to is joined to an Active Directory domain.
• The computer account and the user account are in the same forest.
• The computer from which the user is trying to access resources is located in a non-Windows-based operating system Kerberos version 5 realm.
If any computer involved in a transaction does not support the Kerberos version 5 protocol, the
NTLM protocol is used.
The authentication protocol of choice for Active Directory authentication requests is Kerberos V5. In
contrast to NTLM, when the Kerberos protocol is used, the server does not have to contact the
domain controller. Instead, the client gets a ticket for a server by requesting one from a domain
controller in the server account domain; the server validates the ticket without consulting any other
authority.
For more information about Kerberos, see “How the Kerberos Version 5 Authentication Protocol
Works.”

Active Directory APIs


You can use the following application programming interfaces (APIs) to access information in any
LDAP directory including Active Directory:
• Active Directory Service Interfaces (ADSI)
• LDAP C API
• REPL
• MAPI
• SAM
ADSI
Active Directory Service Interfaces (ADSI) provides a simple, powerful, object-oriented interface to
Active Directory. ADSI makes it easy for programmers and administrators to create directory
programs by using high-level tools, such as Microsoft Visual Basic, without having to worry about
the underlying differences between the different namespaces.
ADSI is supplied as a software development kit that enables you to build or buy programs that give
you a single point of access to multiple directories in your network environment, whether those
directories are based on LDAP or another protocol. ADSI is fully scriptable for ease of use by
administrators.
ADSI also enables access to Active Directory by exposing objects stored in the directory as
Component Object Model (COM) objects. A directory object is manipulated using the methods
available on one or more COM interfaces. ADSI has a provider-based architecture that allows COM
access to different types of directories for which a provider exists.
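
A hedged sketch of that COM access from Python follows; it assumes the pywin32 package on a domain-joined Windows computer. RootDSE is the standard ADSI entry point, and the remaining values are read from the directory at run time:

import win32com.client

# Bind to RootDSE through the LDAP ADSI provider and read the default naming context.
root_dse = win32com.client.GetObject("LDAP://RootDSE")
default_nc = root_dse.Get("defaultNamingContext")
print("Default naming context:", default_nc)

# Bind to the domain object itself and read one of its attributes.
domain = win32com.client.GetObject("LDAP://" + default_nc)
print("Domain distinguished name:", domain.Get("distinguishedName"))
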
LDAP C API
The LDAP C API, defined in Internet standard RFC 1823, is a set of low-level C-language APIs to the
LDAP protocol. Microsoft supports LDAP C APIs on all Windows platforms.
Developers have the choice of writing Active Directory-enabled applications using LDAP C APIs or
ADSI. LDAP C APIs are most often used to ease portability of directory-enabled applications to the
Windows platform. ADSI is a more powerful language and is more appropriate for developers writing
directory-enabled code on the Windows platform.
LDAP is the primary interface for the data store. Directory clients use LDAP v3 to connect to the
DSA through the LDAP interface. The LDAP interface is part of Wldap32.dll. LDAP v3 is backward
compatible with LDAP v2.
REPL
The REPL management interface provides functionality for finding data about domain controllers,
converting the names of network objects between different formats, manipulating SPNs and DSAs,
and managing replication of servers.
MAPI
Messaging clients gain access to the Microsoft Exchange Server directory service by using MAPI
address book providers. For compatibility with existing messaging clients, Active Directory supports
the MAPI-RPC address book provider, which provides access to Active Directory, for example, to find
the telephone number of a user.

SAM
Security Accounts Manager (SAM) is a proprietary interface for connecting to the DSA on behalf of
clients that use Windows NT 4.0 or earlier. These clients use Windows NT 4.0 networking APIs to
connect to the DSA through SAM. Replication with Windows NT 4.0 backup domain controllers
(BDCs) goes through the SAM interface as well.
Top of page
Domain and Forest Processes and Interactions
Many network-related operations depend on domains and forests to complete various tasks.
This section describes some of the processes and interactions that commonly occur within the
boundaries of domains or forests.

Logging on to a Domain
When a user with an account in an Active Directory domain logs on at the keyboard of a computer
running Windows 2000, Windows XP or Windows Server 2003, the logon request is processed in
three stages:
1. The user requests admission to the ticket-granting service for the domain.
This is accomplished through an Authentication Service (AS) Exchange between the Kerberos
security support provider (SSP) on the computer and the Key Distribution Center (KDC) in the
domain in which the user account exists. The result is a ticket-granting ticket (TGT) that the
user can present in future transactions with this KDC.
2. The user requests a ticket for the computer.
This is accomplished through a Ticket-Granting Service (TGS) Exchange between the Kerberos
SSP on the computer and the KDC for the domain in which the computer account exists. The
result is a session ticket that the user can present when he or she requests access to system
services on the computer.
3. The user requests admission to Local System services on the computer.
This is accomplished when the Kerberos SSP on the computer presents a session ticket to the
LSA on the computer.
If the account domain of the computer is different from the account domain of the user, an extra
step is involved. Before the Kerberos SSP can request a session ticket for the computer, it must ask
the KDC in the domain where the user account exists for a TGT that is good for admission to the
KDC in the domain where the computer account exists. The SSP can then present the TGT to the
KDC in the domain of the computer and get a session ticket for the computer.
The precise steps in the logon process depend on how the computer is configured. With standard
configurations of Windows, interactive users log on with a password. In another optional
configuration of Windows, users log on with a smart card. Although the basic process is the same
for both configurations, there are some differences. For more information about the domain logon
process, see “How Interactive Logon Works.”

Processing Authentications Across Domains and Forests


When a request for authentication is referred to a domain, before the domain controller in that
domain authenticates the user to access resources in the domain, it must determine whether a trust
relationship exists with the domain from which the request comes, as well as the direction of the
trust and whether the trust is transitive or nontransitive. The authentication process between
trusted domains varies according to the authentication protocol in use. The Kerberos version 5 and
NTLM protocols in Windows 2000 Server and Windows Server 2003 process referrals for
authentication to a domain differently, as do other authentication protocols, such as Digest and
SChannel, that Windows 2000 Server and Windows Server 2003 support.

In an Active Directory environment the Kerberos-based authentication process is most commonly
used. To access a shared resource in another domain by using Kerberos authentication, a computer
where the user logs on first requests a ticket from a domain controller in its account domain to the
server in the trusting domain that hosts the requested resource. This ticket is then issued by an
intermediary trusted by both the requesting computer and the server. The computer then presents
this trusted ticket to the server in the trusting domain for authentication. This process, however,
becomes more complex when a workstation in one forest attempts to access data on a resource
computer in another forest.
In this case, the Kerberos authentication process contacts the domain controller for a service ticket
to the SPN of the resource computer. Once the domain controller queries the global catalog and
determines that the SPN is not in the same forest as the domain controller, the domain controller
sends a referral for its parent domain back to the workstation. At that point, the workstation queries
the parent domain for the service ticket and continues to follow the referral chain until it reaches
the domain where the resource is located. For more detailed information about how authentication
requests are processed across domains and forests, see “How Domain and Forest Trusts Work.”

Joining a Computer to a Domain


Joining a computer to a domain creates the computer account object for the client in an Active
Directory location where all computer accounts are created by default during a join operation. The
default location is set to the Computers container in Active Directory. A computer account differs
from a user account in that it identifies the computer to the domain, while a user account identifies
a user to a computer.
The act of joining a computer to a domain creates an account for the computer on the domain if it
does not already exist. When you join a client computer running Windows NT 4.0, Windows 2000,
Windows XP or Windows Server 2003 to an Active Directory domain, the following events occur:

• The domain name is validated.
• A domain controller in the domain is located through a call to the DsGetDcName API.
• A session is established with the domain controller under the security context of the passed-in credentials that are supplied in the Network Identification tab under System Properties in Control Panel.
• The computer account is enabled. If the flags so specify (NETSETUP_ACCT_CREATE), the APIs create the computer account on the domain controller.
• The local password for this account is created in the Local Security Authority (LSA).
• The local primary domain information LSA policy is set to refer to the new domain. This includes the domain name and the domain SID.
Note
For clients running Windows 2000, Windows XP, and Windows Server 2003 only, the LSA policy consists of the domain name, domain SID, DNS domain name, DNS forest name, and domain GUID.
• The DNS name assigned to the local computer is updated.
• The local group membership is changed to add members of the Domain Admins group to the local Administrators group.
• The Net Logon trusted domain cache is initialized to the list of trusted domains.
• For clients running Windows 2000, Windows XP, and Windows Server 2003 only, the Windows Time Service is enabled and started.
• The Net Logon service is started.

To join a workstation or member server to a domain, you can use the Netdom tool. For example, to
join a workstation to the Wingtiptoys.com domain in the engineering organizational unit, type the
following command at the workstation:
Netdom join /d:wingtiptoys.com /OU:OU=engineering,DC=wingtiptoys,DC=com
When a computer joins a domain, the following changes occur on domain controllers running
Windows NT 4.0, Windows 2000 Server and Windows Server 2003:
• A computer object is created. The name of this object is generated by appending a dollar sign ($) to the name (uppercase letters) of the client.
• On Windows 2000 Server– and Windows Server 2003–based domain controllers only, the Net Logon service creates SPNs on the computer object.
Netsetup.log
When joining a computer to an Active Directory domain, the Networking Setup (NetSetup) utility
installs all the necessary Microsoft-supported networking components. The Netsetup.log file provides
information about attempts to join domains and records any errors that might prevent the join
operation from succeeding.
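
When troubleshooting a failed join from a script, the tail of the log is usually what matters. The following sketch assumes the commonly used location %SystemRoot%\debug\netsetup.log, which is not stated in this document, so treat the path as an assumption:

import os

log_path = os.path.expandvars(r"%SystemRoot%\debug\netsetup.log")  # assumed location
if os.path.exists(log_path):
    with open(log_path, "r", errors="replace") as log:
        lines = log.readlines()
    print("".join(lines[-40:]))  # the most recent entries, where join errors are recorded
else:
    print("netsetup.log not found at", log_path)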

Registering DNS Names and Locating Domain Controllers


When a Windows 2000 Server–based server or Windows Server 2003–based member server is
promoted to a domain controller by installing Active Directory, the Net Logon service registers the
DNS resource records necessary for network hosts and services to locate the domain controller on
the network. When network hosts and services attempt an operation (such as joining a domain)
that requires a domain controller, the locator mechanism attempts to locate the domain controller
through DNS.
DNS is used because every Active Directory–based domain controller dynamically registers SRV
records in DNS. The SRV records enable servers to be located by service type (for example, LDAP)
and protocol (for example, TCP). Because domain controllers are LDAP servers that communicate
over TCP, SRV records can be used to find the DNS computer names of domain controllers. In
addition to registering LDAP-specific SRV records, Net Logon also registers Kerberos v5
authentication protocol–specific SRV records to enable locating servers that run the Kerberos Key
Distribution Center (KDC) service.
Domain controllers not only register their DNS domain names on a DNS server, but also register
their NetBIOS names by using a transport-specific mechanism (for example, WINS). Thus, a DNS
client locates a domain controller by querying DNS, and a NetBIOS client locates a domain controller
by querying the appropriate transport-specific name service. The domain controller locator service
consists of two main parts:

• Locator finds which domain controllers are registered with a DNS server.
• Locator submits a DNS query to the DNS server to locate a domain controller in the specified domain.
After this query is resolved, an LDAP User Datagram Protocol (UDP) lookup is sent to one or more of
the domain controllers listed in the response to the DNS query to ensure their availability. Finally,
the Net Logon service caches the discovered domain controller to aid in resolving future requests.
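
The DNS side of this lookup can be reproduced with a short sketch. It uses the third-party dnspython package and the LDAP SRV record name that domain controllers register under _msdcs; the package choice and domain name are assumptions:

import dns.resolver

domain = "wingtiptoys.com"  # placeholder domain
answers = dns.resolver.resolve("_ldap._tcp.dc._msdcs." + domain, "SRV")
for record in answers:
    # Each SRV record carries a priority, weight, port, and the host name of a domain controller.
    print(record.priority, record.weight, record.port, record.target)
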
For more information about the DC Locator process, see “How DNS Support for Active Directory
Works.”

Raising Domain and Forest Functional Levels


When Active Directory is installed on a server running Windows Server 2003, a set of basic Active
Directory features is enabled by default. In addition to the basic Active Directory features on
individual domain controllers, there are new domain- and forest-wide Active Directory features
available when all domain controllers in a domain or forest are running Windows Server 2003.
To enable the new domain-wide features, all domain controllers in the domain must be running
Windows Server 2003, and the domain functional level must be raised to Windows Server 2003. To
enable new forest-wide features, all domain controllers in the forest must be running Windows
Server 2003, and the forest functional level must be raised to Windows Server 2003.
Before raising the forest functional level to Windows Server 2003, verify that all domains in the
forest are set to the domain functional level of Windows 2000 native or Windows Server 2003. Note
that domains that are set to the domain functional level of Windows 2000 native will automatically
be raised to Windows Server 2003 at the same time the forest functional level is raised to Windows
Server 2003. For more detailed information about functional levels, see “How Active Directory
Functional Levels Work.”
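
Before raising a functional level, the current values can be inspected over LDAP. The following hedged sketch uses the ldap3 package and reads the msDS-Behavior-Version attribute from the domain object and from the Partitions container; the server, credentials, and distinguished names are placeholders:

from ldap3 import Server, Connection, BASE

conn = Connection(Server("dc01.wingtiptoys.com"),
                  user="jeff@wingtiptoys.com", password="placeholder", auto_bind=True)
for base in ("DC=wingtiptoys,DC=com",                                  # domain functional level
             "CN=Partitions,CN=Configuration,DC=wingtiptoys,DC=com"):  # forest functional level
    conn.search(base, "(objectClass=*)", search_scope=BASE,
                attributes=["msDS-Behavior-Version"])
    print(base, conn.entries[0]["msDS-Behavior-Version"])
conn.unbind()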

Replicating Directory Partitions


Active Directory uses RPC over IP to transfer replication data between domain controllers. RPC over
IP is used for both intersite and intrasite replication. To keep data secure while in transit, RPC over
IP replication uses both authentication (using the Kerberos V5 authentication protocol) and data
encryption.
When a direct or reliable IP connection is not available, replication between sites can be configured
to use the Simple Mail Transfer Protocol (SMTP). However, SMTP replication functionality is limited,
and requires an enterprise certification authority (CA). SMTP can only be used to replicate the
configuration, schema and application directory partitions, and does not support the replication of
domain directory partitions. For more detailed information about the replication process, see “How
Active Directory Replication Topology Works.”
Top of page
Network Ports Used by Domains and Forests
The following tables list the network ports that are associated with domains and forests.
Port Assignments for Raising Active Directory Functional Levels

Service Name UDP TCP

LDAP 389 389

LDAP SSL N/A 636


Port Assignments for Data Store

Service Name UDP TCP

LDAP 389 389

LDAP SSL N/A 636

RPC Endpoint Mapper 135 135

Global Catalog LDAP N/A 3268

Global Catalog LDAP SSL N/A 3269


Port Assignments for Service Publication and SPNs

Service Name UDP TCP

LDAP 389 389

LDAP SSL N/A 636

RPC Endpoint Mapper 135 135

Global Catalog LDAP N/A 3268

Global Catalog LDAP SSL N/A 3269

Kerberos 88 88
Port Assignments for Active Directory Searches

Service Name UDP TCP

LDAP 389 389

LDAP SSL N/A 636

Global Catalog LDAP N/A 3268

Global Catalog LDAP SSL N/A 3269


Port Assignments for Global Catalogs

Service Name UDP TCP

LDAP N/A 3268

LDAP N/A 3269 (global catalog Secure Sockets Layer [SSL])

LDAP 389 389

LDAP N/A 636 (SSL)

RPC/REPL 135 135 (endpoint mapper)

Kerberos 88 88

DNS 53 53

SMB over IP 445 445


Port Assignments for Replication

Service Name UDP TCP

LDAP 389 389

LDAP N/A 636 (SSL)

RPC/REPL N/A 135 (endpoint mapper)

LDAP N/A 3268

Kerberos 88 88

DNS 53 53

SMB over IP 445 445


Port Assignments for Operations Masters

Service Name UDP TCP

LDAP 389 389

LDAP N/A 636 (SSL)

RPC/REPL N/A 135 (endpoint mapper)

Netlogon N/A 137

Kerberos 88 88

DNS 53 53

SMB over IP 445 445


Port Assignments for Interactive Logon

Service Name UDP TCP

Kerberos 88 88

Local Security Authority (LSA) RPC Dynamic RPC Dynamic RPC

NTLM Dynamic Dynamic


Port Assignments for Kerberos V5 Protocol

Service Name UDP TCP

DNS 53 53

Kerberos 88 88
Port Assignment for DC Locator

Service Name UDP TCP

LDAP 389 389


The following table shows the list of ports that might need to be opened before you establish trusts.
Ports Required for Trusts

Task: Set up trusts on both sides from the internal forest
Outbound ports: LDAP (389 UDP and TCP), Microsoft SMB (445 TCP), Kerberos (88 UDP), Endpoint resolution — portmapper (135 TCP), Net Logon fixed port
Inbound ports: N/A
From–To: Internal domain domain controllers–External domain domain controllers (all ports)

Task: Trust validation from the internal forest domain controller to the external forest domain controller (outgoing trust only)
Outbound ports: LDAP (389 UDP), Microsoft SMB (445 TCP), Endpoint resolution — portmapper (135 TCP), Net Logon fixed port
Inbound ports: N/A
From–To: Internal domain domain controllers–External domain domain controllers (all ports)

Task: Use Object picker on the external forest to add objects that are in an internal forest to groups and DACLs
Outbound ports: N/A
Inbound ports: LDAP (389 UDP and TCP), Windows NT Server 4.0 directory service fixed port, Net Logon fixed port, Kerberos (88 UDP), Endpoint resolution — portmapper (135 TCP)
From–To: External server–Internal domain PDCs (Kerberos); External domain domain controllers–Internal domain domain controllers (Net Logon)

Task: Set up trust on the external forest from the external forest
Outbound ports: N/A
Inbound ports: LDAP (389 UDP and TCP), Microsoft SMB (445 TCP), Kerberos (88 UDP)
From–To: External domain domain controllers–Internal domain domain controllers (all ports)

Task: Use Kerberos authentication (internal forest client to external forest)
Outbound ports: Kerberos (88 UDP)
Inbound ports: N/A
From–To: Internal client–External domain domain controllers (all ports)

Task: Use NTLM authentication (internal forest client to external forest)
Outbound ports: N/A
Inbound ports: Endpoint resolution — portmapper (135 TCP), Net Logon fixed port
From–To: External domain domain controllers–Internal domain domain controllers (all ports)

Task: Join a domain from a computer in the internal network to an external domain
Outbound ports: LDAP (389 UDP and TCP), Microsoft SMB (445 TCP), Kerberos (88 UDP), Endpoint resolution — portmapper (135 TCP), Net Logon fixed port, Windows NT Server 4.0 directory service fixed port
Inbound ports: N/A
From–To: Internal client–External domain domain controllers (all ports)

How Operations Masters Work


Updated: March 28, 2003
In this section

• Operations Masters Protocols

• Operations Masters Roles and Functionality

• Role Transfer and Seizure

• Network Ports Used by Operations Masters

• Related Information
Active Directory is a multimaster-enabled database, which provides the flexibility of allowing
changes to the directory to occur at any domain controller. However, some changes, if performed in
a multimaster fashion, can cause errors in the Active Directory database. For these changes, one
domain controller, called an operations master, is assigned to accept requests for such changes.
Operations masters perform updates to the directory in a single-master fashion, meaning that only
the domain controller assigned to hold the operations master role is allowed to process the update.
This single-master update model prevents conflicting updates from being made to the Active
Directory database.
Five operations masters are assigned to perform specific tasks in an Active Directory environment.
The schema master and the domain naming master are forestwide roles, meaning that only one of
each of these types of operations masters is in a forest. The relative identifier (RID) master,
infrastructure master, and primary domain controller (PDC) emulator master are domainwide roles,
meaning one of each of these types of operations masters is in each domain in a forest.
Because operations masters are used to perform specific tasks and are important to the performance
of the directory, they must be available to all domain controllers and directory clients that require
their services.
This section describes the functionality and interactions of each operations master in a Windows
Server 2003 Active Directory environment.

Operations Master Protocols


Operations masters use the same protocols as other Active Directory domain controllers. The
protocols that package the data sent to and from a domain controller are described in the following
table.
Operations Master Protocols

Lightweight directory access protocol (LDAP). The primary directory service protocol that specifies directory communications. It runs directly over TCP/IP, and it can also run over UDP connectionless transports (UDP access is primarily used by the domain controller Locator process). Clients use LDAP to query, create, update, and delete information that is stored in a directory service over a TCP connection through the TCP port 389. Active Directory supports LDAP v2 (RFC 1777) and LDAP v3 (RFC 2251). LDAP v3 is an industry standard that can be used with any directory service that implements the LDAP protocol. LDAP is the preferred and most common way of interacting with Active Directory.

Remote procedure call (RPC). Protocol for replication (REPL), domain controller management communications, and SAM-related communications. RPC is a powerful, robust, efficient, and secure interprocess communication (IPC) mechanism that enables data exchange and invocation of functionality residing in a different process. That different process can be on the same computer, on the local area network (LAN), or across the Internet.

Simple mail transfer protocol (SMTP). Protocol for replication communications when a permanent, “always on” network connection does not exist between two domain controllers. SMTP is used to transport and deliver messages based on specifications in Request for Comments (RFC) 821 and RFC 822. SMTP can replicate configuration, schema, and global catalog replicas only (not writable domain data).
For more information about Active Directory protocols, see “How the Data Store Works.”
Top of page
Operations Master Roles and Functionality
Five operations master roles manage single-master operations in Active Directory.
Two operations master roles exist in each forest:

• The schema master, which governs all changes to the schema.
• The domain naming master, which adds and removes domains to and from the forest.
In addition to the two forestwide operations master roles, three operations master roles exist in each domain:
• The primary domain controller (PDC) emulator. The PDC emulator processes all replication requests from Microsoft Windows NT 4.0 backup domain controllers and processes all password updates for clients that are not running Active Directory–enabled client software.
• The relative identifier (RID) master. The RID master allocates RIDs to all domain controllers to ensure that all security principals have a unique identifier.
• The infrastructure master. The infrastructure master for a given domain maintains a list of the security principals from other domains that are members of groups within its domain.
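
Which domain controller holds each role can be read from the directory, because each role is anchored on an object whose fSMORoleOwner attribute names the role holder's NTDS Settings object. The following hedged sketch uses the ldap3 package; the server, credentials, and domain names are placeholders:

from ldap3 import Server, Connection, BASE

domain_dn = "DC=wingtiptoys,DC=com"
config_dn = "CN=Configuration," + domain_dn
role_objects = {
    "Schema master": "CN=Schema," + config_dn,
    "Domain naming master": "CN=Partitions," + config_dn,
    "PDC emulator": domain_dn,
    "RID master": "CN=RID Manager$,CN=System," + domain_dn,
    "Infrastructure master": "CN=Infrastructure," + domain_dn,
}

conn = Connection(Server("dc01.wingtiptoys.com"),
                  user="jeff@wingtiptoys.com", password="placeholder", auto_bind=True)
for role, dn in role_objects.items():
    conn.search(dn, "(objectClass=*)", search_scope=BASE, attributes=["fSMORoleOwner"])
    print(role, "->", conn.entries[0].fSMORoleOwner)
conn.unbind()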

Schema Master
The schema master controls all originating updates to the schema. The schema contains the master
list of object classes and attributes that are used to create all Active Directory objects, such as
computers, users, and printers. The domain controller that holds the schema master role is the only
domain controller that can perform write operations to the directory schema. These schema updates
are replicated from the schema operations master to all other domain controllers in the forest.

Domain Naming Master


The domain naming master is a forestwide operations master role because it manages the addition
and removal of all directory partitions, regardless of domain, in the forest hierarchy. The domain
controller that has the domain naming master role must be available in order to perform the
following actions:

• Add or remove domains.
• Add or remove application directory partitions.
• Add or remove cross-reference objects.
• Validate domain rename instructions.


Add or Remove Domains
Only the domain controller holding the domain naming master role has the authority to add a new
domain. The domain naming master manages this process, preventing multiple domains from
joining the forest with the same domain name. When you use the Active Directory Installation
Wizard to create or remove a child domain, it contacts the domain naming master and requests the
addition or deletion. The domain naming master is responsible for ensuring that domain names are
unique. If the domain naming master is unavailable, you cannot add domains or remove them from
the forest.
Add or Remove Application Directory Partitions
Application directory partitions are special partitions that can be created on domain controllers
running Windows Server 2003 to provide LDAP storage for dynamic data. In Windows 2000 Server,
nondomain data is limited to configuration and schema data, which is replicated to every domain
controller in the forest. In a Windows Server 2003 forest, application directory partitions provide
storage for nondomain, application-specific data that can be replicated to any arbitrary set of
domain controllers in the forest — as few or as many as needed by the application that uses the
data.
The Domain Name System (DNS) creates and uses application directory partitions by default when
it is installed in a Windows Server 2003 forest. DNS automatically creates two default DNS
application directory partitions below the forest root domain in the domain hierarchy, one for the
forest (ForestDnsZones) and one for the domain (DomainDnsZones). Thereafter, when you install
Active Directory to create a new domain in the forest and configure that server to be a DNS server,
DNS creates one default application directory partition below the new domain directory partition in
the domain hierarchy.
If the domain controller hosting the domain naming operations master role is not running Windows
Server 2003, or if it is unavailable, you cannot add or remove application directory partitions from
the forest.
Add or Remove Cross-Reference Objects
When Active Directory is installed to create the first domain controller in a new forest, the schema,
configuration, and domain directory partitions are created on the domain controller. At this time, a
cross-reference object (class crossRef) is created for each directory partition in the Partitions
container in the configuration directory partition
(CN=partitions,CN=configuration,DC=forestRootDomain). Creation of each subsequent directory
partition in the forest, either by installing Active Directory to create a new domain or by creating a
new application directory partition on an existing domain controller, initiates the creation of an
associated cross-reference object in the Partitions container.
Note
You can also manually create a cross-reference object for an application directory partition.
A cross-reference object identifies the name and server location of each directory partition in the
forest. The replication system uses this information to identify servers that store the same directory
partitions. LDAP queries use cross-reference objects to create referrals to different domains. If the
domain naming master is unavailable, you cannot add or remove cross-reference objects.
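
The cross-reference objects themselves can be listed with a one-level LDAP search of the Partitions container, as in this hedged ldap3 sketch (server, credentials, and names are placeholders); nCName identifies the directory partition and dnsRoot its DNS name:

from ldap3 import Server, Connection, LEVEL

conn = Connection(Server("dc01.wingtiptoys.com"),
                  user="jeff@wingtiptoys.com", password="placeholder", auto_bind=True)
conn.search("CN=Partitions,CN=Configuration,DC=wingtiptoys,DC=com",
            "(objectClass=crossRef)", search_scope=LEVEL,
            attributes=["nCName", "dnsRoot"])
for entry in conn.entries:
    print(entry.nCName, entry.dnsRoot)
conn.unbind()
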
Validate Domain Rename Instructions
When you use the domain rename tool, rendom.exe, to rename an Active Directory domain, the tool
must be able to access the domain naming operations master. The domain naming master is the
domain controller responsible for validating the instructions that the domain rename tool has
generated for the new forest.
Certain changes occur on the domain naming master in preparation for the actual execution of a
domain rename. On the domain naming master, the XML-encoded script containing the domain
rename instructions is written to the msDS-UpdateScript attribute on the Partitions container
object (cn=partitions,cn=configuration,dc=ForestRootDomain) in the configuration directory
partition. The Partitions container can only be updated on the domain controller holding the domain
naming operations master role for the forest. Therefore, the msDS-UpdateScript attribute must be
changed on the domain naming master.
In addition to the msDS-UpdateScript attribute value being written to the Partitions container, the
new DNS name of each domain being renamed is also written by rendom.exe to the msDS-
DnsRootAlias attribute on the cross-reference object (class crossRef) corresponding to that
domain. Again, because cross-reference objects are stored in the Partitions container and the
Partitions container can only be updated on the domain naming master, the msDS-DnsRootAlias
attribute can only be changed on the domain naming operations master.
The domain naming master replicates the script stored in the msDS-UpdateScript attribute and the
DNS name in the msDS-DnsRootAlias attribute to all domain controllers in the forest through
scheduled replication of the configuration directory partition.
In preparation for a domain rename operation, rendom.exe uses the domain naming operations
master to:
• Retrieve all information needed to compute the list of rename instructions for updating the configuration and schema directory partitions.
• Write the resulting script to the msDS-UpdateScript attribute of the Partitions container.
• Change the msDS-DnsRootAlias attribute of all cross-reference objects for domains that are being renamed.
• Validate the forest description against the current state of the forest. If any of these validity checks fails, the command fails. The following requirements are verified:
• Each existing domain is part of the new forest.
• The new forest is well formed.
• The new forest does not re-assign domain names that are being relinquished as part of the current domain rename operation.
For more information about domain rename, see “Domain Rename Technical Reference.”

Relative Identifier Master

The relative identifier (RID) master is a domainwide operations master role. The RID master is
responsible for allocating sequences of unique RIDs to each domain controller in its domain and for
moving objects from one domain to another.
You can create a new security principal object (user, group, or computer) on any domain controller.
When you create a security principal object, the domain controller attaches a unique Security ID
(SID) to the object. There are four elements of a domain SID, one of which is the RID for the
domain.
The following table describes the elements of the domain SID: S-1-5-Y1-Y2-Y3-Y4.
Elements of a Security Identifier (SID)

SID element Description

S-1 Indicates a revision 1 SID. At this time 1 is the only SID revision in use.

5 Indicates the issuing agency or issuing authority. A 5 always indicates Windows NT,
Windows 2000 Server, or Windows Server 2003 domains. Note that a well-known SID
may use a 1 or 0 as the issuing identity to signify that it is a well known SID.

Y1-Y2-Y3 The domain identifier portion of the SID. This is the same for every security principal
object created in that domain.

Y4 The relative ID (RID), which uniquely identifies a security principal, such as a user or
group, within the domain. This is obtained from the RID pool on a domain controller at the
time the object is created.
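
A short sketch of the layout in the table above: split a SID string into the issuing authority, the domain identifier, and the RID. The SID value is illustrative only:

def split_sid(sid):
    parts = sid.split("-")                     # ["S", revision, authority, Y1, ..., RID]
    authority = parts[2]                       # 5 for Windows domains
    domain_identifier = "-".join(parts[3:-1])  # the Y1-Y2-Y3 portion
    rid = int(parts[-1])                       # Y4
    return authority, domain_identifier, rid

authority, domain_id, rid = split_sid("S-1-5-21-1004336348-1177238915-1108")
print(authority)   # 5
print(domain_id)   # 21-1004336348-1177238915
print(rid)         # 1108
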
RID Allocation
Domain controllers running Windows 2000 and Windows Server 2003 have a shared RID pool. The
RID operations master is responsible for maintaining a pool of RIDs to be used by the domain
controllers in its domain and for providing groups of RIDs to each domain controller when
necessary. When a new domain controller running Windows 2000 or Windows Server 2003 is added
to the domain, the RID master allocates a batch of approximately 500 RIDs from the domain RID
pool to that domain controller. Each time a new security principal is created on a domain controller,
the domain controller draws from its local pool of RIDs and assigns one to the new object. When the
number of RIDs in a domain controller’s RID pool falls below approximately 100, that domain
controller submits background requests (by means of RPC) for additional RIDs from the domain’s
RID master. The RID master allocates a block of approximately 500 RIDs from the domain’s RID
pool to the pool of the requesting domain controller.
The RID master does not actually maintain a pool of numbers. Rather, it maintains the highest
value of the last range it allocated. When a new request is received, it increments that value by one
to establish the low value in the new RID pool and then adds 499 to
establish the new maximum value. It sends these two values to the requesting domain controller to
use as its next allocation of RIDs.
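
The allocation arithmetic can be summarized in a few lines. This is only an illustrative sketch of the bookkeeping described above, not the actual implementation; the range size comes from the text:

class RidMaster:
    def __init__(self):
        self.highest_allocated = 0        # highest value of the last range handed out

    def allocate_pool(self):
        low = self.highest_allocated + 1  # increment by one to establish the low value
        high = low + 499                  # add 499 to establish the new maximum value
        self.highest_allocated = high
        return low, high                  # the range sent to the requesting domain controller

master = RidMaster()
print(master.allocate_pool())   # (1, 500)
print(master.allocate_pool())   # (501, 1000)
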
If a domain controller’s local RID pool is empty, and it cannot contact the domain’s RID master to
request additional RIDs, the domain controller will log event ID 16645, indicating that the maximum
account identifier allocated to the domain controller has been assigned and the domain controller
has failed to obtain a new identifier pool from the RID master. Likewise, when attempting to add
new objects in Active Directory, such as users, computers, or domain controllers, you might notice
event ID 16650 in the System log indicating that the object cannot be created because the directory
service was unable to allocate a relative identifier. Network connectivity to the RID master might
have been lost or the RID master might have been removed from the network. In any case, you
cannot create new security principal objects on the domain controller until RID pool acquisition is
successful.
Cross-Domain Moves
Migrating Active Directory objects from one domain to another requires the availability of the RID
master. You can only move an object out of its domain if the domain’s RID master can be
contacted. Requiring that objects be moved from one domain to another by using the RID master
prevents Active Directory from creating two objects in different domains with the same unique
identifier. This might happen if one object is moved from two domain controllers simultaneously to
two different domains.
You can use the Active Directory Migration Tool (ADMT) to perform intraforest migrations of domain
objects from one domain (the source domain) to another (the target domain). The RID master must
be online and available in the source domain for ADMT to migrate successfully. If the RID master is
unavailable, cross-domain migration of Active Directory objects will fail.
RID Attributes in Active Directory
The following are RID-related attributes in Windows Server 2003 Active Directory:
FsmoRoleOwner
DN path: CN=RID Manager$,CN=System,DC=<domain>,DC=com
This attribute points to the Domain Name path for the current RID master’s NTDS Settings object
according to the domain controller that is being queried.
RidAvailablePool
DN path: CN=RID Manager$,CN=System,DC=<domain>,DC=com
This attribute defines the global RID space from which the RID master allocates RID pools to the domain controllers in the domain.
RidAllocationPool
DN Path: CN=Rid Set,CN=<computername>,OU=domain controllers,DC=<domain>,DC=com
Each domain controller has two RID pools: the one that it is currently allocating from, and the
pool that it will use next. This attribute defines the RID pool that the domain controller will use
when its current RID pool is exhausted.
RidNextRid
DN Path: CN=Rid Set,CN=<computername>,OU=domain controllers,DC=<domain>,DC=com
This attribute defines the next free RID in the current allocation pool that is assigned to the next
security principal created on the local domain controller. RidNextRid is a non-replicated value in
Active Directory.
RidPreviousAllocationPool
DN Path: CN=Rid Set,CN=<computername>,OU=domain controllers,DC=<domain>,DC=com
This attribute defines the RID pool from which RIDs are currently being allocated. The value for
RidNextRid is implicitly a member of this pool.
RidUsedPool
DN Path: CN=Rid Set,CN=<computername>,OU=domain controllers,DC=<domain>,DC=com
This attribute defines the RID pools that have been used by a domain controller.
NextRid
DN Path: DC=<domain>,DC=com
This attribute defines the next RID field used by the RID master.
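
These attributes can be read over LDAP. The sketch below uses the ldap3 package and assumes the commonly documented packing of rIDAllocationPool as a 64-bit value whose low 32 bits hold the start of the range and whose high 32 bits hold the end; that packing is an assumption, not stated in this document. Server, credentials, and names are placeholders:

from ldap3 import Server, Connection, BASE

rid_set_dn = "CN=RID Set,CN=DC01,OU=Domain Controllers,DC=wingtiptoys,DC=com"
conn = Connection(Server("dc01.wingtiptoys.com"),
                  user="jeff@wingtiptoys.com", password="placeholder", auto_bind=True)
conn.search(rid_set_dn, "(objectClass=*)", search_scope=BASE,
            attributes=["rIDAllocationPool", "rIDNextRID"])
entry = conn.entries[0]

packed = int(entry.rIDAllocationPool.value)
low, high = packed & 0xFFFFFFFF, packed >> 32   # assumed low/high packing
print("Next RID pool:", low, "-", high)
print("Next RID on this DC:", entry.rIDNextRID)
conn.unbind()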

Primary Domain Controller (PDC) Emulator


The PDC emulator serves as primary domain controller for compatibility with earlier Windows
operating systems. Windows 2000 Server and Windows Server 2003 interoperate with
Windows NT workstations, member servers, and backup domain controllers. The primary domain
controller (PDC) in a Windows NT 3.51 or Windows NT 4.0 domain is responsible for the following:

• Processing password changes from both users and computers

• Replicating updates to backup domain controllers

• Running the Domain Master Browser


Active Directory uses multimaster replication for most directory updates, including password
changes. Therefore, if the PDC emulator becomes unavailable, the impact is small. However, in a
mixed environment with Windows NT–based domain controllers and Active Directory, the
unavailability of the PDC emulator has the following impact:
• When a user of a computer running Windows NT Workstation 4.0, Windows 95, or Windows 98 without the Active Directory client installed attempts a password change, the user sees a message similar to the following: “Unable to change password on this account. Please contact your system administrator.”
• In a mixed domain, the event logs of Windows NT 4.0 BDCs contain entries showing failed replication attempts.
• In a mixed domain, when a user tries to start User Manager on a Windows NT 4.0 backup domain controller, it results in a “domain unavailable” error message. If User Manager is already running, you will see an “RPC server unavailable” message. Attempting to create an account by using the net user /add command results in a “could not find domain controller for this domain” message.
• When you run Server Manager, you will see a message similar to the following: “Cannot find the primary domain controller for <domain name>. You may administer this domain, but certain domain-wide operations will be disabled.”
or (for Windows NT Workstation 4.0, Windows 95, and Windows 98) by installing the Active
Directory client, they cease to rely on the PDC and, instead, behave in the following manner:
• Clients do not make password changes at the PDC emulator. Instead, clients update passwords at any domain controller in the domain.
• The PDC emulator does not receive Windows NT 4.0 replication requests after all Windows NT 4.0 BDCs in a domain are upgraded to Windows 2000 Server or Windows Server 2003.
• Clients use Active Directory to locate network resources. They do not require the Computer Browser service.
After all computers are upgraded to Windows XP, Windows 2000 and Windows Server 2003, the
domain controller holding the PDC emulator role performs the following functions:
• When password changes are performed by other domain controllers in the domain, they are sent to the PDC emulator by using higher priority replication.
• When an authentication fails with an invalid password at other domain controllers in the domain, the authentication request is retried at the PDC emulator before failing. If a recent password update has reached the PDC emulator, the retried authentication request should succeed (a conceptual sketch of this retry follows this list).
• When an authentication succeeds for an account for which the most recent authentication attempt at the domain controller failed, the domain controller communicates this fact (“zero lockout count”) to the PDC emulator. This resets the lockout counter at the PDC emulator in case another client attempts to validate the same account by using a different domain controller.
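
The retry behavior can be pictured with a tiny sketch; it is a conceptual illustration only (names and data are invented), not the Netlogon implementation:

def authenticate(local_dc_passwords, pdc_passwords, user, password):
    if local_dc_passwords.get(user) == password:
        return True
    # The password is invalid at this domain controller: retry at the PDC emulator
    # before failing, in case a recent password change has not replicated yet.
    return pdc_passwords.get(user) == password

local_dc = {"jeff": "OldPassw0rd"}   # stale copy; the change has not replicated yet
pdc = {"jeff": "NewPassw0rd"}        # the PDC emulator received the change first
print(authenticate(local_dc, pdc, "jeff", "NewPassw0rd"))  # True
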
Therefore, when the PDC emulator is unavailable, you might experience an increase in support
requests regarding password difficulties. However, unlike the Windows NT 4.0 environment, where
the PDC was the only computer that wrote the updated password to the domain, in Windows 2000
Server and Windows Server 2003, any domain controller can write the password update to the
directory and normal replication will ensure that all domain controllers in the domain eventually
become aware of the change. This will occur even if the PDC emulator operations master remains
unavailable.
Windows Server 2003 Well Known Security Principals
In Windows Server 2003, the PDC emulator operations master is responsible for managing well-
known security principals. When you upgrade your environment from Windows 2000 Server, it is
important that you upgrade the PDC emulator early in the upgrade process so that the
“CN=WellKnown Security Principals,CN=Configuration,DC=<YourDomain>” container is updated
with the well-known security principals that are introduced in Windows Server 2003.
After upgrading the Windows 2000–based domain controller holding the role of the PDC emulator in
each domain in the forest to Windows Server 2003, several new well-known and built-in groups are
created and some new group memberships are established. If you transfer the PDC emulator role to
a Windows Server 2003–based domain controller instead of upgrading it, these groups will be
created when the role is transferred. The new well-known and built-in groups are:

• Builtin\Remote Desktop Users

• Builtin\Network Configuration Operators

• Performance Monitor Users

• Performance Log Users

• Builtin\Incoming Forest Trust Builders

• Builtin\Performance Monitoring Users

• Builtin\Performance Logging Users

• Builtin\Windows Authorization Access Group

• Builtin\Terminal Service License Server


The newly established group memberships are:
• If the Everyone group is in the Pre-Windows 2000 Compatible Access group, the Anonymous Logon group and Authenticated Users group are also added to the Pre-Windows 2000 Compatible Access group.
• The Network Servers group is added to the Performance Monitoring Users group.
• The Enterprise Domain Controllers group is added to the Windows Authorization Access group.
In addition, when upgrading the Windows 2000–based domain controller that holds the role of the
PDC emulator in the forest root domain, the following additional security principals are created:

• LocalService

• NetworkService

• NTLM Authentication

• Other Organization

• Remote Interactive Logon

• SChannel Authentication

• This Organization
AdminSDHolder
The Administrator security descriptor holder protects administrative groups through a background
task that computes the set of memberships and checks whether their security descriptors are well-
known protected security descriptors. If the security descriptor of any administrative account does
not match that of AdminSDHolder, the security descriptor is overwritten with the security descriptor
of AdminSDHolder. This task is executed only on the domain controller that holds the primary
domain controller (PDC) emulator operations master role.
DNS and the PDC
The first domain controller deployed in every domain is automatically assigned the PDC emulator
role. During the Active Directory installation on this server, Net Logon registers the
_ldap._tcp.pdc._msdcs.DnsDomainName DNS SRV resource record that enables clients to locate the
PDC emulator. Only the PDC emulator of the domain registers this SRV resource record.
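
A hedged dnspython query for that record follows (the package choice and domain name are assumptions):

import dns.resolver

domain = "wingtiptoys.com"  # placeholder domain
for record in dns.resolver.resolve("_ldap._tcp.pdc._msdcs." + domain, "SRV"):
    print(record.target, record.port)  # host name and LDAP port of the PDC emulator
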
DFS and the PDC
When a change is made to a Distributed File System (DFS) domain-based namespace, the change is
made on the domain controller that acts as the PDC emulator master. DFS root servers that host
domain-based namespaces poll the PDC emulator periodically to obtain an updated version of the
DFS metadata, and then they store this metadata in memory. Therefore, if the domain controller
that holds the PDC emulator role is not available, DFS will not function properly.
For more information about DFS and the role of the PDC emulator, see "How DFS Works" on the
Microsoft Web site at http://go.microsoft.com/fwlink/?LinkId=48186.
Raising the Domain Functional Level
In Windows Server 2003, the functional level of a domain or forest defines the set of advanced
Windows Server 2003 Active Directory features that are available in that domain or forest. The
functional level of a domain or forest also defines the set of Windows operating systems that can
run on the domain controllers in that domain or forest.
The PDC emulator is the domain controller targeted when you raise the domain functional level of
an Active Directory domain. Therefore, the PDC emulator must be available to raise the domain
functional level. For more information about Windows Server 2003 domain and forest functional
levels, see “Active Directory Functional Levels Technical Reference.”

Infrastructure Master
The infrastructure master is responsible for updating the group-to-user references whenever the
members of groups are renamed or changed within a domain.
For example, suppose you use Active Directory Users and Computers to add a user to a group
within a single domain. While still connected to the same domain controller, you can view the
group’s membership and see the user you just added. If you then rename the user object (that is,
change its CN attribute) and then display the group membership, you instantly see the user’s new
name in the list of group members.
However, when the user and group are in different domains, there is a lag between the time when
you rename a user object and when a group containing that user displays the user’s new name. This
time lag is inevitable in a distributed system where sites function independently and must rely on
replication for changes to be distributed to all sites.
The domain controller holding the infrastructure master role for the group’s domain is responsible
for updating the cross-domain group-to-user reference to reflect the user’s new name. Periodically,
the infrastructure master scans the domain accounts and verifies the membership of the groups. If
a user account is moved to a new domain, the infrastructure master identifies the user account’s
new domain and updates the group accordingly. After the infrastructure master updates these
references locally, it uses replication to update all other replicas of the domain. If the infrastructure
master is unavailable, these updates are delayed.
Phantom Records
Whenever a domain controller contains a reference to an object in another domain, the domain
controller’s local database contains a phantom record for that object. Within the local database, a
reference to the object is represented as the database pointer to (in fact the primary key of) the
phantom record.
For example, if a user in domain A is a member of a group in domain B, the local database of a
domain controller in domain B, holding the group, contains a phantom record for that user. The
group references the user by pointing to the phantom. The infrastructure master is responsible for
updating the phantom records in its local database. If the distinguished name of the user in domain
A changes, the infrastructure master in domain B is responsible for updating its reference to the
user.
Each phantom record in the database contains the referenced object’s GUID, its SID (for references
to security principals), and its distinguished name. If the referenced object is renamed or moved, its
GUID does not change; its SID changes if the move is cross-domain, and its distinguished name
always changes.
The infrastructure master periodically examines the phantom records in its local database. It
compares the data in a phantom record to the corresponding data on the replica of the referenced
object contained in the global catalog. Because global catalog servers receive regular updates for all
objects in the forest through replication, the global catalog’s data is – allowing for replication
latency – always up to date. If the SID or the distinguished name in a phantom record does not
match the SID or the distinguished name of the object in the global catalog, then the infrastructure
master updates its local database with the values from the global catalog. The infrastructure master
then replicates the new values to the other domain controllers in its domain.
A global catalog server holds a complete replica of its own domain and a partial replica of every
other domain in the forest. The complete replica of its own domain includes the GUID, SID, and
distinguished name of the domain’s objects. The partial replica of another domain includes the
GUIDs, SIDs, and distinguished names of all the objects in that domain. Because this data exists in
the global catalog server’s local database, there are no phantom records representing cross-domain
object references on a global catalog server.
Cross-domain object references on a global catalog server are represented in the same way as
intradomain references on any domain controller – that is, as a direct pointer to the referenced
object. Therefore, if the infrastructure master is running on a global catalog server, it never finds
any phantom records representing cross-domain references in its local database. Consequently, the
infrastructure master is unable to replicate any updated references to cross-domain objects to any
other domain controller in its domain. For this reason, the infrastructure master should never run on
a global catalog server in a forest that contains multiple domains.
If every domain controller in a domain is a global catalog server, then no domain controller in the
domain has any phantom records that need to be updated. In this case, the infrastructure master
has no work to do and remains in a dormant state.
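Because the placement rule above is easy to violate, it can be useful to verify where the infrastructure master is running. The following is a minimal sketch, assuming the third-party ldap3 package; the server name, credentials, and the DC=contoso,DC=com domain are placeholders. It reads the fSMORoleOwner attribute from the CN=Infrastructure object and then tests the options bit field of that NTDS Settings object (bit 0x1 indicates a global catalog server).

# Sketch: check whether the infrastructure master is also a global catalog server.
# Assumes the third-party ldap3 package; server name, credentials, and the
# DC=contoso,DC=com domain are placeholders.
from ldap3 import Server, Connection, ALL, BASE

DOMAIN_DN = "DC=contoso,DC=com"
NTDSDSA_OPT_IS_GC = 0x1  # bit set on an NTDS Settings object when the DC is a global catalog

server = Server("dc1.contoso.com", get_info=ALL)
conn = Connection(server, user="CN=Administrator,CN=Users," + DOMAIN_DN,
                  password="password-placeholder", auto_bind=True)

# The infrastructure master is recorded in the fSMORoleOwner attribute of the
# CN=Infrastructure object at the root of the domain directory partition.
conn.search("CN=Infrastructure," + DOMAIN_DN, "(objectClass=*)",
            search_scope=BASE, attributes=["fSMORoleOwner"])
ntds_settings_dn = conn.entries[0]["fSMORoleOwner"].value

# The options attribute of that NTDS Settings object indicates whether the DC is a GC.
conn.search(ntds_settings_dn, "(objectClass=*)", search_scope=BASE, attributes=["options"])
options = int(conn.entries[0]["options"].value or 0)

print("Infrastructure master:", ntds_settings_dn)
print("Also a global catalog server:", bool(options & NTDSDSA_OPT_IS_GC))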
For more information about phantom records, see “How the Data Store Works” in this collection.
Top of page
Role Transfer and Seizure
You can reassign an operations master role by transfer or, as a last resort, by seizure.
Role transfer is the preferred method to move an operations master role from one domain controller
to another. When an operations master role is transferred, the previous role holder replicates its
most recent updates to the new role holder to ensure that no information is lost. After the transfer
completes, the previous role holder reconfigures itself so that it no longer attempts to perform as
the operations master while the new domain controller assumes those duties. This prevents the
possibility of duplicate operations masters existing on the network at the same time, which can lead
to inconsistencies in the directory.
Role seizure should be used only as a last resort to forcibly remove an operations master role from
one domain controller and assign the role to a different domain controller. Use this process only
when the previous operations master fails and remains out of service for an extended period of
time. For more information about the ramifications of role seizure, see “Seizing Operations Master
Roles” later in this section.
During a role seizure, the domain controller that assumes the operations master role does not verify
that replication is updated, so recent changes can be lost. Because the previous role holder is
unavailable during the role seizure, it cannot know that a new role holder exists. If the previous role
holder comes back online, it assumes that it is still the operations master. This can result in
duplicate operations master roles on the network, which can lead to corruption of data in the
directory.

Transferring Operations Master Roles


To transfer a role to a new domain controller, first ensure that replication is up to date and
functioning properly between the domain controller you are transferring the role from and the
domain controller assuming the new role. This minimizes the time required to complete the role
transfer. If replication is significantly out of date, outstanding changes must replicate to the new
role holder before the transfer completes, so the process takes longer.
When an operations master role is transferred from one domain controller to another, the original
role holder checks that all recent updates have been replicated to the new role holder. In the event
that you must transfer an operations master role, it is best to transfer the role to a domain
controller located in the same site as the original role holder. Changes made by the original role
holder will replicate during the next replication cycle. Therefore, domain controllers located in the
same site are likely to have the most recent information from the original operations master role
holder.
If the new role holder is not located in the same site, changes might still need to replicate before
the role transfer completes (this occurs during the role transfer process). If the new role holder is in
the same site, it is more likely that the changes are already replicated, eliminating the need to
replicate a large amount of changes during the role transfer process. This can significantly decrease
the time required to transfer the role.
Operational Attributes Used to Transfer Roles
Not all attribute values are stored in a directory service. Instead, attribute values that are not
contained in the directory can be calculated when a request for the attribute is made. This type of
attribute is called an operational attribute. Note that this type of attribute is defined in the schema
but it does not contain a value in the directory. Instead, the domain controller that processes a
request for an operational attribute calculates the attribute’s value to answer the client request.
The following operational attributes are used to transfer operations master roles and are located on
the rootDSE (or root DSA-specific entry, the root of the Active Directory tree for a given domain
controller where specific information about the domain controller is kept):

• becomeRidMaster

• becomeSchemaMaster

• becomeDomainMaster

• becomePDC

• becomeInfrastructureMaster
When the appropriate operational attribute is written on the domain controller that is receiving the
operations master role, the role is removed from the old domain controller and assigned to the new
domain controller automatically. No manual intervention is required.
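The following is a minimal sketch of such a transfer request, assuming the third-party ldap3 package; the host name, credentials, and domain are placeholders. It writes the value 1 to the becomePdc operational attribute on the rootDSE of the domain controller that should receive the PDC emulator role. (The MMC snap-ins and Ntdsutil.exe are the usual administrative tools for role transfers.)

# Sketch: request a PDC emulator role transfer by writing the becomePdc operational
# attribute on the rootDSE of the domain controller that should receive the role.
# Assumes the third-party ldap3 package; host name and credentials are placeholders.
from ldap3 import Server, Connection, MODIFY_REPLACE

server = Server("dc2.contoso.com")  # the intended new role holder
conn = Connection(server, user="CN=Administrator,CN=Users,DC=contoso,DC=com",
                  password="password-placeholder", auto_bind=True)

# Writing the value 1 to the operational attribute (the rootDSE has an empty DN)
# triggers the transfer: the server contacts the current role holder, synchronizes,
# and then assumes the role.
if conn.modify("", {"becomePdc": [(MODIFY_REPLACE, [1])]}):
    print("Transfer request accepted.")
else:
    print("Transfer failed:", conn.result.get("description"))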
Decommissioning a Role Holder
When you use the Active Directory Installation Wizard to decommission a domain controller that
currently holds one or more operations master roles, the wizard reassigns the roles to a different
domain controller. When the wizard is run, it determines whether the domain controller currently
holds any operations master roles. If it detects any operations master roles, it queries the directory
for other eligible domain controllers and transfers the roles to a new domain controller. A domain
controller is eligible to hold a domainwide role if it is a member of the same domain. A domain
controller is eligible to hold a forestwide role if it is a member of the same forest.
You cannot control which domain controller the wizard chooses and the wizard does not indicate
which domain controller receives the roles. Because of this behavior, it is best to transfer the roles
prior to decommissioning the role holder so that you have control over which new domain controller
will hold the role.
Operational Attributes Used to Decommission a Role Holder
If you do not transfer operations master roles before demoting a domain controller, the
GiveAwayAllFsmoRoles operational attribute is written, which triggers the domain controller to
locate other domain controllers to transfer any roles it currently owns. Windows Server 2003
determines which roles the domain controller being decommissioned currently owns and locates a
suitable domain controller to assume these roles by following these rules:

• Locate a server in the same site

• Locate a server to which there is RPC connectivity

• Use a server over an asynchronous transport such as SMTP


In all role transfers, if the role is a domain-specific role, the role can be moved only to another
domain controller in the same domain. Otherwise, any domain controller in the forest is a candidate.

Seizing Operations Master Roles


Role seizure is the act of assigning an operations master role to a new domain controller without the
cooperation of the current role holder. During role seizure, a new domain controller assumes the
operations master role without communicating with the current role holder. Because this can have
an adverse effect on your Active Directory environment, seize operations master roles only as a last
resort, and only if the current role owner is offline and unlikely to return to service.
Role seizure can create two conditions that can cause problems in the directory. First, the new role
holder starts performing its duties based on the data located in its current directory partition. If
replication did not complete prior to the time when the original role holder went offline, the new role
holder might not receive changes that were made to the previous role holder. This can cause data
loss or data inconsistency in the directory database.
To minimize the risk of losing data to incomplete replication, do not perform a role seizure until
enough time has passed to complete at least one complete end-to-end replication cycle across your
network. Allowing enough time for complete end-to-end replication ensures that the domain
controller that assumes the role is as up-to-date as possible.
Second, the original role holder is not informed that it is no longer the operations master role
holder, which is not a problem if the original role holder stays offline. However, if it comes back
online (for example, if the hardware is repaired or the server is restored from a backup), it might
try to perform the operations master role that it previously owned. This can result in two domain
controllers performing the same operations master role simultaneously. Depending on the role that
was seized, the severity of duplicate operations master roles varies from no visible effect to
potential corruption of the Active Directory database. Seize the operations master role to a domain
controller that has the most recent updates from the current role holder to minimize the impact of
the role seizure.
In Windows 2000 Server with Service Pack 3 (SP3) or later and Windows Server 2003, if an
operations master that has been taken offline is intentionally or unintentionally returned to service,
it must successfully replicate inbound changes on the directory partition that replicates and
maintains that operations master state before resuming its previously held role. The directory
partitions that maintain each operations master are defined in the following table.
Operations Masters and Corresponding Directory Partitions

Operations Master Role                      Directory Partition
Schema master                               Schema partition
Domain naming master                        Configuration partition
Primary domain controller (PDC) emulator    Domain partition for the operations master role owner’s domain
RID master                                  Domain partition for the operations master role owner’s domain
Infrastructure master                       Domain partition for the operations master role owner’s domain
By waiting a full replication cycle, the domain controller can determine if another operations master
exists before it brings itself back online. Successful replication must occur before dependent
operations can be performed. When the previous role holder receives the current environmental
state from another replica through inbound replication, it will update its copy of the database so
that it no longer hosts the role in question. This is to prevent more than one domain controller from
holding the same operations master role in each domain or forest.
Operations master roles that are hosted by a single domain controller in that role’s domainwide or
forestwide replication scope do not have to satisfy the initial synchronization requirement because
the domain controller has no replication partners. Initial synchronization requirements only exist
when a current operations master role owner’s hasMasterNC attribute contains references to more
than one domain controller. The hasMasterNC attribute is part of a domain controller’s NTDS
Settings object, which is stored in the configuration directory partition.
Ramifications of Role Seizure
If a role is seized, the new role holder is configured to host the operations master role with the
assumption that you do not intend to return the previous role holder to service. Use role seizure
only when the previous role holder is not available and you need the operations master role to keep
the directory functioning. Because the previous role holder is not available during a seizure, you
cannot reconfigure the previous role holder and inform it that another domain controller is now
hosting the operations master role.
To reduce risk, perform a role seizure only if the missing operations master role unacceptably
affects performance of the directory. Calculate the effect by comparing the impact of the missing
service provided by the operations master to the amount of work that is needed to bring the
previous role holder safely back online after you perform the role seizure.
Active Directory continues to function when the operations master roles are not available. If the role
holder is offline only for a short period, you might not need to seize the role to a new domain
controller. Remember that returning an operations master to service after the role is seized can have
dire consequences if it is not done properly. The following table shows the consequences of an
unavailable operations master role and restoring a role holder after the role has been seized.
Operations Master Role Functionality Risk Assessment

Schema master
• Consequences if role is unavailable: You cannot make changes to the schema.
• Risk of improper restoration after seizure: Conflicting changes can be introduced to the schema if both schema masters attempt to modify the schema at the same time. This can result in a fragmented schema.
• Recommendation for returning to service: Not recommended. Can lead to a corrupted forest.

Domain naming master
• Consequences if role is unavailable: You cannot add or remove domains from the forest, add or remove application directory partitions, or perform domain rename operations.
• Risk of improper restoration after seizure: You cannot add or remove domains or application directory partitions, or clean up metadata. Domains and application directory partitions might appear as though they are still in the forest even though they are not.
• Recommendation for returning to service: Not recommended. Can lead to data corruption.

PDC emulator
• Consequences if role is unavailable: You cannot change passwords on clients that do not have Active Directory client software installed. No replication to Windows NT 4.0 backup domain controllers.
• Risk of improper restoration after seizure: Password validation can randomly pass or fail. Password changes take much longer to replicate throughout the domain.
• Recommendation for returning to service: Allowed. User authentication can be erratic for a time, but no permanent damage occurs.

Infrastructure master
• Consequences if role is unavailable: Delays displaying updated group membership lists in the user interface when you move users from one group to another within a single domain.
• Risk of improper restoration after seizure: Displays incorrect user names in group membership lists in the user interface after you move users from one group to another.
• Recommendation for returning to service: Allowed. May impact the performance of the domain controller hosting the role, but no damage occurs to the directory.

RID master
• Consequences if role is unavailable: Eventually, domain controllers cannot create new directory objects as each of their individual RID pools is depleted.
• Risk of improper restoration after seizure: Duplicate RID pools can be allocated to domain controllers, resulting in data corruption in the directory. This can lead to security risks and unauthorized access.
• Recommendation for returning to service: Not recommended. Can lead to data corruption.
Operational Attributes Used to Seize Roles
When you seize an operations master role from an existing computer, the fsmoRoleOwner
attribute is modified on the object that represents the root of the data directly, bypassing
synchronization of the data and graceful transfer of the role.
The fsmoRoleOwner attribute of each of the following objects is written with the distinguished
name of the NTDS Settings object (the data in Active Directory that defines a computer as a
domain controller) of the domain controller that is taking ownership of that role:
Schema master
LDAP://CN=Schema,CN=Configuration,DC=Contoso,DC=Com
Domain naming master
LDAP://CN=Partitions,CN=Configuration,DC=Contoso,DC=Com
RID master
LDAP://CN=Rid Manager$,CN=System,DC=Contoso,DC=Com
PDC emulator
LDAP://DC=Contoso,DC=Com
Infrastructure master
LDAP://CN=Infrastructure,DC=Contoso,DC=Com
As replication of this change starts to spread, other domain controllers learn of the operations
master role change.
For example, suppose Server1 is the PDC emulator in the Contoso.com domain and goes offline in a
way that prevents the administrator from decommissioning the computer properly. In that case, the
PDC emulator role must be forcibly assigned to Server2. After the role is seized, the value
CN=NTDS Settings,CN=Server2,CN=Servers,CN=Default-First-Site-
Name,CN=Sites,CN=Configuration,DC=Contoso,DC=Com
is present on the LDAP://DC=Contoso,DC=com object.
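The following is a minimal sketch of reading the current owner of each role from the objects listed above, assuming the third-party ldap3 package; the host name, credentials, and the Contoso domain are placeholders.

# Sketch: read the current owner of each operations master role by querying the
# fSMORoleOwner attribute on the objects listed above. Assumes the third-party
# ldap3 package; host name, credentials, and domain are placeholders.
from ldap3 import Server, Connection, BASE

DOMAIN_DN = "DC=Contoso,DC=Com"
ROLE_OBJECTS = {
    "Schema master":         "CN=Schema,CN=Configuration," + DOMAIN_DN,
    "Domain naming master":  "CN=Partitions,CN=Configuration," + DOMAIN_DN,
    "RID master":            "CN=RID Manager$,CN=System," + DOMAIN_DN,
    "PDC emulator":          DOMAIN_DN,
    "Infrastructure master": "CN=Infrastructure," + DOMAIN_DN,
}

conn = Connection(Server("dc1.contoso.com"),
                  user="CN=Administrator,CN=Users," + DOMAIN_DN,
                  password="password-placeholder", auto_bind=True)

for role, dn in ROLE_OBJECTS.items():
    conn.search(dn, "(objectClass=*)", search_scope=BASE, attributes=["fSMORoleOwner"])
    # The value is the distinguished name of the role owner's NTDS Settings object.
    print(role, "->", conn.entries[0]["fSMORoleOwner"].value)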
Top of page
Network Ports Used by Operations Masters
The following ports are used by all domain controllers.
Port Assignments for Operations Masters

Service Name     UDP      TCP
LDAP             389      389
LDAP                      636 (Secure Sockets Layer [SSL])
RPC/REPL                  135 (endpoint mapper)
NetLogon         137
Kerberos         88       88
DNS              53       53
SMB over IP      445      445


Top of page

How the Global Catalog Works


Updated: March 28, 2003
In this section

• Global Catalog Architecture

• Global Catalog Protocols

• Global Catalog Interfaces

• Global Catalog Physical Structure

• Global Catalog Processes and Interactions

• Network Ports Used by the Global Catalog

• Related Information
The global catalog provides a central repository of domain information for a forest by storing partial
replicas of all domain directory partitions. These partial replicas are distributed by multimaster
replication to all global catalog servers in a forest.
The global catalog makes the directory structure within a forest transparent to users who perform a
search. For example, if you search for all printers in a forest, a global catalog server processes the
query in the global catalog and then returns the results. Without a global catalog server, this query
would require a search of every domain in the forest.
During an interactive domain logon, the domain controller authenticates the user by verifying the
user’s identity, and also provides authorization data for the user’s access token by determining all
groups of which the user is a member. Because the global catalog is the forestwide location of the
membership of all universal groups, access to a global catalog server is a requirement for Active
Directory authentication. As such, an ideal distribution of the global catalog is to have at least one
global catalog server in each Active Directory site. When a global catalog server is available in a
site, the authenticating domain controller is not required to communicate across a WAN link to
retrieve global catalog information. On domain controllers that are running Windows Server 2003,
universal group membership can be cached so that the domain controller must connect to a global
catalog server across a WAN link only for initial logons in the site; thereafter, universal group
membership can be checked from a local cache. Search clients, however, must always connect to
global catalog servers across the WAN if no global catalog server exists in the client’s site.
This subject describes the functionality of the global catalog and the replication of objects to global
catalog servers in an Active Directory forest.

Global Catalog Architecture


Global catalog server architecture differs from non-global catalog server architecture in its use of
the nonstandard LDAP port 3268, which directs queries to the global catalog. Queries over this port
are formed the same way as any LDAP query, but Active Directory varies the search behavior
according to the port that is used: queries over port 3268 target the global catalog directory
partitions (including the read-only domain directory partitions and the one writable domain directory
partition for which the server is authoritative), and queries over port 389 target only the writable
domain, configuration, application, and schema directory partition replicas stored by the global
catalog server in its role as a domain controller. In addition, domain controllers use the proprietary
replication interface when they contact global catalog servers to retrieve universal group
membership during client logons.
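The following is a minimal sketch of the port difference described above, assuming the third-party ldap3 package; the host name, credentials, and forest root DN are placeholders. It runs the same search against port 389 and against port 3268 on a global catalog server.

# Sketch: run the same LDAP search against port 389 and against the global catalog
# port 3268 on a GC server to compare scope. Assumes the third-party ldap3 package;
# host name, credentials, and the forest root DN are placeholders.
from ldap3 import Server, Connection, SUBTREE

FOREST_ROOT_DN = "DC=contoso,DC=com"
USER = "CN=Administrator,CN=Users," + FOREST_ROOT_DN
PASSWORD = "password-placeholder"

def count_users(port):
    conn = Connection(Server("gc1.contoso.com", port=port), user=USER,
                      password=PASSWORD, auto_bind=True)
    conn.search(FOREST_ROOT_DN, "(objectCategory=person)",
                search_scope=SUBTREE, attributes=["cn"])
    return len(conn.entries)

# Port 389 searches only the partitions this DC holds as writable replicas;
# port 3268 also searches the partial, read-only replicas of every other domain
# in the forest. (Result counts are subject to the server's default size limits.)
print("Port 389 :", count_users(389))
print("Port 3268:", count_users(3268))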
Search clients include Exchange Address Book clients, which use the client MAPI provider
Emsabp32.dll to look up e-mail addresses in the global catalog. The client-side MAPI provider
communicates with the server through the proprietary Name Service Provider Interface (NSPI) RPC
interface.
Windows NT clients use Net APIs to communicate with the Security Accounts Manager (SAM) on the
primary domain controller (PDC) emulator. The PDC emulator, a domain controller operations
master role in Windows 2000 Server and Windows Server 2003 domains, manages search and
replication communication with clients that are running Windows NT.
The relationships between these architectural components are shown in the following diagram.
Descriptions for the major components are provided in the subsequent table.
Global Catalog Architecture
For a more detailed description of LDAP and replication client-server architecture, see “How the
Active Directory Replication Model Works.”
Global Catalog Architecture Components

Component Description

Clients: Global catalog clients, including search clients and Address Book clients as well as domain controllers performing replication and universal group security identifier (SID) retrieval during logon.

Network: The physical IP network.

Interfaces: LDAP over port 389 for read and write operations and LDAP over port 3268 for global catalog search operations. NSPI and replication (REPL) use proprietary RPC protocols. Retrieval of universal group membership occurs over RPC as part of the replication RPC interface. Windows NT 4.0 clients and backup domain controllers (BDCs) communicate with Active Directory through the Security Accounts Manager (SAM) interface.

Directory System Agent (DSA): The directory service component that runs as Ntdsa.dll on each domain controller, providing the interfaces through which services and processes gain access to the directory database.

Extensible Storage Engine (ESE): The directory service component that runs as Esent.dll. ESE manages the tables of records that comprise the directory database.

Ntds.dit database file: The Active Directory data store.
Top of page
Global Catalog Protocols
The following diagram shows the four interfaces into Active Directory and the protocols that
package the data according to their specific applications. These protocols and interfaces are the
same for all domain controllers and are not specific to global catalog servers. The significance for
the global catalog server is that domain controllers use the proprietary RPC replication protocol not
only for replication, but also to contact the global catalog server when retrieving universal group
membership information and when updating the group membership cache when Universal Group
Membership Caching is enabled.
Global Catalog Protocols
The protocols are described in the following table.
Global Catalog Protocols

Protocol Description

Lightweight directory access protocol (LDAP): The primary directory service protocol that specifies directory communications. It runs directly over TCP/IP, and it can also run over User Datagram Protocol (UDP) connectionless transports (UDP access is primarily used by the domain controller Locator process and can also be used to query the rootDSE). Clients use LDAP to query, create, update, and delete information that is stored in a directory service over a TCP connection through the TCP default port 389. Global catalog clients can use LDAP to query Active Directory over a TCP connection through the TCP port 3268. Active Directory supports LDAP v2 (RFC 1777) and LDAP v3 (RFC 2251). LDAP v3 is an industry standard that can be used with any directory service that implements the LDAP protocol. LDAP is the preferred and most common way of interacting with Active Directory.

Remote procedure call (RPC): Protocol for replication (REPL) and domain controller management communications (including global catalog server interactions), NSPI address book communications, and SAM-related communications. RPC is a powerful, robust, efficient, and secure interprocess communication (IPC) mechanism that enables data exchange and invocation of functionality residing in a different process. That different process can be on the same computer, on the local area network (LAN), or across the Internet.

Simple mail transfer protocol (SMTP): Protocol for replication communications when a permanent, “always on” network connection does not exist between two domain controllers. SMTP is used to transport and deliver messages based on specifications in Request for Comments (RFC) 821 and RFC 822. SMTP can replicate only the configuration and schema directory partitions and global catalog read-only replicas (not writable domain data).
For more information about Active Directory protocols, see “How the Data Store Works.”
Top of page
Global Catalog Interfaces
Interfaces for global catalog servers are the Active Directory data store interfaces, shown in the
previous figure and described in the following table.
Global Catalog Data Store Interfaces

Interface Description

LDAP The primary interface for Active Directory access. Directory clients use LDAP v3 to
connect to the DSA through the LDAP interface. The LDAP interface is part of
Wldap32.dll. LDAP v3 is backward compatible with LDAP v2.

REPL The replication management interface that provides functionality for finding data about
domain controllers, converting the names of network objects between different formats,
manipulating service principal names (SPNs) and DSAs, and managing replication of
servers.

NSPI/MAPI Name Service Provider Interface (NSPI) by which Messaging API (MAPI) clients access
Active Directory. Messaging clients gain access to Active Directory by using MAPI address
book providers. For compatibility with existing messaging clients, Active Directory
supports the NSPI/RPC address book provider, which provides directory access, for
example, to find the telephone number of a user.

SAM Proprietary interface for connecting to the DSA on behalf of clients that run
Windows NT 4.0 or earlier. These clients use Windows NT 4.0 networking APIs to connect
to the DSA through SAM. Replication with Windows NT 4.0 backup domain controllers
(BDCs) occurs through the SAM interface as well.
Note
The NSPI (MAPI) interface is provided only for support of legacy Microsoft Outlook clients. Development against this interface is no longer supported.
For more information about Active Directory data store interfaces, see “How the Data Store Works.”
Top of page
Global Catalog Physical Structure
Active Directory is a distributed directory service in which data is stored as replicas on multiple
domain controllers to provide a virtual database that maintains consistency through Active Directory
replication. Domain controllers provide the domainwide distribution of directory data. Global catalog
servers provide the forestwide distribution of directory data.

Global Catalog Partial Attribute Set


In its role as a domain controller, a global catalog server stores one domain directory partition that
has writable objects with a full complement of writable attributes. In its role as global catalog
server, it also stores the objects of all other domain directory partitions in the forest as read-only
objects with a partial set of attributes. The set of attributes that are marked for inclusion in the
global catalog are called the partial attribute set (PAS). An attribute is marked for inclusion in the
PAS as part of its schema definition.
Objects in the schema that define an attribute are attributeSchema objects, which themselves
have an attribute isMemberOfPartialAttributeSet. If the value of that attribute is TRUE, the
attribute is replicated to the global catalog. The replication topology for the global catalog is
generated automatically by the Knowledge Consistency Checker (KCC), a built-in process that
implements a replication topology that is guaranteed to deliver the contents of every directory
partition to every global catalog server.
The attributes that are replicated to the global catalog by default include a base set defined by
Microsoft. Administrators can use the Microsoft Management Console (MMC) Active Directory
Schema snap-in to specify additional attributes to meet the needs of their installation. In the Active
Directory Schema snap-in, you can select the Replicate this attribute to the global catalog
check box to designate an attributeSchema object as a member of the PAS, which sets the value
of the isMemberOfPartialAttributeSet attribute to TRUE.
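Besides the Active Directory Schema snap-in, the PAS membership of each attribute can also be read directly from the schema partition. The following is a minimal sketch, assuming the third-party ldap3 package; the host name, credentials, and forest root DN are placeholders.

# Sketch: list the attributes that are currently in the partial attribute set by
# searching the schema directory partition for attributeSchema objects whose
# isMemberOfPartialAttributeSet value is TRUE. Assumes the third-party ldap3
# package; host name, credentials, and the forest root DN are placeholders.
from ldap3 import Server, Connection, SUBTREE

SCHEMA_DN = "CN=Schema,CN=Configuration,DC=contoso,DC=com"

conn = Connection(Server("dc1.contoso.com"),
                  user="CN=Administrator,CN=Users,DC=contoso,DC=com",
                  password="password-placeholder", auto_bind=True)

conn.search(SCHEMA_DN,
            "(&(objectClass=attributeSchema)(isMemberOfPartialAttributeSet=TRUE))",
            search_scope=SUBTREE, attributes=["lDAPDisplayName"])

for entry in sorted(conn.entries, key=lambda e: str(e["lDAPDisplayName"])):
    print(entry["lDAPDisplayName"])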

Domain Controller and Global Catalog Server Structure


The physical representation of global catalog data is the same as all domain controllers: the Ntds.dit
database stores object attributes in a single file. On a domain controller that is not a global catalog
server, the Ntds.dit file contains a full, writable replica of every object in one domain directory
partition for its own domain, plus the writable configuration and schema directory partitions.
Note
The schema directory partition is writable only on the domain controller that is the schema operations master for the forest.
The following diagram shows the physical representations of the global catalog as a forestwide
resource that is distributed as a database on global catalog servers.
Global Catalog Physical Structure

As shown in the preceding diagram, a global catalog server stores a replica of its own domain (full
and writable) and a partial, read-only replica of all other domains in the forest. All directory
partitions on a global catalog server, whether full or partial, are stored in the directory database file
(Ntds.dit) on that server. That is, there is not a separate storage area for global catalog attributes;
they are treated as additional information in the directory database of the global catalog server.
The following table describes the physical components of the diagram.
Global Catalog Server Physical Components

Physical Component  Description

Active Directory forest: The set of domains that comprise the Active Directory logical structure and that are searchable in the global catalog.

Domain controller: Server that stores one full, writable domain directory partition plus forestwide configuration and schema directory partitions. Global catalog servers are always domain controllers.

Global catalog server: Domain controller that stores one full, writable domain plus forestwide configuration and schema directory partitions, as well as a partial, read-only replica of all other domains in the forest.

Ntds.dit: Database file that stores replicas of the Active Directory objects held by any domain controller, including global catalog servers.
Top of page
Global Catalog Processes and Interactions
In addition to its activities as a domain controller, the global catalog server supports the following
special activities in the forest:
• User logon: Domain controllers must contact a global catalog server to retrieve any SIDs of universal groups that the user is a member of. Additionally, if the user specifies a logon name in the form of a UPN, the domain controller contacts a global catalog server to retrieve the domain of the user.

• Universal and global group caching and updates: In sites where Universal Group Membership Caching is enabled, domain controllers that are running Windows Server 2003 cache group memberships and keep the cache updated by contacting a global catalog server.

• Global catalog searches: Clients can search the global catalog by specifying port 3268 or by using search applications that use this port. Search activities include:

• Validation of references to non-local directory objects: When a domain controller holds a directory object with an attribute that references an object in another domain, this reference is validated by contacting a global catalog server.

• Exchange Address Book lookups: Exchange 2000 Server and Exchange Server 2003 use Active Directory as the address book store. Outlook clients query the global catalog to locate Address Book information.

• Global catalog server creation and advertisement: Global catalog servers register global-catalog-specific service (SRV) resource records in DNS so that clients can locate them according to site. If no global catalog server is available in the site of the user, a global catalog server is located in the next closest site, according to the cost matrix that is generated by the KCC from site link cost settings.

• Global catalog replication: Global catalog servers must either have replication partners for all domains or be able to replicate with another global catalog server. When changes to the PAS occur on, and are replicated between, domain controllers that are running Windows Server 2003, only the updated attributes are replicated. Changes to the PAS that occur on domain controllers that are running Windows 2000 Server prompt a full synchronization of the entire global catalog (all attributes in the PAS are replicated anew to all global catalog servers). For more information about PAS replication, see “Global Catalog Replication” later in this subject.

User Logon
When a domain user logs on interactively to a domain, the contacted domain controller must
retrieve information from a global catalog server under the following conditions:
• The user’s domain is a Windows 2000 native mode domain or a Windows Server 2003 domain at the Windows 2000 native or Windows Server 2003 functional level. In this case, the user might belong to a universal group whose object is stored in a different domain.

• The user’s logon name is a user principal name (UPN), which has the format sAMAccountName@DNSDomainName. In this case, the DNS domain suffix is not necessarily the user’s domain and the identity of the user’s domain must be retrieved from a global catalog server.
Universal Group SID Retrieval
A universal group is a security group that is available in Windows 2000 native mode in a
Windows 2000 domain, and at the Windows 2000 native and Windows Server 2003 domain
functional levels in a Windows Server 2003 domain. During interactive user logon, the
authenticating domain controller retrieves the SIDs that the user’s workstation requires to build the
access token for the user. To retrieve the SIDs of all universal groups to which the user belongs, the
authenticating domain controller must contact a global catalog server. If a global catalog server is
not available in the site when a user logs on to a domain in which universal groups are available,
the computer will use cached credentials to log the user on if the user has previously logged on to
the domain from the same workstation. If the user has not previously logged on to the domain from
the same computer, the user can log on to only the local computer.
Of the three group types that are used to allow access to resources in a forest (domain local, global,
and universal), only universal groups have their membership replicated to the global catalog. The
values of the member attribute of universal group objects that exist in all domains must be
available to an authenticating domain controller because:
• Universal groups can contain members, including individual user accounts and global groups, from any domain in the forest. Therefore, the user who is logging on might have a membership in a universal group that exists in a different domain.

• Universal groups can be added to access control lists in any domain in the forest. Therefore, the user might have permissions on objects in this domain by virtue of membership in a universal group that exists in a different domain.
Contrast this requirement with the domain local group membership. Domain local groups can also
have members from other domains; however, domain local groups can be added to access control
lists in only the domain in which they are created. Therefore, a domain controller can always
determine a user’s membership in all domain local groups that are required for authorization in its
own domain.
For global groups, although these groups can be added to access control lists in any domain, they
can contain accounts from only the domain in which they are created. Therefore, the global group
membership of the user who is logging on can always be checked locally. However, global groups
can be members of universal groups that exist in different domains. When group nesting is in effect
(which has the same availability criteria as availability of universal groups), being a member of a
global group that is itself a member of a universal group can give the user access to resources other
than those allowed by membership in the global group alone.
During the logon process, the authenticating domain controller retrieves a list of global group SIDs
from the user’s domain. If universal groups are available in the user’s domain, the domain controller
passes the list of global group SIDs to the nearest global catalog server. The global catalog server
responds as follows:
• Enumerates the member attribute of all universal groups in the forest and adds universal group SIDs to the user’s SID list as follows:

• All universal groups that contain the user’s SID.

• All universal groups that contain the SID of any of the global groups in the user’s SID list.

• Returns the list, including both universal group SIDs and global group SIDs, to the domain controller.
The authenticating domain controller sends the list of SIDs that is returned from the global catalog
server to the user’s computer, along with domain local group SIDs from the user’s domain. The
user's local computer, which creates the access token for the user, adds the returned SIDs to the
access token.
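The following is a minimal sketch of a related query against the global catalog, assuming the third-party ldap3 package; the host name, credentials, and the account DN are placeholders. It lists the universal groups that directly contain a given account by using the bitwise-AND matching rule on groupType (the universal group flag is 0x8). Unlike the full logon enumeration described above, this checks only direct membership and does not fold in the account's global groups.

# Sketch: query the global catalog (port 3268) for the universal groups that directly
# contain a given account, using the bitwise-AND matching rule (1.2.840.113556.1.4.803)
# on groupType; the universal group flag is 0x8. Assumes the third-party ldap3 package;
# names and credentials are placeholders.
from ldap3 import Server, Connection, SUBTREE

FOREST_ROOT_DN = "DC=corp,DC=contoso,DC=com"
user_dn = "CN=JSmith,CN=Users,DC=noam," + FOREST_ROOT_DN  # hypothetical account

conn = Connection(Server("gc1.corp.contoso.com", port=3268),
                  user="CN=Administrator,CN=Users," + FOREST_ROOT_DN,
                  password="password-placeholder", auto_bind=True)

ldap_filter = ("(&(objectClass=group)"
               "(groupType:1.2.840.113556.1.4.803:=8)"
               "(member=" + user_dn + "))")
conn.search(FOREST_ROOT_DN, ldap_filter, search_scope=SUBTREE,
            attributes=["cn", "objectSid"])

for group in conn.entries:
    print(group["cn"], group["objectSid"])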
For more information about how domain controllers cache group membership, see “Universal Group
Membership Caching” later in this subject. For more information about how SIDs are retrieved and
used in access tokens, see “How Access Tokens Work.” For more information about the
authentication process, see “How the Kerberos Version 5 Authentication Protocol Works.” For more
information about the logon process, see “How Interactive Logon Works.”
UPN Logon
A global catalog server resolves UPNs when the authenticating domain controller does not have
knowledge of the account. For example, if a user’s account is located in noam.corp.contoso.com and
the user logs on with a UPN of JSmith@contoso.com, the domain name in the UPN suffix does not
match the user’s domain. In the Windows Server 2003 and Windows 2000 Server logon screen, you
can either type your user name (sAMAccountName) and select the domain name from the drop-
down list, or you can use a UPN. If you use a UPN, as soon as you type the @ sign, the domain list
becomes unavailable. In this case, domain controllers running Windows Server 2003 or
Windows 2000 Server take the domain name from the UPN suffix. The UPN suffix is often
different from the home domain of the user. In this case, the responding domain controller must
contact a global catalog server to determine the domain of the user.
The following diagram shows the general sequence that occurs when a user logs on to a domain by
using a UPN.
UPN Logon and Global Catalog Interaction
1. Because the domain of the user is not necessarily the same as the UPN suffix, the domain
controller Locator contacts the nearest domain controller according to the site of the client
computer.
2. The contacted domain controller determines whether the DNS name in the UPN suffix is the
domain for which the domain controller is authoritative.
• If the domain name in the UPN suffix matches the domain of the domain controller, the domain controller attempts to process the client authentication. If the user name is not found, the domain controller contacts a global catalog server.

• If the DNS name in the UPN suffix does not match the domain of the domain controller, the domain controller contacts a global catalog server.

3. The global catalog server uses the userPrincipalName attribute of the user object to look up the distinguished name of the user object, and returns this value to the domain controller (a minimal query sketch follows these steps).

4. The domain controller extracts the domain name from the distinguished name of the user and returns this value to the client.

5. The client requests a domain controller for its domain.
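The following is a minimal sketch of the lookup in step 3, assuming the third-party ldap3 package; the host name, credentials, and UPN are placeholders. It searches the global catalog for the UPN and derives the user's home domain from the returned distinguished name.

# Sketch of step 3 above: resolve a UPN to the user's home domain by searching the
# global catalog and deriving the DNS domain name from the returned distinguished
# name. Assumes the third-party ldap3 package; names and credentials are placeholders.
from ldap3 import Server, Connection, SUBTREE

conn = Connection(Server("gc1.corp.contoso.com", port=3268),
                  user="CN=Administrator,CN=Users,DC=corp,DC=contoso,DC=com",
                  password="password-placeholder", auto_bind=True)

upn = "JSmith@contoso.com"
conn.search("DC=corp,DC=contoso,DC=com", "(userPrincipalName=" + upn + ")",
            search_scope=SUBTREE, attributes=["distinguishedName"])

user_dn = conn.entries[0].entry_dn
# The DC= components of the distinguished name identify the user's home domain,
# which can differ from the UPN suffix. (Naive DN splitting; ignores escaped commas.)
domain = ".".join(part[3:] for part in user_dn.split(",") if part.upper().startswith("DC="))
print(user_dn)
print("Home domain:", domain)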
If a Windows Server 2003 forest trust exists between two forests, the default form of a UPN
(sAMAccountName@DnsDomainName) is used for authentication in a different forest. If you create
a forest trust and the second-level DNS domain name (for example, contoso.com) exists in both
forests, the New Trust Wizard detects the conflict and only one forest is authoritative for that name
suffix.
If NTLM-based trust relationships are created between the domains in different forests—as is
required for a trust relationship between a domain in an Active Directory forest and a
Windows NT 4.0 domain, between domains in two Windows 2000 Server forests, or between a
domain in a Windows 2000 Server forest and a domain in a Windows Server 2003 forest—a UPN
cannot be used to log on to a trusting domain that is outside the forest because the UPN is resolved
in the global catalog of only one forest.
Logon Process in a Single-Domain Forest
In a single-domain forest, all domain controllers can service all logon requests, including UPN
logons, without requiring a global catalog server. However, only domain controllers that are
configured as global catalog servers can respond to LDAP traffic over port 3268.
For more information about the logon process, see “Interactive Logon Technical Reference.” For
more information about forest trust relationships, see “Domain and Forest Trusts Technical
Reference.”

Universal Group Membership Caching


In scenarios where remote sites do not have a global catalog server, the need to contact a global
catalog server over a potentially slow WAN connection can be problematic. On domain controllers
that are running Windows Server 2003, the Universal Group Membership Caching feature is
available by default (does not require a specific functional level or domain mode), although it must
be enabled on a per-site basis.
When enabled, this feature allows a domain controller that is running Windows Server 2003 to
cache global group SIDs and universal group SIDs that it retrieves from a global catalog server so
that future logons do not require contacting a global catalog server. This storage is referred to as
“caching,” but the memberships are actually stored in a non-volatile Active Directory value. The
memberships that are written to this value are not lost as a result of a restart or power outage. For
the purposes of this discussion, the term “cache” refers to this value. Group membership is cached
for user accounts and computer accounts.
Caching group memberships in branch site locations has the following potential benefits:
• Faster logon times because authenticating domain controllers no longer need to contact a global catalog server to obtain universal group membership.

• Higher availability because logon is still possible if the WAN link to the site of the global catalog server is unavailable.

• No need to upgrade the hardware of existing domain controllers to handle the extra system requirements necessary for hosting the global catalog.

• Minimized network bandwidth usage because a branch site domain controller does not have to replicate all of the objects located in the global catalog.
Enabling Universal Group Membership Caching
Universal Group Membership Caching can be enabled for a site by using the Active Directory Sites
and Services MMC snap-in to edit the properties of the NTDS Site Settings object (CN=NTDS Site
Settings,CN=TargetSiteName,CN=Sites,CN=Configuration,CN=ForestRootDomain). In Active
Directory Sites and Services, if you click a site object, the NTDS Site Settings object for the site is
visible in the details pane. Right-click the NTDS Site Settings object and then click Properties. In
the NTDS Site Settings Properties dialog box, click Enable Universal Group Membership
Caching.
Note
The options attribute of the NTDS Site Settings object, which controls this feature, has a default value of 0. When only the Universal Group Membership Caching option is enabled, the attribute value is 32 (0x20). However, this attribute is a bit field: the value combines all of the option bits that are set, and an individual setting is tested by performing a bitwise AND of the value and that setting's flag.
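The following is a minimal sketch of enabling the feature directly over LDAP, assuming the third-party ldap3 package; the site names, host name, credentials, and configuration DN are placeholders. The Active Directory Sites and Services snap-in is the usual way to do this; the sketch sets the 0x20 bit on the options attribute and, optionally, records a preferred refresh site in msDS-Preferred-GC-Site.

# Sketch: enable Universal Group Membership Caching for a site by setting the 0x20
# bit (decimal 32) on the options attribute of its NTDS Site Settings object.
# Assumes the third-party ldap3 package; all names and credentials are placeholders.
from ldap3 import Server, Connection, BASE, MODIFY_REPLACE

CONFIG_DN = "CN=Configuration,DC=contoso,DC=com"
SITE = "Branch01"
settings_dn = "CN=NTDS Site Settings,CN=" + SITE + ",CN=Sites," + CONFIG_DN
GROUP_CACHING_FLAG = 0x20

conn = Connection(Server("dc1.contoso.com"),
                  user="CN=Administrator,CN=Users,DC=contoso,DC=com",
                  password="password-placeholder", auto_bind=True)

# Read the current bit field, OR in the caching flag, and write it back.
conn.search(settings_dn, "(objectClass=*)", search_scope=BASE, attributes=["options"])
current = int(conn.entries[0]["options"].value or 0)
conn.modify(settings_dn, {"options": [(MODIFY_REPLACE, [current | GROUP_CACHING_FLAG])]})

# Optional: refresh the cache from a specific site rather than the least-cost site.
preferred_site_dn = "CN=Hub,CN=Sites," + CONFIG_DN
conn.modify(settings_dn, {"msDS-Preferred-GC-Site": [(MODIFY_REPLACE, [preferred_site_dn])]})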
When the feature is enabled for a site, domain controllers that are running Windows Server 2003 in
the site cache both universal group membership and global group membership for first-time logons
and keep the cache updated thereafter. The feature allows specifying the site from which to retrieve
group membership. In the NTDS Site Settings Properties dialog box, you can use the Refresh
cache from list to specify the site to use. The msDS-Preferred-GC-Site attribute stores the
distinguished name of the specified site and controls this setting.
If no site is specified, the closest-site mechanism uses the cost setting on the site link to determine
which site has the least-cost connection to contact a global catalog server.
If the user has not logged on to the domain previously and a global catalog server is not available,
the user can log on to only the local computer.
Note
If a user is the Administrator in the domain (Builtin Administrator account), the user can always log on to the domain, even when a global catalog server is not available.
For more information about closest-site mechanisms, see “How Active Directory Replication
Topology Works” and “How DNS Support for Active Directory Works.”
Global Group and Universal Group SID List
Although the feature is named Universal Group Membership Caching, in fact the domain controller
caches both universal group SIDs and global group SIDs. As described in “Universal Group SID
Retrieval” earlier in this subject, the authenticating domain controller retrieves the list of universal
group SIDs from the global catalog server. The domain controller sends the list of global group SIDs
to the global catalog server so that universal group membership can be ascertained for these
groups as well as for the user or computer account itself. Therefore, both global group SIDs and
universal group SIDs are included in the list of SIDs that is returned from the global catalog server.
The domain controller caches the entire list of SIDs. When the account attempts subsequent logons
in the site, the universal group and global group SIDs for the account are obtained from the cache.
Domain local group SIDs are always obtained from the local directory database.
After a security principal has logged on in a site that has Universal Group Membership Caching
enabled, the group cache for the account on the authenticating domain controller is immediately
populated. However, it can take up to eight hours for other domain controllers in the same site to
populate the group cache. During this time, if the account is authenticated by a domain controller
that has not populated the account’s cache, a global catalog server must be contacted for the logon
to proceed. After eight hours, all domain controllers that are running Windows Server 2003 in the
site can process all subsequent logons by using the cached membership.
Group Cache Communication Between Domain Controllers
Although the cache itself is not replicated between domain controllers, knowledge that an account
has logged on in the site is replicated to all other domain controllers in the domain by means of a
site affinity attribute (msDS-Site-Affinity) of the respective user or computer object. This
multivalued attribute identifies the sites in which the account has logged on and includes a
timestamp for each site. The domain controllers in the domain use the replicated site affinity
attribute to determine which account memberships are cached for their site, and then independently
populate their copy of each account’s cache by contacting a global catalog server. For more
information about how the cache is populated, see “How the Cache is Populated at First Logon” and
“How the Cache is Refreshed” later in this subject. For more information about how this attribute is
updated, see “Group Cache Storage” later in this subject.
Cache Refresh and the Availability of Group Changes
The caching of group SIDs in this way, including both universal group and global group SIDs, can
lead to unexpected behavior when an administrator modifies the universal group or global group
membership for an account and expects the change to be reflected on all domain controllers in the
domain after the first replication cycle following the change. Even if the change is made on the
domain controller that authenticates the account or has been received through replication, the
membership in the cache is used instead of the value of the object member attribute. By default,
domain controllers update the membership cache for accounts in the site every eight hours. As a
result, changes to the global group or universal group membership of an account can take up to
eight hours to be reflected on domain controllers in a site where Universal Group Membership
Caching is enabled.
To refresh the cache, domain controllers running Windows Server 2003 send a universal group
membership confirmation request to a global catalog server. There is no limit to the number of
accounts that can be cached, but a maximum of 500 account caches can be updated during any
cache refresh.
Note
Universal Group Membership Caching can be enabled in a site that has domain controllers that are running Windows 2000 Server. If Universal Group Membership Caching is enabled in such a site, users might experience inconsistent group membership, depending on which domain controller authenticates them. For this reason, it is recommended that you either upgrade all domain controllers that are running Windows 2000 Server to Windows Server 2003 when group caching is enabled in a site, or remove them.
Because the group memberships are cached, there is a period of latency before group membership
changes are reflected in an account’s access token. When group membership changes, the changes
are not reflected in the access token until the following events have occurred (in order):
1. The changes are replicated to the global catalog server that is used for the refresh of the cache.
2. The cache on the domain controllers in the account’s site is refreshed. Although the cache
refresh is not a replication event, the process uses the site link schedule. Therefore, a closed
site link schedule postpones the cache refresh until the schedule opens.
3. The user has logged off and back on again (user account is authenticated) or the computer has
restarted (computer account is authenticated).
When an access token is created during logon, the token contents are static until that user logs off
and logs on again. Furthermore, as long as Universal Group Membership Caching is enabled, an
account’s memberships are cached, and the cache entry has not expired, the cache entry is used
during logon. If changes have been made to group membership and the refresh task has not run,
the changes are not reflected until either the cache entry expires or the refresh task runs and
processes the cache entry.
The length of the latency period depends on when the next refresh task is scheduled to run. The refresh task reschedules itself for its next refresh during each current refresh run, as follows (a simplified sketch of this loop appears after the list):

• Beginning with the current time plus the registry-configured refresh interval, the domain controller consults the replication schedule on the site link that connects its site to the site of the closest (or designated) global catalog server.

• If the site link schedule allows replication at the projected time, the refresh task is scheduled to run at this time.

• If the site link schedule does not allow replication at the projected time, the scheduling algorithm steps forward one minimum replication interval (15 minutes) and checks the schedule again.

• This process is repeated until an open replication interval is found. If no open interval is found in the seven-day schedule, the refresh task is scheduled to run by using a time calculated as the current time plus the registry-configured refresh interval. In this case, event ID 1671 is logged as a warning message that indicates the group membership cache refresh task was unable to find the next available time slot of connectivity to the site of the global catalog server.
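The following is a simplified, illustrative sketch of the scheduling loop described above. The schedule_allows callable is a stand-in for the real site link replication schedule; the refresh interval, the 15-minute step, and the seven-day horizon come from the description above.

# Simplified sketch of the rescheduling loop described above. The schedule_allows
# callable stands in for the real site link replication schedule.
from datetime import datetime, timedelta

def next_refresh_time(now, refresh_interval, schedule_allows):
    """Return the next time the cache refresh task should run."""
    candidate = now + refresh_interval
    deadline = candidate + timedelta(days=7)           # search the seven-day schedule
    while candidate <= deadline:
        if schedule_allows(candidate):                 # site link schedule is open
            return candidate
        candidate += timedelta(minutes=15)             # step one minimum replication interval
    # No open interval found: fall back to now + refresh interval
    # (the real task also logs event ID 1671 as a warning in this case).
    return now + refresh_interval

# Example: a schedule that is open only between 22:00 and 04:00.
overnight_only = lambda t: t.hour >= 22 or t.hour < 4
print(next_refresh_time(datetime(2003, 3, 28, 9, 0), timedelta(hours=8), overnight_only))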
If faster updates are required, an administrator can initiate a cache refresh manually on the domain
controllers in the user’s site. For more information about refreshing the user cache, see “Registry
Settings that Affect Cache Refresh and Site Affinity Limits” later in this subject.
Determining the Site to Use for Populating and Refreshing the Cache
You can designate a site from which to initially populate and subsequently refresh the group
membership cache. The Universal Group Membership Caching feature user interface (UI) contains
an option to select a site from the list of existing sites. When a site has been selected and the cache
on a domain controller must be populated for the first time or updated, the domain controller
contacts a global catalog server in the designated site. If no site is designated, site link costs are
evaluated to determine the lowest-cost site that contains a global catalog server. The site link cost
matrix is supplied by the Intersite Messenger (ISM) service.
The UI that you use to designate a preferred site for cache refresh does not check for the presence
of a global catalog server in the selected site. Therefore, it is possible to designate a refresh site
that does not contain a global catalog server. In this case, or in any case where a refresh site is
designated but a global catalog server does not respond, the domain controller uses the site link
cost matrix and logs event ID 1668 in the Directory Service event log, which indicates that the
group membership cache refresh task did not locate a global catalog in the preferred site, but was
able to find a global catalog in the following available site. The event lists the named preferred site
and the actual site that was used.
Group Cache Storage
Cached group membership is stored as additional attributes of user and computer objects. Three
new attributeSchema objects were added to the Windows Server 2003 schema for the user object
class (and inherited by the computer object class) to support this feature:
msDS-Cached-Membership: (cached membership) A binary large object that contains both universal and global group memberships (the group SIDs) for the user. This attribute has the following characteristics:
• Is single valued.
• Is not indexed.
• Can be deleted.
• Cannot be written.
• Is not replicated.
msDS-Cached-Membership-Time-Stamp: (last refresh time) Contains the time that the cached membership was last updated, either by the first logon or by a refresh. This attribute is used for the “staleness” check. The maximum period that is tolerated when using a cached group membership is called the staleness interval. The staleness interval, which is set in the registry (7 days by default), is measured against the current time and the last refresh time. If the timestamp indicates that the cache is older than the staleness interval allows, the cached membership is invalidated and a global catalog server is required for logon. This attribute has the following characteristics:
• Is large integer, time valued.
• Is indexed.
• Can be deleted.
• Cannot be written.
• Is not replicated.
msDS-Site-Affinity: Identifies the site(s) where the account has logged on plus a timestamp that indicates the start time for the cached logon in the respective site. Presence of a value in this attribute causes automatic population of group memberships and refresh at every refresh interval. When a domain controller refreshes its cached memberships (every 8 hours by default), the timestamp is used for removing accounts from the cache that have not logged on within the site affinity time limit (the cache expires). To avoid replication of this attribute every time the account logs on, the timestamp is updated only when the age exceeds 50 percent of the age limit that is set in the registry (180 days, by default) and one of the following actions occurs:
• The account logs on and is authenticated by a domain controller.
• A user changes his or her account password. This update ensures that users who go for extended periods without logging on have their site affinity values updated.
This attribute has the following characteristics:
• Is multivalued.
• Is indexed.
• Can be deleted.
• Can be written.
• Is replicated.
Note
You can use ADSI Edit in Windows Support Tools to clear the cached entries for an account by deleting the values in msDS-Cached-Membership and msDS-Cached-Membership-Time-Stamp from the user or computer object. The attribute values are repopulated at the next logon
or cache refresh, whichever comes first.
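The same cleanup can also be scripted over LDAP. The following is a minimal sketch, assuming the third-party Python ldap3 package; the server name, credentials, and user DN are placeholders, not values from this documentation.

from ldap3 import Server, Connection, MODIFY_DELETE, NTLM

# Placeholder server, credentials, and DN; replace with values from your own forest.
server = Server('dc01.corp.contoso.com')
conn = Connection(server, user='CORP\\admin', password='********',
                  authentication=NTLM, auto_bind=True)

user_dn = 'CN=Jane Doe,OU=Branch Users,DC=corp,DC=contoso,DC=com'
# Deleting all values of the two cached-membership attributes mirrors what ADSI Edit
# does interactively; the values repopulate at the next logon or cache refresh.
conn.modify(user_dn, {
    'msDS-Cached-Membership': [(MODIFY_DELETE, [])],
    'msDS-Cached-Membership-Time-Stamp': [(MODIFY_DELETE, [])],
})
print(conn.result)  # 'success' if the domain controller accepted the change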
Registry Settings that Affect Cache Refresh and Site Affinity Limits
Registry settings on each domain controller determine the limits that are imposed on group
membership caches. Entries under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\ can be
used to manage the cache, as shown in the following table.
Changes to these registry settings take effect the next time the refresh task runs.
Note
In the following table, some of the entry names contain the string “(minutes)”. Note that this string is part of the entry name and must be included when creating the entry. For example:
• The value name Cached Membership Refresh Interval (minutes) is correct.
• The value name Cached Membership Refresh Interval is incorrect.
Registry Entries Used to Configure Caching Behavior

Cached Membership Site Stickiness (minutes)
Type: DWORD. Default value: 172800 (value is in minutes; this setting equals 180 days).
Notes: Defines how long the site affinity will remain in effect. The site affinity value is updated when half of the period defined by this value has expired. If an account has not logged on with a domain controller for a period of one half of this value or longer, the account is removed from the list of accounts whose memberships are being refreshed. The default value is recommended.

Cached Membership Staleness (minutes)
Type: DWORD. Default value: 10080 (value is in minutes; this setting equals 7 days).
Notes: Determines the maximum staleness value when using cached group membership. The account cannot log on if the cached membership list is older than the staleness value and if no global catalog server is available. The default value is recommended.

Cached Membership Refresh Interval (minutes)
Type: DWORD. Default value: 480 (value is in minutes; this setting equals 8 hours).
Notes: Defines the length of time between group membership cache refreshes. This value should be changed to synchronize once a day (1440 minutes). For dial-up connections, you might want a higher value than 24 hours. Lowering the value to increase the frequency of cache refresh is not recommended because it causes increased WAN traffic, potentially defeating the purpose of Universal Group Membership Caching.

Cached Membership Refresh Limit
Type: DWORD. Default value: 500.
Notes: Defines the maximum number of user and computer accounts that are refreshed. Increase this setting only if event ID 1669 occurs in the event log or you have more than 500 users and computers in a branch. If the number of users and computers in a branch exceeds 500, a general recommendation is to either place a global catalog server in the branch or increase the Cached Membership Refresh Limit above 500. Be aware that increasing the limit might incur more WAN traffic than that caused by global catalog update traffic.

SamNoGcLogonEnforceKerberosIpCheck
Type: DWORD. Default value: 0.
Notes: By default, allows site affinity to be tracked for Kerberos logons that originate outside the site. A value of 1 configures SAM so that it does not give site affinity to Kerberos logon requests that originate outside the current site. This option should be set to 1 on domain controllers in branch sites to prevent logon requests from outside the site from being given affinity for the local site. This setting prevents an account’s affinity from being changed during the logon process when connecting to a Kerberos key distribution center (KDC) outside of the account’s site.

SamNoGcLogonEnforceNTLMCheck
Type: DWORD. Default value: 0.
Notes: A value of 1 configures SAM to not give site affinity to NTLM logon requests that are network logon requests. This setting reduces the number of accounts with site affinity by excluding those that simply accessed network resources (by using NTLM). This option should not be enabled if you use older clients that must authenticate from outside the branch to local resources in the branch.
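To illustrate how these entries are handled, the following minimal Python sketch uses the standard winreg module and assumes it is run locally on a domain controller with sufficient rights; note that the value name includes the “(minutes)” suffix exactly as shown in the table. The commented-out SetValueEx line is an example only, not a recommendation.

import winreg

KEY_PATH = r'SYSTEM\CurrentControlSet\Services\NTDS\Parameters'
VALUE_NAME = 'Cached Membership Refresh Interval (minutes)'  # "(minutes)" is part of the name

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
    try:
        current, _ = winreg.QueryValueEx(key, VALUE_NAME)
        print(f'Current refresh interval: {current} minutes')
    except FileNotFoundError:
        print('Entry not present; the built-in default of 480 minutes applies')
    # Example only: set the interval to once a day (1440 minutes).
    # winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 1440)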
Methods of Refreshing the Cached Memberships
You can refresh cached memberships on a single domain controller.
For a one-time, immediate cache refresh:
• Use Ldp.exe (Windows Support Tools) to modify the operational attribute updateCachedMemberships on the rootDSE with a value of 1. Adding a value of 1 to this attribute instructs the local domain controller to perform the update. If the site link schedule allows replication at the time you modify the attribute, this update occurs right away. This method is the preferred method for updating a single domain controller because it does not require restarting the domain controller. For information about using Ldp to modify this attribute, see the Note below.
-or-
• Restart the domain controllers in the site to restart the cache refresh interval, which triggers a cache refresh.
Note
Use the following procedure to modify the updateCachedMemberships operational attribute. To perform this operation, the user needs the control access right "Refresh Group Cache for
Logons" on the NTDS Settings object for the domain controller. Default access is granted to
System, Domain Admins, and Enterprise Admins.
1. Start Ldp.exe and bind to the target domain controller where the cache reset is to be
performed. (Do not select Tree view in the View menu.) When first binding to a domain
controller with Ldp, the default location is rootDSE. You can view the attributes for rootDSE in
the details pane. However, operational attributes are not listed.
2. On the Browse menu, click Modify.
3. In the Modify dialog box, in the Edit Entry Attribute box, type
updateCachedMemberships. In the Values box, type 1. Be sure to leave the Dn box blank.
4. Click Enter. The command should appear in the entry list.
5. Click Run. If the operation was successful, Ldp will report “Modified” in the output.
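The same modify operation can be issued programmatically. The sketch below assumes the third-party Python ldap3 package and placeholder server and credential names; the account used still needs the "Refresh Group Cache for Logons" control access right described above.

from ldap3 import Server, Connection, MODIFY_ADD, NTLM

# Placeholder server name and credentials.
server = Server('dc01.corp.contoso.com')
conn = Connection(server, user='CORP\\admin', password='********',
                  authentication=NTLM, auto_bind=True)

# The DN is blank because the target is the rootDSE; adding the value 1 to the
# updateCachedMemberships operational attribute asks this domain controller to refresh
# its group membership cache, as the Ldp.exe procedure above does.
conn.modify('', {'updateCachedMemberships': [(MODIFY_ADD, ['1'])]})
print(conn.result)  # 'success' indicates the refresh request was accepted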
Method of Clearing the Cached Memberships
You can clear all cached memberships on all domain controllers in a site. However, doing so can affect performance. The need to clear the cached memberships might arise when too many accounts are cached, which prevents all account caches from being refreshed during a single refresh run. For example, sites that have many transient accounts might exceed the 500-account refresh limit.
If you have more than 500 accounts cached and you want to clear them for all domain controllers in
the site, you can do so by editing the registry.
Note
If you must edit the registry, use extreme caution and be sure that you back it up first. Registry information is provided here as a reference for use by only highly skilled directory service
administrators. Do not directly edit the registry unless, as in this case, no Group Policy setting or
other Windows tools can accomplish the task. Modifications to the registry are not validated by
the registry editor or by Windows before they are applied, and as a result, incorrect values can be
stored. Storage of incorrect values can result in unrecoverable errors in the system.
On one domain controller, you can set the Cached Membership Site Stickiness (minutes) registry entry to 0 and then initiate a cache refresh by using the operational attribute method on that domain controller, as described in “Methods of Refreshing the Cached Memberships” earlier in this subject. The 0 value in the setting causes the cache to be purged: the values in all three attributes (msDS-Cached-Membership, msDS-Cached-Membership-Time-Stamp, and msDS-Site-Affinity) are cleared. After the site stickiness attribute deletion has replicated within the site, you can then initiate cache refresh on the other domain controllers in the site. Remember to return the value in Cached Membership Site Stickiness (minutes) to its default value.
Diagnostic Logging Levels and Events
Diagnostic logging for Universal Group Membership Caching can be set in the 20 Group Caching registry entry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics.
Data type: REG_DWORD
Value range: 0-5
Default value: 0
Significant events are reported at logging level 2, with many additional events reported at logging level 5. For troubleshooting, set the logging level to 5.
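As a brief illustration, the following Python sketch (standard winreg module, run on the domain controller itself) raises the logging level for troubleshooting and then restores the default; the entry name 20 Group Caching is used exactly as documented above.

import winreg

KEY_PATH = r'SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics'

# Raise "20 Group Caching" to level 5 while troubleshooting.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, '20 Group Caching', 0, winreg.REG_DWORD, 5)

# ... reproduce the problem and review the Directory Service event log ...

# Restore the default level of 0 when finished.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, '20 Group Caching', 0, winreg.REG_DWORD, 0)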
Sample Events at Logging Level 0
Event ID 1667: The group membership cache refresh task detected that the following site in which a
global catalog was found is not one of the cheapest sites, as indicated by the published site link
information.
Event ID 1668: The group membership cache refresh task did not locate a global catalog server in
the preferred site, but was able to find a global catalog server in the following available site.
Preferred site: <site name> Available site: <site name>
Event ID 1669: The group membership cache refresh task has reached the maximum number of
users for the local domain controller.
Event ID 1670: The group membership cache refresh task is behind schedule. Consider forcing a
group membership cache update.
Sample Events at Logging Level 2
Event ID 1776 internal event: The group membership cache task is starting.
Event ID 1777 internal event: The group membership cache task has finished. The completion
status was 0, and the exit Internal ID was ######.
Event ID 1779 internal event: The Global Catalog Domain Controller <dcname> in site <site
name>, domain <domain name> will be used to update the group memberships.
Event ID 1781 internal event: By examining the published connectivity information, the group
membership cache task has determined site <site name> is a site with a low network cost to
contact. The task will schedule itself based on the schedule of network connectivity to this site.
Event ID 1782 internal event: By examining the published connectivity information, the group
membership cache task cannot find an efficient site to obtain group membership information. The
task will run using the global catalog server that is closest, as determined by the Net Logon locator,
and will schedule itself based on a fixed period.
Event ID 1842 internal event: The following site link will be used to schedule the group membership
cache refresh task. Site link: <distinguished name of site link>
Sample Events at Logging Level 5
Event ID 1778 internal event: The group membership cache task will run again in xx minutes.
Event ID 1784 internal event: The group membership cache task determined that site
<distinguished name of site> does not have a global catalog server.
How the Cache is Populated at First Logon
By default, the caching attributes on the user and computer objects are not populated. The
following diagram shows how the domain controller builds the list of SIDs to be cached and where in
the process the caching attributes are populated during the user’s first logon in the site. This
example assumes that the user is in a site that has Universal Group Membership Caching enabled,
the domain of the client workstation is the same as the domain of the user, and the domain has a
functional level that allows universal groups.
Universal Group Membership Caching Process at First Logon
The following events occur at each step in the preceding diagram:
1. A user logs on in a site where Universal Group Membership Caching is enabled. The user is
authenticated by the domain controller as being the requesting user.
2. The domain controller checks the values of the three caching attributes of the user object.
3. Finding that the attributes are not populated, the domain controller checks its local directory
and retrieves the SID of the user (including SID history, if available) and the SIDs of all global
groups to which the user belongs.
4. The domain controller sends this list of SIDs to the global catalog server. The global catalog
server checks the universal group memberships of the user and all global groups in the list. The
global catalog server returns the list of combined universal group and global group SIDs to the
domain controller.
5. The user’s cache attributes are populated as follows:
   a. The combined list of global group and universal group SIDs is recorded in the msDS-Cached-Membership attribute.
   b. The time is recorded in the msDS-Cached-Membership-Time-Stamp attribute (this time indicates the last time the cache was updated; on the first logon, it also happens to be the time the user logged on).
   c. If SamNoGcLogonEnforceNTLMCheck or SamNoGcLogonEnforceKerberosIpCheck, or both, are enabled on the domain controller, the msDS-Site-Affinity attribute is ignored.
   d. If the GUID for the local site exists in the msDS-Site-Affinity attribute and the settings in step c are not enabled, the timestamp value in msDS-Site-Affinity is evaluated as follows: If the value indicates an age that is less than half the value in Cached Membership Site Stickiness (minutes), the logon proceeds. If the timestamp indicates an age that is greater than half the value in Cached Membership Site Stickiness (minutes), or if the attribute is not populated, the site GUID and time are written to the msDS-Site-Affinity attribute, and the logon proceeds.
6. The domain controller checks its local directory for any domain local groups to which the user
belongs and adds domain local group SIDs to its list of global group and universal group SIDs.
Note
The process for accomplishing Step 6 differs depending on whether the domain of the client computer is the same as the domain of the user and, if not, whether the client computer is joined to a domain that has a mixed domain mode or functional level, or a native domain mode or functional level. For more information about how SIDs are retrieved and added to access tokens, see “Access Tokens Technical Reference.”
7. The domain controller sends the entire list of SIDs to the client computer, where the LSA
retrieves SIDs of the user’s built-in group memberships and constructs the user’s access token.
Note
Global catalog servers in a site where caching is enabled do not populate the msDS-Cached-Membership and msDS-Cached-Membership-Time-Stamp attributes of users they
authenticate. Because global catalog servers are already aware of universal group memberships
throughout the forest and global group memberships for the domain, there is no need for them to
use these attributes.
How the Cache is Used for Subsequent Logons
When Universal Group Membership Caching is enabled in the site, the following sequence occurs
during account logon:
1. The account is authenticated by the contacted domain controller.
2. The domain controller checks for the presence of values in the caching attributes of the
respective user or computer object. If the attribute values are present, the domain controller
updates the values as follows:
1. If the value in the msDS-Cached-Membership-Time-Stamp attribute indicates an age
that is less than the staleness interval (Cached Membership Staleness (minutes),
default seven days), the domain controller reads the group SIDs from the msDS-
Cached-Membership attribute and the logon proceeds.
2. If the value in msDS-Cached-Membership-Time-Stamp indicates an age greater than the staleness interval, the domain controller contacts a global catalog server to request the universal group membership. If a global catalog server cannot be contacted, the logon is denied.
3. If the value in the timestamp in msDS-Site-Affinity is equal to or older than 50 percent
of the site stickiness setting, the timestamp is updated with the current time.
3. The domain controller returns the group SIDs from the cache plus any domain local group SIDs
to the client computer and the logon proceeds.
Note
At no time during a successful logon does the local domain controller check with a global catalog server to see if the account’s group membership has changed. Changes to an account’s group
membership are not reflected in the account’s access token until the local domain controller
performs a cache refresh. The default amount of time between cache refreshes is eight hours.
This interval could result in an inconsistent view of group membership if the account was
authenticated by a domain controller in a different site. This discrepancy might also confuse
administrators who are unfamiliar with how universal group membership caching works.
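The decision just described can be summarized in a simplified sketch. This is an illustration of the logic only, not how the domain controller is implemented; the intervals are the registry defaults from the table earlier in this subject, and the timestamps stand in for the msDS-* attribute values.

from datetime import datetime, timedelta

STALENESS = timedelta(minutes=10080)          # Cached Membership Staleness (minutes) default
SITE_STICKINESS = timedelta(minutes=172800)   # Cached Membership Site Stickiness (minutes) default

def evaluate_cached_logon(now, last_refresh, affinity_timestamp, gc_reachable):
    """Return (logon_allowed, rewrite_affinity) for a cached logon attempt."""
    if now - last_refresh < STALENESS:
        logon_allowed = True             # fresh cache: use the cached group SIDs
    else:
        logon_allowed = gc_reachable     # stale cache: a global catalog server is required
    # The site affinity timestamp is rewritten only after half the stickiness period
    # has elapsed, which limits replication of the attribute.
    rewrite_affinity = (now - affinity_timestamp) >= SITE_STICKINESS / 2
    return logon_allowed, rewrite_affinity

now = datetime.utcnow()
print(evaluate_cached_logon(now, now - timedelta(days=2), now - timedelta(days=100), False))
# (True, True): the cache is fresh, and the affinity timestamp is due for an update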
How the Cache is Refreshed
The cache refresh process occurs automatically on every domain controller that is running Windows
Server 2003 and has received replication of the msDS-Site-Affinity attribute update for a user or
computer object or has already cached group memberships. The refresh operation occurs differently
depending on whether a site is selected for the preferred refresh site.
Cache Refresh Process When a Preferred Refresh Site is Not Selected
When the refresh interval expires, the domain controller proceeds as follows:
1. Lists all the site links that connect the domain controller’s site to a site that hosts a global catalog server, in increasing order of the cost values on the site link objects.
2. Selects the lowest-cost site link and schedules the refresh by using the site link schedule. If no site link schedule is set, then replication is always available. Depending on the schedule, the refresh proceeds as follows:
• If the schedule currently allows replication, the domain controller begins the refresh.
• If the schedule does not currently allow replication, the domain controller schedules the refresh to begin when the schedule allows replication.
Note
When the refresh is postponed according to the site link schedule, a random stagger in the range of
0-15 minutes is added to the computed start time. Schedule staggering has the effect of ensuring
that domain controllers begin their refresh at slightly different times, thereby improving load
balancing on the global catalog server.
3. When the schedule allows replication, begins the refresh by locating and binding to a global catalog server in the next closest site.
4. Removes accounts that have a populated cache but no site affinity. Cached entries that do not include a populated msDS-Site-Affinity value are purged at this time. A maximum of 64 entries are removed at a time. If more entries need to be removed, they are removed during subsequent refreshes.
5. Removes any account whose site affinity matches the local site, but whose site affinity time period has expired. In this case, the values in the three cache attributes are deleted and this account no longer has a group membership cache on the domain controller.
6. Builds a list of accounts by querying the global catalog for all accounts that have GUIDs in their msDS-Site-Affinity attribute that match the GUID of the domain controller’s site.
7. Updates the cache attributes of the accounts in the list by querying the global catalog for each account’s group membership, as follows:
• Updates the msDS-Cached-Membership attribute with the account’s group membership SIDs.
• Updates the msDS-Cached-Membership-Time-Stamp attribute with the time of refresh.
8. Repeats the process for each account until all accounts are updated or until the refresh limit of 500 accounts is reached. If the refresh limit is reached, the domain controller logs event ID 1669 in the Directory Service event log, and the refresh stops.
9. Checks to ensure that the refresh task has not fallen behind in terms of the maximum period allowed for an account’s cached membership list to be valid for logons. This step is accomplished by locating the record with the oldest msDS-Cached-Membership-Time-Stamp value and comparing the timestamp value to the staleness interval (seven days by default). If the entry is more than seven days old, the domain controller logs event ID 1670, indicating that the refresh task has fallen behind.
Note
When the domain controller encounters the refresh limit, it stops updating cache entries. Because the order in which the updates occur cannot be predicted, there is no way to ensure
that the caches of the most recent accounts are updated. The staleness check in step 9
checks all cached entries, even those excluded due to exceeding the refresh limit. After about
one week, the non-updated cache entries will become stale and cause the falling behind error
to be reported in the event log.
Cache Refresh Process When a Preferred Refresh Site is Selected
When a site is selected to always be used for refreshing the group membership cache, the domain controller does not need to order the site links according to cost, but simply contacts a global catalog server in the specified site. However, if no global catalog server is available at the time the refresh is attempted, the domain controller logs event ID 1782, indicating that a domain controller could not be found in the preferred site, and uses the site link cost to locate a global catalog server in the next closest site.

Inconsistent Access to Domain-Based Objects on Global Catalog Servers
When specifying Read or List permissions for domain data that is also replicated to the global
catalog, use global groups rather than domain local groups because the access token that is created
for the user by the global catalog server does not necessarily contain information about domain
local groups to which the user belongs.
When a user connects to a global catalog server, an access token is created for the user that is used
in subsequent access decisions on the server. If the user is a member of a domain other than the
domain of the global catalog server, the global catalog server contacts a domain controller in the
user’s domain to authenticate the user and retrieve authorization data. The domain controller
returns information about the user, including the SIDs of global groups in the user’s domain to
which the user belongs. The domain controller from the user's domain does not return domain local
group SIDs to the global catalog server.
Universal group membership is retrieved from the global catalog, and the global catalog server
looks to its own domain (which is not necessarily the domain of the user) for any domain local
groups to which the user belongs. Thus the access token for the user on the global catalog server
contains the global groups and universal groups to which the user belongs, as well as any domain
local groups to which the user belongs in the domain of the global catalog server.
The effect of a missing domain local group SID in the user’s access token is that the user’s access to
global catalog data is unpredictable. For example, if access to the homePhone attribute of a user
object is restricted by a domain local group in the user's domain, and the user is a member of that
group, the user is able to view that attribute in the global catalog when both of the following
conditions are true:

• The homePhone attribute is replicated to the global catalog.
• The global catalog server to which the user connects does not host a writable copy of the user’s domain.
Similarly, if the user is a member of a domain local group in a domain other than the domain hosted
by the global catalog server, and that group is granted read access to the homePhone attribute,
the user cannot view that attribute in the global catalog.

Global Catalog Searches
The location of an object in Active Directory is provided by the distinguished name of the object,
which includes the full path to a replica of the object, culminating in the directory partition that
holds the object. However, the user or application does not always have the distinguished name of
the target object, or even the domain of the object. To locate objects without knowing the domain,
the most commonly used attributes of the object are replicated to the global catalog. By using these
object attributes and directing the search to the global catalog, requesters can find objects of
interest without having to know their directory location. For example, to locate a printer, you can
search according to the building of the printer. To locate a person, you can provide the name of the
person. To locate all people who are managed by someone, you can provide the manager’s name.
LDAP Search Ports
Active Directory uses the Lightweight Directory Access Protocol (LDAP) as its access protocol. LDAP
search requests can be sent and received by Active Directory on port 389 (the default LDAP access

port) and port 3268 (the default global catalog port). LDAP traffic that is secured by using Secure Sockets Layer (SSL) accesses ports 636 and 3269, respectively. In this discussion, search behavior that applies to ports 389 and 3268 also applies to LDAP queries over ports 636 and 3269.
When a search request is sent to port 389, the search is conducted on a single domain directory
partition. If the object is not found in that domain or the schema or configuration directory
partitions, the domain controller refers the request to a domain controller in the domain that is
indicated in the distinguished name of the object.
When a search request is sent to port 3268, the search includes all directory partitions in the forest
— that is, the search is processed by a global catalog server. If the request specifies attributes that
are part of the PAS, the global catalog can return results for objects in any domain without
generating a referral to a domain controller in a different domain. Only global catalog servers
receive LDAP requests through port 3268. Certain LDAP client applications are programmed to use
port 3268. Even if the data that satisfies a search request is available on a regular domain
controller, if the application launching the search uses port 3268, the search necessarily goes to a
global catalog server.
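The difference is easy to observe from an LDAP client. The following is a minimal sketch, assuming the third-party Python ldap3 package; the server, credentials, search base, and filter are placeholders.

from ldap3 import Server, Connection, SUBTREE, NTLM

def search(port):
    # Placeholder host and credentials; the same server is queried on two different ports.
    server = Server('gc01.corp.contoso.com', port=port)
    conn = Connection(server, user='CORP\\admin', password='********',
                      authentication=NTLM, auto_bind=True)
    conn.search('DC=corp,DC=contoso,DC=com', '(sAMAccountName=jdoe)',
                search_scope=SUBTREE, attributes=['displayName'])
    return conn.entries

print(search(389))    # single domain directory partition; may produce referrals
print(search(3268))   # forest-wide search against the partial attribute set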
Search Criteria That Target the Global Catalog
Searches are directed to a global catalog server under the following conditions:
• You specify port 3268 or 3269 in an LDAP search tool.
• You select Entire Directory in a search-scope list in an Active Directory snap-in or search tool, such as Active Directory Users and Computers or the Search command on the Start menu.
• You write the distinguished name as an attribute value, where the distinguished name represents a nonlocal object. For example, if you are adding a member to a group and the member that you are adding is from a different domain, a global catalog server verifies that the user object represented by the distinguished name exists and obtains its GUID.
Characteristics of a Global Catalog Search
The following characteristics differentiate a global catalog search from a standard LDAP search:
• Global catalog queries are directed to port 3268, which explicitly indicates that global catalog semantics are required. By default, ordinary LDAP searches are received through port 389. If you bind to port 389, even if you bind to a global catalog server, your search includes a single domain directory partition. If you bind to port 3268, your search includes all directory partitions in the forest. If the server you attempt to bind to over port 3268 is not a global catalog server, the server refuses the bind.
• Global catalog searches can specify a non-instantiated search base, indicated as "com" or " " (blank search base).
• Global catalog searches cross directory partition boundaries. The extent of the standard LDAP search is the directory partition.
• Global catalog searches do not return subordinate referrals. If you use port 3268 to request an attribute that is not in the global catalog, you do not receive a referral to it. Subordinate referrals are an LDAP response; when you query over port 3268, you receive global catalog responses, which are based solely on the contents of the global catalog. If you query the same server by using port 389, you receive referrals for objects that are in the forest but whose attributes are not referenced in the global catalog.
Note
A referral to a directory that is external to Active Directory can be returned by the global catalog if a base-level search for an external directory is submitted and if the distinguished name of the external directory uses the domain component (dc=) naming attribute. This referral is returned according to the ability of Active Directory to construct a Domain Name System (DNS) name from the domain components of the distinguished name and is not based on the presence of any cross-reference object. The same referral is returned by using the LDAP port; it is not specific to the global catalog.
Because the member attribute is not replicated to the global catalog for all group types, and because the memberOf attribute derives its value by referencing the member attribute (called back links and forward links, respectively), the results of searches for members of groups and for the groups to which a member belongs can vary, depending on whether you search the global catalog (port 3268) or the domain (port 389), the kind of groups that the user belongs to (global groups or domain local groups), and whether the user belongs to universal groups outside the local domain.
For more information about global catalog searches and the implications of searching on back links
and forward links, see “How Active Directory Searches Work.”
The Infrastructure Master and Phantom Records
An attribute that has a distinguished name as a value references (points to) the named object.
When the referenced object does not actually exist in the local directory database because it is in a
different domain, a placeholder record called a phantom is created in that database as the object
reference. Because there is a reference to it, the referenced object must exist in some form, either
as the full object (if the domain controller stores the respective domain directory partition) or as an
object reference (when the domain controller does not store that domain).
The infrastructure master is a single domain controller in each domain that tracks name changes of
referenced objects and updates the references on the referencing object. When a referenced object
is moved to a different domain (which effectively renames the object), the infrastructure master
updates the distinguished name of the phantom. The infrastructure master finds phantom records
by using a database index that is created only on domain controllers that hold the infrastructure
operations master role. When the reference count of the phantom falls to zero (no objects are
referencing the object that the phantom represents), garbage collection on each domain controller
removes the phantom.
Because objects can reference objects in different domains, the infrastructure operations master
role is not compatible with global catalog server status if more than one domain is in the forest. If a
global catalog server holds the infrastructure operations master role, phantom records are never
created because the referenced object is always located in the directory database on the global
catalog server.
For more information about the infrastructure operations master role, see “How Operations Masters
Work.”
Exchange Address Book Lookups
The Exchange Server directory service for Exchange 2000 Server and Exchange Server 2003 is
integrated with Active Directory. When mail users want to find a person within their organization,
they usually search the global address book (GAL), which is an aggregation of all messaging
recipients in the enterprise, including mailbox-enabled users, mail-enabled users, groups, and
contacts. The GAL is a virtual linked list of pointers to the mail recipient objects that comprise it.
Mail recipients can be user accounts (both enabled and disabled accounts), contacts, distribution
lists, security groups, and folders. The GAL is automatically populated by a service on the Exchange server, and users can create customized address lists. Exchange 2000 Server and Exchange Server 2003 use the global catalog to generate the GAL.
Every Outlook client is configured with the name of an Exchange server. Exchange servers use
Active Directory and DNS to locate a global catalog server. When an Outlook client user opens the
Address Book, or when a user composes a message and types a name or an address in the To:
field, the Outlook client uses the global catalog server that is specified by its Exchange server to
search the contents of the GAL or other address lists.
For more information about how Outlook clients locate address information in the global catalog,
see “How Active Directory Searches Work.”

Global Catalog Server Creation and Advertisement
You can designate a domain controller as a global catalog server by simply selecting a check box in
the properties of the NTDS Settings object, located beneath each server object in Active Directory
Sites and Services. However, for search clients and other domain controllers in the forest to locate
it, the global catalog server must register itself in DNS.
The Net Logon service on a domain controller registers service (SRV) resource records in DNS that
identify the domain controller so that it can be located. In the case of a global catalog server,
additional SRV resource records are registered to identify the server as a global catalog server.
These SRV resource records contain the server name, the forest name, and the site name for the
global catalog server. DNS queries for these records return all global catalog servers in the
requested site. When a client requests a global catalog server by launching an LDAP search over
port 3268, the domain controller Locator on the client queries DNS for a domain controller that
hosts the global catalog. For more information about how domain controllers and global catalog servers are located, see “How DNS Support for Active Directory Works.”
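For example, the site-specific global catalog SRV records can be queried directly. The sketch below assumes the third-party dnspython package; the site name and forest root domain are placeholders.

import dns.resolver

site = 'Default-First-Site-Name'   # placeholder site name
forest = 'corp.contoso.com'        # placeholder forest root domain

answers = dns.resolver.resolve(f'_gc._tcp.{site}._sites.{forest}', 'SRV')
for record in answers:
    # Each SRV record names a global catalog server registered for the requested site.
    print(record.target, record.port, record.priority, record.weight)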
By default, before a domain controller advertises itself as a global catalog server in DNS, the global
catalog contents must be replicated to the server. This process involves replication of a partial,
read-only replica of every domain in the forest except for the domain for which the new global
catalog server is authoritative. The duration of this process depends on how many domains the
forest contains, the size of the domains, and the relative locations of source and destination domain
controllers. If multiple domains are in the forest and if source domain controllers are located only in
distant sites, the process takes longer than if all domains are in the same site or in only a few sites.
When replication must occur between sites to create the global catalog, replication occurs according
to the site link schedule.
When a new global catalog server is being created, the process can be delayed by several conditions, including the following:
• The KCC could not reach a source domain controller from which to replicate a directory partition.
• Replication cannot begin until the scheduled time.
• Replication of the directory partition is in progress but has not yet completed. This delay might occur if the directory partition is very large. In addition, the replication priority queue prioritizes addition of new directory partitions at a lower priority than incremental replication of existing partitions.
• The source domain controller for a directory partition has gone offline or is unavailable due to network problems.
These conditions are logged in the Directory Service event log when the logging level is set to 0 (the
default setting) in the Global Catalog entry under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics\. If you
want to receive more information, you can increase the logging level by editing this entry.
Note
If you must edit the registry, use extreme caution and be sure that you back it up first. Registry information is provided here as a reference for use by only highly skilled directory service administrators. Do not directly edit the registry unless, as in this case, no Group Policy setting or other Windows tools can accomplish the task. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. Storage of incorrect values can result in unrecoverable errors in the system.
Requirements for Global Catalog Readiness
By default, a global catalog server is not considered “ready” (the server advertises itself in DNS as a
global catalog server) until all read-only directory partitions have been fully replicated to the new
global catalog server. The Global Catalog Partition Occupancy registry entry under
HKEY_Local_Machine\System\CurrentControlSet\Services\NTDS\Parameters determines
the requirements for how many read-only directory partitions must be present on a domain
controller for it to be considered a global catalog server, from no partitions (0) to all partitions (6).
The default occupancy value for Windows Server 2003 domain controllers requires that all read-only
directory partitions be replicated to the global catalog server before the Net Logon service registers
SRV resource records in DNS. For most conditions, this default provides the best option for ensuring
that a global catalog server provides a consistent view of the directory. In less common
circumstances, however, it might be useful to make the global catalog server available with an
incomplete set of partial domain directory partitions—for example, when delay of replication of a
domain that is not required by users is jeopardizing their ability to log on.
The Global Catalog Partition Occupancy entry can have the values shown in the following table.
Global Catalog Partition Occupancy Level Values

Value Description
0 No occupancy requirement. Removing the occupancy level requirement might be useful in a scenario where domain controllers are being staged for deployment but are not yet in production.
1 At least one read-only directory partition in the site has been added by the KCC. This level, as well as level 3 and level 5, provides the ability to distinguish between a source for the directory partition being reachable (at least one object has been returned) and the entire directory partition having been replicated (as in levels 2, 4, and 6).
When the KCC can reach the first object, it can create a replica link, which is the agreement between the source and destination domain controllers to replicate to the destination. If the KCC cannot reach a source, the KCC logs event ID 1558 in the Directory Service log, which indicates the distinguished name of the directory partition that has not been fully synchronized. In this case, the KCC continues to try to replicate the partition each time it runs (every 15 minutes by default).
When the KCC succeeds in creating the replica link, it passes responsibility for retrying and completing the synchronization to the replication engine. The KCC then stops logging events, after which the replication status can be checked by using the repadmin /showrepl command.
2 At least one read-only directory partition in the site has been fully synchronized.
3 All read-only directory partitions in the site have been added by the KCC (at least one has been fully synchronized). In this case, the KCC has been able to contact one source for every directory partition in the site. This level is useful when you want to advertise a global catalog server as soon as possible with a high likelihood of success.
4 All read-only directory partitions in the site have been fully synchronized. With this setting, if a source for any directory partition is not available, DNS registrations cannot occur. On domain controllers that are running Windows 2000 Server with Service Pack 1 (SP1) or Windows 2000 Server with Service Pack 2 (SP2), this occupancy level is the default requirement before the global catalog server is advertised in DNS.
5 All read-only directory partitions in the forest have been added by the KCC (at least one has been fully synchronized).
6 All read-only directory partitions in the forest have been fully synchronized. On domain controllers that are running Windows Server 2003 or Windows 2000 Server with SP3 or later, this occupancy level is the default requirement before the global catalog server is advertised in DNS. This setting ensures the highest level of consistency.
Event ID 1578 reports the level that is required and the level that the domain controller has
achieved.
Advertising a Global Catalog Server Prior to Full Synchronization
By default, a domain controller checks every 30 minutes to see whether it has received all of the
read-only directory partitions that are required to be present before the server advertises itself in
DNS as a global catalog server. Event ID 1110 indicates that the promotion is being delayed
because the required directory partitions have not all been synchronized.
This delay is controlled by the Global Catalog Delay Advertisement (sec) registry entry under
HKEY_Local_Machine\System\CurrentControlSet\Services\NTDS\Parameters\. If you set
a value for Global Catalog Delay Advertisement (sec), it overrides the requirements set in
Global Catalog Partition Occupancy and allows global catalog advertisement without requiring
full synchronization of all read-only directory partitions.
When conditions preclude the successful synchronization of the new global catalog server, you can
force advertisement of the global catalog server and then remove the global catalog from the
server. Until the global catalog server is successfully advertised, you are unable to remove it.
Replication Process for Global Catalog Creation
When you designate a domain controller to be a global catalog server, the Knowledge Consistency
Checker (KCC) on the domain controller runs immediately and updates the replication topology.
When the KCC runs, it checks to see whether the Global Catalog option is selected for any domain
controllers, and creates the replication topology accordingly. The KCC configures the newly selected
global catalog server to be the destination server for a read-only replica of every domain directory
partition in the forest except for the writable domain directory partition that the server already
holds. The KCC on the global catalog server must be able to reach a server that will be the source of
each read-only directory partition.
When the KCC locates an available source domain controller, it creates an inbound connection on
the new global catalog server and replication of that read-only partition takes place. If the source is
within the site, replication begins immediately. If the source is in a different site, replication begins when it is next scheduled. Replication of all objects in the partial directory partition must complete
successfully before the directory partition is considered to be present on the global catalog server.
Successful Completion of Global Catalog Creation
When all directory partitions are present, the domain controller sets its rootDSE
isGlobalCatalogReady attribute to TRUE and the Net Logon service on the domain controller
registers SRV resource records that specifically advertise the global catalog server in DNS. At this
point, the global catalog is considered to be available, and event ID 1119 is logged in the Directory
Service event log.
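You can observe this transition by reading the rootDSE. The following is a minimal sketch, assuming the third-party Python ldap3 package and a placeholder server name; anonymous rootDSE reads are usually permitted, but your environment may require credentials.

from ldap3 import Server, Connection, ALL

server = Server('dc01.corp.contoso.com', get_info=ALL)  # placeholder host name
conn = Connection(server, auto_bind=True)               # add credentials if required

# isGlobalCatalogReady becomes TRUE once all required partitions are present and the
# server has begun advertising itself as a global catalog server in DNS.
ready = server.info.other.get('isGlobalCatalogReady')
print('Global catalog ready:', ready)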

Global Catalog Replication
Although read-only directory partitions on global catalog servers cannot be updated directly by
administrative LDAP writes, they are updated through Active Directory replication when changes
occur to their respective writable replicas.
The following diagram represents the Active Directory database on a global catalog server in the
corp.contoso.com forest root domain. A global catalog server has a single directory database.
However, to represent the different logical directory partitions in the forest, the diagram shows
the database divided into segments. The top three segments represent directory partitions that are
writable replicas for the domain controller (the domain, configuration, and schema directory
partitions). The bottom three segments represent directory partitions that are read-only replicas of
the other domains in the forest.
Writable and read-only replicas in the Active Directory database on a global catalog server
The source domain controller for replication of a given directory partition to a global catalog server
can be either a non-global catalog domain controller or another global catalog server. In the
following diagram, each directory partition on the global catalog server is being updated by a non-
global catalog domain controller. The writable replicas on the global catalog server are updated by a
domain controller that is authoritative for the same domain, Corp.contoso.com. The replication for
the Corp.contoso.com domain and the configuration and schema directory partitions is two-way
because the replicas are all writable.
Each of the read-only replicas is updated by a source domain controller that is authoritative for the
respective directory partition. The replication is one-way because read-only replicas never update
writable replicas.
Direction of directory partition replica updates between a global catalog server and other domain controllers
Replication Between Global Catalog Servers
As is true for all domain controllers, a global catalog server uses a single topology to replicate the
schema and configuration directory partitions, and it uses a separate topology, if needed, for each
domain directory partition. However, when a two-way connection exists between the servers, either
for replication of the schema and configuration directory partitions or for replication in opposite
directions of the two writable domain directory partitions, all replicas on each global catalog server
use the same connection to update their common replicas when changes are available.
The diagram below shows the directions of replication between directory partitions on two global
catalog servers that are in different sites and are authoritative for different domains. The writable
replicas of soam.corp.contoso.com and corp.contoso.com update the respective read-only replicas in
one direction only (a writable replica is never updated by a read-only replica). Because neither
domain controller is authoritative for the noam.corp.contoso.com and eur.corp.contoso.com domain
replicas, the global catalog servers can be sources for replication of these partial read-only replicas.
This replication is shown as two-way because a two-way connection already exists and these
replicas are each capable of updating the other.
Direction of directory partition replica updates between two global catalog servers in different domains
In the preceding diagram, the read-only replicas can also be updated from other domain controllers.
In a forest that has a forest functional level of Windows Server 2003 or Windows Server 2003
interim, the intersite KCC algorithm avoids creating redundant connection objects by implementing
one-way replication where possible. For example, if the schema and configuration writable replicas
and the Corp and Eur read-only domain replicas on GC1 are all updated by a domain controller
other than GC2, replication of the Corp and Eur read-only replicas from GC1 to GC2 occurs in one
direction if it occurs. In this case, GC1 might not generate a connection object for replication from
GC2.
Replication of Changes to the Global Catalog Partial Attribute Set
The default set of attributes that are replicated to the global catalog is identified by the schema.
These attributes are referred to as the partial attribute set (PAS) because they provide a replica of
every object in the directory, but the object includes only those attributes that are most likely to be
used for searches. If you want to add an attribute to the partial attribute set, you can mark the
attribute by using the Active Directory Schema snap-in to edit the
isMemberOfPartialAttributeSet value on the respective attributeSchema object. If the value is
set to TRUE, the attribute is replicated to the global catalog.
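To see which attributes are currently in the partial attribute set, you can query the schema partition for that flag. The sketch below assumes the third-party Python ldap3 package; the server, credentials, and forest naming context are placeholders.

from ldap3 import Server, Connection, SUBTREE, NTLM

server = Server('dc01.corp.contoso.com')  # placeholder host
conn = Connection(server, user='CORP\\admin', password='********',
                  authentication=NTLM, auto_bind=True)

schema_nc = 'CN=Schema,CN=Configuration,DC=corp,DC=contoso,DC=com'  # placeholder forest
conn.search(schema_nc,
            '(&(objectClass=attributeSchema)(isMemberOfPartialAttributeSet=TRUE))',
            search_scope=SUBTREE, attributes=['lDAPDisplayName'])
for entry in conn.entries:
    print(entry.lDAPDisplayName)   # attributes replicated to the global catalog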
When a schema change affects the set of attributes that are marked for inclusion in the global
catalog (an attribute is added to the partial attribute set), replication of the change occurs
differently on global catalog servers running Windows 2000 Server and those running
Windows Server 2003. Depending on the version of Windows that is running on the replication
partners, an update to the PAS can cause either a full synchronization of all directory partitions in
the global catalog or replication of only the updated attributes, as follows:
• Updates only: When both servers are running Windows Server 2003, only the changed attributes are replicated to global catalog servers running Windows Server 2003. There is no replication impact.
• Full synchronization: When both servers are running Windows 2000, the global catalog server initiates a full synchronization of all partial, read-only domain directory partition replicas to become up-to-date with the extended replicas on other domain controllers. If the partial directory partition replica can be synchronized over an RPC connection, the domain controller attempts a full synchronization over the RPC connection before it uses a configured SMTP connection. If full synchronization is completed, the up-to-dateness vector that it creates ensures that later synchronization requests on other connections do not include data that has been received during the initial full synchronization.
Full synchronization of a global catalog server causes increased network traffic while it is in progress and can take from several minutes to hours, depending on the size of the directory. Although interruption of service does not occur, this replication causes higher bandwidth consumption than is required for usual day-to-day replication. The resulting bandwidth consumption for each global catalog server is equivalent to that caused by adding the global catalog to a domain controller. Whenever the isMemberOfPartialAttributeSet value of a new attributeSchema object in the schema directory partition is set to TRUE, event ID 1575 occurs, stating that full synchronization is required.
• Full synchronization: When one global catalog server is running Windows 2000 Server and the other is running Windows Server 2003, and a global catalog server running Windows Server 2003 replicates the change to a global catalog server running Windows 2000 Server, the server running Windows Server 2003 reverts to Windows 2000 Server behavior, as described above.
Note
The Windows Server 2003 schema contains new attributes that are marked for inclusion in the partial attribute set. Replication of these new attributes to global catalog servers is triggered by
raising the forest functional level to Windows Server 2003. Therefore, upgrading the schema has
no impact on Windows 2000–based global catalog servers because the global catalog is updated
only when all domain controllers are running Windows Server 2003 (the requirement for raising
the forest functional level to Windows Server 2003). For more information about functional levels,
see “How Active Directory Functional Levels Work.”
Removing an attribute from the PAS does not involve replication of a deletion, but is handled locally.
If you set the isMemberOfPartialAttributeSet value to FALSE in the schema, the attribute is removed from the directory of each global catalog server immediately after receiving the schema
update. This behavior is the same on global catalog servers running Windows Server 2003 and
Windows 2000 Server.

Removing the Global Catalog from a Domain Controller
When you add the global catalog to a domain controller, a partial replica of each domain directory
partition, other than the domain for which the domain controller is authoritative, is copied to the
domain controller as described in “Replication Process for Global Catalog Creation” earlier in this
subject. This replication occurs immediately within a site or at the next scheduled replication
between sites. However, when you remove the global catalog from a domain controller, the KCC
begins removing the read-only replicas one at a time by means of an asynchronous process that
removes objects gradually over time until no read-only objects remain.
The Windows 2000 KCC removes approximately 500 objects per attempt. Each time the KCC runs
(every 15 minutes by default), it attempts the removal of the read-only replica until there are no
remaining objects. At an estimated 2000 objects per hour, complete removal of the global catalog
from the domain controller can take from several hours to days, depending on the size of the
directory.
The Windows Server 2003 KCC initially instructs the directory to remove each replica, but once the
instruction is received, the directory itself keeps track of the progress of replica removal and
reschedules the work accordingly. Rather than removing a fixed number of objects per removal
attempt, Active Directory continues removing objects until either the replica is gone or a higher
priority replication operation is in the queue. Read-only replica removal receives the lowest possible
priority, meaning that any other replication work will interrupt it. Thus, removal work is pre-empted
for the other work and then resumed later. On domain controllers running Windows Server 2003,
event log messages record when removal of a partial replica is starting, being resumed, and
finishing.
KCC events that are logged in the Directory Service log during global catalog removal include:
• Event ID 1744: Local domain controller is no longer a global catalog server.
• Event ID 1659: Removal of a directory partition from the local database has been resumed, including the approximate number of objects remaining to be removed and the number of link values of attributes remaining to be removed.
• Event ID 1660: A specified directory partition has been removed from the local domain controller.
For a normally-to-lightly loaded system, read-only replica removal occurs as fast as possible in the
background. On a server that is very busy with replication (for example, a hub domain controller),
estimating the time required for global catalog removal is difficult.
As soon as you set the domain controller to not be a global catalog server by clearing the Global
Catalog check box on the NTDS Settings object properties page, Net Logon deregisters the SRV
resource records that specifically advertise the global catalog server in DNS. Therefore, although the
read-only domain replicas might take hours or days to remove, the domain controller immediately
stops advertising itself as a global catalog server in DNS and immediately stops accepting LDAP
requests over port 3268.
Note
On a domain controller running Windows 2000 Server, if you select the Global Catalog check box again before a partial replica has been completely removed, the KCC begins the process of replicating in the read-only replicas as follows: If a given replica is completely gone, the KCC adds the replica back. If the replica is still in the process of being removed, the KCC does not add it again until the initial removal is complete. Thus, during the removal period, there can conceivably be a mix of new replicas from the most recent global catalog instance and some old replicas in the process of being removed from the previous instance. This condition is temporary and of no consequence to users because the domain controller is not being advertised as a global catalog server.
Top of page
Network Ports Used by the Global Catalog
The following ports are used by global catalog servers:
Port Assignments for Global Catalog Servers

Service Name     UDP     TCP

LDAP                     3268 (global catalog)

LDAP                     3269 (global catalog Secure Sockets Layer [SSL])

LDAP             389     389

LDAP                     636 (SSL)

RPC/REPL                 135 (endpoint mapper)

Kerberos         88      88

DNS              53      53

SMB over IP      445     445
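If you need to verify which of these ports a given domain controller is actually listening on, a quick check can be scripted. The following Python sketch uses a hypothetical host name and tests only the TCP LDAP ports from the table above; a successful connection shows that the port is reachable, not that the directory service is healthy.

import socket

# Hypothetical domain controller host name; replace with a real server.
DC_HOST = "dc1.example.com"

# TCP ports from the table above: standard LDAP, LDAP over SSL,
# global catalog LDAP, and global catalog LDAP over SSL.
GC_TCP_PORTS = {389: "LDAP", 636: "LDAP SSL", 3268: "GC LDAP", 3269: "GC LDAP SSL"}

for port, name in GC_TCP_PORTS.items():
    try:
        with socket.create_connection((DC_HOST, port), timeout=3):
            print(f"{name} ({port}/tcp): reachable")
    except OSError as err:
        print(f"{name} ({port}/tcp): not reachable ({err})")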

How Active Directory Replication Topology Works


Updated: March 28, 2003
How Active Directory Replication Topology Works
In this section

• Active Directory KCC Architecture and Processes

• Replication Topology Physical Structure

• Performance Limits for Replication Topology Generation

• Goals of Replication Topology

• Topology-Related Objects in Active Directory

• Replication Transports

• Replication Between Sites

• KCC and Topology Generation

• Network Ports Used by Replication Topology

• Related Information
Active Directory implements a replication topology that takes advantage of the network speeds
within sites, which are ideally configured to be equivalent to local area network (LAN) connectivity
(network speed of 10 megabits per second [Mbps] or higher). The replication topology also
minimizes the use of potentially slow or expensive wide area network (WAN) links between sites.
When you create a site object in Active Directory, you associate one or more Internet Protocol (IP)
subnets with the site. Each domain controller in a forest is associated with an Active Directory site.
A client workstation is associated with a site according to its IP address; that is, each IP address
maps to one subnet, which in turn maps to one site.
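This IP-address-to-subnet-to-site mapping can be illustrated with a small sketch that uses only the Python standard library. The subnets and site names below are hypothetical; in Active Directory, the mapping is defined by subnet objects, which are described later in this section.

import ipaddress

# Hypothetical subnet objects and their site associations.
subnet_to_site = {
    ipaddress.ip_network("10.1.0.0/16"): "Site-A",
    ipaddress.ip_network("10.2.0.0/16"): "Site-B",
}

def site_for(client_ip):
    """Return the site whose subnet contains the client address."""
    address = ipaddress.ip_address(client_ip)
    for subnet, site in subnet_to_site.items():
        if address in subnet:
            return site
    return "no matching subnet (no site association)"

print(site_for("10.2.34.7"))   # Site-B
print(site_for("192.0.2.10"))  # no matching subnet (no site association)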
Active Directory uses sites to:

• Optimize replication for speed and bandwidth consumption between domain controllers.

• Locate the closest domain controller for client logon, services, and directory searches.

• Direct a Distributed File System (DFS) client to the server that is hosting the requested data within the site.

• Replicate the system volume (SYSVOL), a collection of folders in the file system that exists on each domain controller in a domain and is required for implementation of Group Policy.
The ideal environment for replication topology generation is a forest that has a forest functional
level of Windows Server 2003. In this case, replication topology generation is faster and can
accommodate more sites and domains than occurs when the forest has a forest functional level of
Windows 2000. When at least one domain controller in each site is running Windows Server 2003,
more domain controllers in each site can be used to replicate changes between sites than when all
domain controllers are running Windows 2000 Server.
In addition, replication topology generation requires the following conditions:
• A Domain Name System (DNS) infrastructure that manages the name resolution for domain controllers in the forest. Active Directory–integrated DNS is assumed, wherein DNS zone data is stored in Active Directory and is replicated to all domain controllers that are DNS servers.

• All physical locations that are represented as site objects in Active Directory have LAN connectivity.

• IP connectivity is available between each site and all sites in the same forest that host operations master roles.

• Domain controllers meet the hardware requirements for Microsoft Windows Server 2003, Standard Edition; Windows Server 2003, Enterprise Edition; and Windows Server 2003, Datacenter Edition.

• The appropriate number of domain controllers is deployed for each domain that is represented in each site.
This section covers the replication components that create the replication topology and how they
work together, plus the mechanisms and rationale for routing replication traffic between domain
controllers in the same site and in different sites.
Top of page
Active Directory KCC Architecture and Processes
The replication topology is generated by the Knowledge Consistency Checker (KCC), a replication
component that runs as an application on every domain controller and communicates through the
distributed Active Directory database. The KCC functions locally by reading, creating, and deleting
Active Directory data. Specifically, the KCC reads configuration data and reads and writes
connection objects. The KCC also writes local, nonreplicated attribute values that indicate the
replication partners from which to request replication.
For most of its operation, the KCC that runs on one domain controller does not communicate
directly with the KCC on any other domain controller. Rather, all KCCs use the knowledge of the
common, global data that is stored in the configuration directory partition as input to the topology
generation algorithm to converge on the same view of the replication topology.

Each KCC uses its in-memory view of the topology to create inbound connections locally,
manifesting only those results that apply to itself. The KCC communicates with other KCCs only to
make a remote procedure call (RPC) request for replication error information. The KCC uses the
error information to identify gaps in the replication topology. A request for replication error
information occurs only between domain controllers in the same site.
Note
The KCC uses only RPC to communicate with the directory service. The KCC does not use Lightweight Directory Access Protocol (LDAP).
One domain controller in each site is selected as the Intersite Topology Generator (ISTG). To enable
replication across site links, the ISTG automatically designates one or more servers to perform site-
to-site replication. These servers are called bridgehead servers. A bridgehead is a point where a
connection leaves or enters a site.
The ISTG creates a view of the replication topology for all sites, including existing connection
objects between all domain controllers that are acting as bridgehead servers. The ISTG then creates
inbound connection objects for servers in its site that it determines will act as bridgehead servers
and for which connection objects do not already exist. Thus, the scope of operation for the KCC is
the local server only, and the scope of operation for the ISTG is a single site.
Each KCC has the following global knowledge about objects in the forest, which it gets by reading
objects in the Sites container of the configuration directory partition and which it uses to generate a
view of the replication topology:

• Sites

• Servers

• Site affiliation of each server

• Global catalog servers

• Directory partitions stored by each server

• Site links

• Site link bridges


Detailed information about these configuration components and their functionality is provided later
in this section.
The following diagram shows the KCC architecture on servers in the same forest in two sites.
KCC Architecture and Processes

The architecture and process components in the preceding diagram are described in the following
table.
KCC Architecture and Process Components

Component                                Description

Knowledge Consistency Checker (KCC)      The application running on each domain controller that
                                         communicates directly with the Ntdsa.dll to read and write
                                         replication objects.

Directory System Agent (DSA)             The directory service component that runs as Ntdsa.dll on
                                         each domain controller, providing the interfaces through
                                         which services and processes such as the KCC gain access
                                         to the directory database.

Extensible Storage Engine (ESE)          The directory service component that runs as Esent.dll.
                                         ESE manages the tables of records, each with one or more
                                         columns. The tables of records comprise the directory
                                         database.

Remote procedure call (RPC)              The Directory Replication Service (Drsuapi) RPC protocol,
                                         used to communicate replication status and topology to a
                                         domain controller. The KCC also uses this protocol to
                                         communicate with other KCCs to request error information
                                         when building the replication topology.

Intersite Topology Generator (ISTG)      The single KCC in a site that manages intersite connection
                                         objects for the site.
The four servers in the preceding diagram create identical views of the servers in their site and
generate connection objects on the basis of the current state of Active Directory data in the
configuration directory partition. In addition to creating its view of the servers in its respective site,
the KCC that operates as the ISTG in each site also creates a view of all servers in all sites in the
forest. From this view, the ISTG determines the connections to create on the bridgehead servers in
its own site.
Note
A connection requires two endpoints: one for the destination domain controller and one for the source domain controller. Domain controllers creating an intrasite topology always use themselves as the destination endpoint and must consider only the endpoint for the source domain controller. The ISTG, however, must identify both endpoints in order to create connection objects between two other servers.
Thus, the KCC creates two types of topologies: intrasite and intersite. Within a site, the KCC creates
a ring topology by using all servers in the site. To create the intersite topology, the ISTG in each
site uses a view of all bridgehead servers in all sites in the forest. The following diagram shows a
high-level generalization of the view that the KCC sees of an intrasite ring topology and the view
that the ISTG sees of the intersite topology. Lines between domain controllers within a site
represent inbound and outbound connections between the servers. The lines between sites
represent configured site links. Bridgehead servers are represented as BH.
KCC and ISTG Views of Intrasite and Intersite Topology

Top of page
Replication Topology Physical Structure

The Active Directory replication topology can use many different components. Some components
are required and others are not required but are available for optimization. The following diagram
illustrates most replication topology components and their place in a sample Active Directory
multisite and multidomain forest. The depiction of the intersite topology that uses multiple
bridgehead servers for each domain assumes that at least one domain controller in each site is
running Windows Server 2003. All components of this diagram and their interactions are explained
in detail later in this section.
Replication Topology Physical Structure

In the preceding diagram, all servers are domain controllers. They independently use global
knowledge of configuration data to generate one-way, inbound connection objects. The KCCs in a
site collectively create an intrasite topology for all domain controllers in the site. The ISTGs from all
sites collectively create an intersite topology. Within sites, one-way arrows indicate the inbound
connections by which each domain controller replicates changes from its partner in the ring. For
intersite replication, one-way arrows represent inbound connections that are created by the ISTG of
each site from bridgehead servers (BH) for the same domain (or from a global catalog server [GC]
acting as a bridgehead if the domain is not present in the site) in other sites that share a site link.
Domains are indicated as D1, D2, D3, and D4.
Each site in the diagram represents a physical LAN in the network, and each LAN is represented as
a site object in Active Directory. Heavy solid lines between sites indicate WAN links over which two-
way replication can occur, and each WAN link is represented in Active Directory as a site link object.
Site link objects allow connections to be created between bridgehead servers in each site that is
connected by the site link.
Not shown in the diagram is that where TCP/IP WAN links are available, replication between sites
uses the RPC replication transport. RPC is always used within sites. The site link between Site A and
Site D uses the SMTP protocol for the replication transport to replicate the configuration and schema
directory partitions and global catalog partial, read-only directory partitions. Although the SMTP
transport cannot be used to replicate writable domain directory partitions, this transport is required
because a TCP/IP connection is not available between Site A and Site D. This configuration is
acceptable for replication because Site D does not host domain controllers for any domains that
must be replicated over the site link A-D.
By default, site links A-B and A-C are transitive (bridged), which means that replication of domain
D2 is possible between Site B and Site C, although no site link connects the two sites. The cost
values on site links A-B and A-C are site link settings that determine the routing preference for
replication, which is based on the aggregated cost of available site links. The cost of a direct
connection between Site C and Site B is the sum of costs on site links A-B and A-C. For this reason,
replication between Site B and Site C is automatically routed through Site A to avoid the more
expensive, transitive route. Connections are created between Site B and Site C only if replication
through Site A becomes impossible due to network or bridgehead server conditions.
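The routing preference described above amounts to a simple cost comparison. The following sketch uses hypothetical cost values and is not the KCC's actual algorithm; it only illustrates why the bridged route between Site B and Site C loses to the single-link routes through Site A.

# Hypothetical site link costs; the KCC's real input is the cost attribute
# on each siteLink object in the configuration directory partition.
site_link_costs = {("A", "B"): 100, ("A", "C"): 200}

# The cost of a bridged (transitive) route is the sum of the costs of the
# site links it spans.
bridged_b_to_c = site_link_costs[("A", "B")] + site_link_costs[("A", "C")]

# Cost of pulling changes from a replica in Site A over a single link.
b_from_a = site_link_costs[("A", "B")]
c_from_a = site_link_costs[("A", "C")]

print(f"Bridged route B-C: {bridged_b_to_c}")   # 300
print(f"B replicating from A: {b_from_a}")      # 100
print(f"C replicating from A: {c_from_a}")      # 200
# Because the single-link routes are cheaper, no B-C connection is created
# unless replication through Site A becomes impossible.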
Top of page
Performance Limits for Replication Topology Generation
Active Directory topology generation performance is limited primarily by the memory on the domain
controller. KCC performance degrades at the physical memory limit. In most deployments, topology
size will be limited by the amount of domain controller memory rather than CPU utilization required
by the KCC.
Scaling of sites and domains is improved in Windows Server 2003 by a refined algorithm that the KCC uses to generate the intersite replication topology. Because all domain controllers must use
the same algorithm to arrive at a consistent view of the replication topology, the improved
algorithm has a forest functional level requirement of Windows Server 2003 or Windows
Server 2003 interim.
KCC scalability was tested on domain controllers with 1.8 GHz processor speed, 512 megabytes
(MB) RAM, and small computer system interface (SCSI) disks. KCC performance results at the
Windows Server 2003 forest functional level are described in the following table. The times shown
are for the KCC to run where all new connections are needed (maximum) and where no new
connections are needed (minimum). Because most organizations add domain controllers in
increments, the minimum generation times shown are closest to the actual runtimes that can be
expected in deployments of comparable sizes. The CPU and memory usage values for the Local
Security Authority (LSA) process (Lsass.exe) indicate the more significant impact of memory versus
percent of CPU usage when the KCC runs.

Note
Active Directory runs as part of the LSA, which manages authentication packages and authenticates users and services.
Minimum and Maximum KCC Generation Times for Domain-Site Combinations

Domains  Sites  Connections  KCC Generation Time  Lsass.exe Memory  Lsass.exe CPU
                             (seconds)            Usage (MB)        Usage (%)

1        500    Maximum      43                   100               39
                Minimum      1                    100               29
         1,000  Maximum      49                   149               43
                Minimum      2                    149               28
         3,000  Maximum      69                   236               46
                Minimum      2                    236               63
5        500    Maximum      70                   125               29
                Minimum      2                    126               71
         1,000  Maximum      77                   237               28
                Minimum      3                    237               78
         2,000  Maximum      78                   325               43
                Minimum      5                    325               77
         3,000  Maximum      85                   449               52
                Minimum      6                    449               75
         4,000  Maximum      555                  624               46
                Minimum      34                   624               69
20       1,000  Maximum      48                   423               65
                Minimum      5                    423               81
40       1,000  Maximum      93                   799               56
                Minimum      12                   799               96
         2,000  Minimum      38                   874               71

These numbers cannot be used as the sole guidelines for forest and domain design. Other
limitations might affect performance and scalability. A limitation of note is that when FRS is
deployed, a limit of 1,200 domain controllers per domain is recommended to ensure reliable
recovery of SYSVOL.
For more information about FRS limitations, see “FRS Technical Reference.” For more information
about the functional level requirements for the intersite topology generation algorithm, see
“Automated Intersite Topology Generation” later in this section.
Top of page
Goals of Replication Topology
The KCC generates a replication topology that achieves the following goals:
• Connect every directory partition replica that must be replicated.

• Control replication latency and cost.

• Route replication between sites.

• Effect client affinity.


By default, the replication topology is managed automatically and optimizes existing connections.
However, manual connections created by an administrator are not modified or optimized.

Connect Directory Partition Replicas


The total replication topology is actually composed of several underlying topologies, one for each
directory partition. In the case of the schema and configuration directory partitions, a single
topology is created. The underlying topologies are merged to form the minimum number of
connections that are required to replicate each directory partition between all domain controllers
that store replicas. Where the connections for directory partitions are identical between domain
controllers — for example, two domain controllers store the same domain directory partition — a
single connection can be used for replication of updates to the domain, schema, and configuration
directory partitions.
A separate replication topology is also created for application directory partitions. However, in the
same manner as schema and configuration directory partitions, application directory partitions can
use the same topology as domain directory partitions. When application and domain directory
partitions are common to the source and destination domain controllers, the KCC does not create a
separate connection for the application directory partition.
A separate topology is not created for the partial replicas that are stored on global catalog servers.
The connections that are needed by a global catalog server to replicate each partial replica of a
domain are part of the topology that is created for each domain.
The routes for the following directory partitions or combinations of directory partitions are
aggregated to arrive at the overall topology:

• Configuration and schema within a site.

• Each writable domain directory partition within a site.

• Each application directory partition within a site.

• Global catalog read-only, partial domain directory partitions within a site.

• Configuration and schema between sites.

• Each writable domain directory partition between sites.

• Each application directory partition between sites.

• Global catalog read-only, partial domain directory partitions between sites.
Replication transport protocols determine the manner in which replication data is transferred over
the network media. Your network environment and server configuration dictate the transports that
you can use. For more information about transports, see “Replication Transports” later in this
section.

Control Replication Latency and Cost


Replication latency is inherent in a multimaster directory service. A period of replication latency
begins when a directory update occurs on an originating domain controller and ends when
replication of the change is received on the last domain controller in the forest that requires the
change. Generally, the latency that is inherent in a WAN link is relative to a combination of the
speed of the connection and the available bandwidth. Replication cost is an administrative value
that can be used to indicate the latency that is associated with different replication routes between
sites. A lower-cost route is preferred by the ISTG when generating the replication topology.
Site topology is the topology as represented by the physical network: the LANs and WANs that
connect domain controllers in a forest. The replication topology is built to use the site topology. The
site topology is represented in Active Directory by site objects and site link objects. These objects
influence Active Directory replication to achieve the best balance between replication speed and the
cost of bandwidth utilization by distinguishing between replication that occurs within a site and
replication that must span sites. When the KCC creates replication connections between domain
controllers to generate the replication topology, it creates more connections between domain
controllers in the same site than between domain controllers in different sites. The results are lower
replication latency within a site and less replication bandwidth utilization between sites.
Within sites, replication is optimized for speed as follows:
• Connections between domain controllers in the same site are always arranged in a ring, with possible additional connections to reduce latency.

• Replication within a site is triggered by a change notification mechanism when an update occurs, moderated by a short, configurable delay (because groups of updates frequently occur together).

• Data is sent uncompressed, and thus without the processing overhead of data compression.

Between sites, replication is optimized for minimal bandwidth usage (cost) as follows:

• Replication data is compressed to minimize bandwidth consumption over WAN links.

• Store-and-forward replication makes efficient use of WAN links: each update crosses an expensive link only once.

• Replication occurs at intervals that you can schedule so that use of expensive WAN links is managed.

• The intersite topology is a layering of spanning trees (one intersite connection between any two sites for each directory partition) and generally does not contain redundant connections.

Route Replication Between Sites


The KCC uses the information in Active Directory to identify the least-cost routes for replication
between sites. If a domain controller is unavailable at the time the replication topology is created,
making replication through that site impossible, the next least-cost route is used. This rerouting is
automatic when site links are bridged (transitive), which is the default setting.
Replication is automatically routed around network failures and offline domain controllers.

Effect Client Affinity


Active Directory clients locate domain controllers according to their site affiliation. Domain
controllers register SRV resource records in the DNS database that map the domain controller to a
site. When a client requests a connection to a domain controller (for example, when logging on to a
domain computer), the domain controller Locator uses the site SRV resource record to locate a
domain controller with good connectivity whenever possible. In this way, a client locates a domain
controller within the same site, thereby avoiding communications over WAN links.
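The site-specific SRV resource records that the Locator uses can be inspected directly in DNS. The following sketch assumes the third-party dnspython package and uses hypothetical site and domain names.

import dns.resolver  # third-party package: dnspython

# Hypothetical site and domain names; substitute your own.
SITE = "Default-First-Site-Name"
DOMAIN = "example.com"

# Site-specific SRV record registered by Net Logon for domain controllers.
srv_name = f"_ldap._tcp.{SITE}._sites.{DOMAIN}"

try:
    answers = dns.resolver.resolve(srv_name, "SRV")
    for record in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        print(f"{record.target} port {record.port} "
              f"(priority {record.priority}, weight {record.weight})")
except dns.resolver.NXDOMAIN:
    print(f"No site-specific SRV record found for {srv_name}")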
Sites can also be used by certain applications, such as DFS, to ensure that clients locate servers
that are within the site or, if none is available, a server in the next closest site. If the ISTG is
running Windows Server 2003, you can specify an alternate site based on connection cost if no
same-site servers are available. This DFS feature, called “site costing,” is new in Windows
Server 2003.

For more information about the domain controller Locator, see “DNS Support for Active Directory
Technical Reference.” For more information about DFS site costing, see “DFS Technical Reference.”
Top of page
Topology-Related Objects in Active Directory
Active Directory stores replication topology information in the configuration directory partition.
Several configuration objects define the components that are required by the KCC to establish and
implement the replication topology.
Active Directory Sites and Services is the Microsoft Management Console (MMC) snap-in that you
can use to view and manage the hierarchy of objects that are used by the KCC to construct the
replication topology. The hierarchy is displayed as the contents of the Sites container, which is a
child object of the Configuration container. The Configuration container is not identified in the Active
Directory Sites and Services UI. The Sites container contains an object for each site in the forest. In
addition, Sites contains the Subnets container, which contains subnet definitions in the form of
subnet objects.
The following figure shows a sample hierarchy, including two sites: Default-First-Site-Name and
Site A. The selected NTDS Settings object of the server MHDC3 in the site Default-First-Site-Name
displays the inbound connections from MHDC4 in the same site and from A-DC-01 in Site A. In
addition to showing that MHDC3 and MHDC4 perform intrasite replication, this configuration
indicates that MHDC3 and A-DC-01 are bridgehead servers that are replicating the same domain
between Site A and Default-First-Site-Name.
Sites Container Hierarchy

Site and Subnet Objects
Sites are effective because they map to specific ranges of subnet addresses, as identified in Active
Directory by subnet objects. The relationship between sites and subnets is integral to Active
Directory replication.
Site Objects
A site object (class site) corresponds to a set of one or more IP subnets that have LAN connectivity.
Thus, by virtue of their subnet associations, domain controllers that are in the same site are well
connected in terms of speed. Each site object has a child NTDS Site Settings object and a Servers
container. The distinguished name of the Sites container is
CN=Sites,CN=Configuration,DC=ForestRootDomainName. The Configuration container is the
topmost object in the configuration directory partition and the Sites container is the topmost object
in the hierarchy of objects that are used to manage and implement Active Directory replication.
When you install Active Directory on the first domain controller in the forest, a site object named
Default-First-Site-Name is created in the Sites container in Active Directory.
Subnet Objects
Subnet objects (class subnet) define network subnets in Active Directory. A network subnet is a
segment of a TCP/IP network to which a set of logical IP addresses is assigned. Subnets group
computers in a way that identifies their physical proximity on the network. Subnet objects in Active
Directory are used to map computers to sites. Each subnet object has a siteObject attribute that
links it to a site object.
Subnet-to-Site Mapping
You associate a set of IP subnets with a site if they have high-bandwidth LAN connectivity, possibly
involving hops through high-performance routers.
Note
LAN connectivity assumes high-speed, inexpensive bandwidth that allows similar and reliable network performance, regardless of which two computers in the site are communicating. This
quality of connectivity does not indicate that all servers in the site must be on the same network
segment or that hop counts between all servers must be identical. Rather, it is the measure by
which you know that if a large amount of data needs to be copied from one server to another, it
does not matter which servers are involved. If you find that you are concerned about such
situations, consider creating another site.
When you create subnet objects in Active Directory, you associate them with site objects so that IP
addresses can be localized according to sites. During the process of domain controller location,
subnet information is used to find a domain controller in the same site as, or the site closest to, the
client computer. The Net Logon service on a domain controller is able to identify the site of a client
by mapping the client’s IP address to a subnet object in Active Directory. Likewise, when a domain
controller is installed, its server object is created in the site that contains the subnet that maps to
its IP address.
You can use Active Directory Sites and Services to define subnets, and then create a site and
associate the subnets with the site. By default, only members of the Enterprise Admins group have
the right to create new sites, although this right can be delegated.
In a default Active Directory installation, there is no default subnet object, so potentially a computer
can be in the forest but have an IP subnet that is not contained in any site. For private networks,
you can specify the network addresses that are provided by the Internet Assigned Numbers
Authority (IANA). By definition, that range covers all of the subnets for the organization. However,
where several class B or class C addresses are assigned, there would necessarily be multiple subnet
objects that all mapped to the same default site.
To accommodate this situation, use the following subnets (a quick check of their coverage follows the list):

• For class B addresses, subnet 128.0.0.0/2 covers all class B addresses.

• For class C addresses, subnet 192.0.0.0/3 covers all class C addresses.
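These two supernets can be verified against the historical class B and class C ranges with the Python standard library; this is only a sanity check of the address arithmetic.

import ipaddress

# Historical class B range: 128.0.0.0 through 191.255.255.255.
# Historical class C range: 192.0.0.0 through 223.255.255.255.
class_b = ipaddress.ip_network("128.0.0.0/2")
class_c = ipaddress.ip_network("192.0.0.0/3")

print(ipaddress.ip_address("128.0.0.0") in class_b)        # True
print(ipaddress.ip_address("191.255.255.255") in class_b)  # True
print(ipaddress.ip_address("192.0.0.0") in class_c)        # True
print(ipaddress.ip_address("223.255.255.255") in class_c)  # True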
Note
The Active Directory Sites and Services MMC snap-in neither checks nor enforces IP address mapping when you move a server object to a different site. You must manually change the IP
address on the domain controller to ensure proper mapping of the IP address to a subnet in the
appropriate site.

Server Objects
Server objects (class server) represent server computers, including domain controllers, in the
configuration directory partition. When you install Active Directory, the installation process creates a
server object in the Servers container within the site to which the IP address of the domain
controller maps. There is one server object for each domain controller in the site.
A server object is distinct from the computer object that represents the computer as a security
principal. These objects are in separate directory partitions and have separate globally unique
identifiers (GUIDs). The computer object represents the domain controller in the domain directory
partition; the server object represents the domain controller in the configuration directory partition.
The server object contains a reference to the associated computer object.
The server object for the first domain controller in the forest is created in the Default-First-Site-
Name site. When you install Active Directory on subsequent servers, if no other sites are defined,
server objects are created in Default-First-Site-Name. If other sites have been defined and subnet
objects have been associated with these sites, server objects are created as follows:
• If additional sites have been defined in Active Directory and the IP address of the installation computer matches an existing subnet in a defined site, the domain controller is added to that site.

• If additional sites have been defined in Active Directory and the new domain controller's IP address does not match an existing subnet in one of the defined sites, the new domain controller's server object is created in the site of the source domain controller from which the new domain controller receives its initial replication.
When Active Directory is removed from a server, its NTDS Settings object is deleted from Active
Directory, but its server object remains because the server object might contain objects other than
NTDS Settings. For example, when Microsoft Operations Manager or Message Queuing is running on
a domain controller, these applications create child objects beneath the server object.

NTDS Settings Objects


The NTDS Settings object (class nTDSDSA) represents an instantiation of Active Directory on that
server and distinguishes a domain controller from other types of servers in the site or from
decommissioned domain controllers. For a specific server object, the NTDS Settings object contains
the individual connection objects that represent the inbound connections from other domain
controllers in the forest that are currently available to send changes to this domain controller.
Note
The NTDS Settings object should not be manually deleted.
The hasMasterNCs multivalued attribute (where “NC” stands for “naming context,” a synonym for
“directory partition”) of an NTDS Settings object contains the distinguished names for the set of
writable (non-global-catalog) directory partitions that are located on that domain controller, as
follows:

• CN=Configuration,DC=ForestRootDomainName

• CN=Schema,CN=Configuration,DC=ForestRootDomainName

• DC=DomainName,DC=ForestRootDomainName
The msDS-HasMasterNCs attribute is new in Windows Server 2003, and this attribute of the NTDS
Settings object contains values for the above-named directory partitions as well as any application
directory partitions that are stored by the domain controller. Therefore, on domain controllers that
are DNS servers and use Active Directory–integrated DNS zones, the following values appear in
addition to the default directory partitions:
• DC=ForestDNSZones,DC=ForestRootDomainName (domain controllers in the forest root domain only)

• DC=DomainDNSZones,DC=DomainName,DC=ForestRootDomainName (all domain controllers)

Applications that need to retrieve the list of all directory partitions that are hosted by a domain
controller can be updated or written to use the msDS-HasMasterNCs attribute. Applications that
need to retrieve only domain directory partitions can continue to use the hasMasterNCs attribute.
For more information about these attributes, see Active Directory in the Microsoft Platform SDK on
MSDN.
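As an illustration of how an application might read these attributes, the following sketch uses the third-party Python ldap3 package; the server name, credentials, and distinguished name are hypothetical placeholders.

from ldap3 import Server, Connection, BASE

# Hypothetical connection details; substitute real values.
server = Server("dc1.example.com")
conn = Connection(server, user="admin@example.com", password="password",
                  auto_bind=True)

# NTDS Settings object of a hypothetical domain controller.
ntds_dn = ("CN=NTDS Settings,CN=DC1,CN=Servers,CN=Default-First-Site-Name,"
           "CN=Sites,CN=Configuration,DC=example,DC=com")

# msDS-HasMasterNCs lists all writable partitions, including application
# directory partitions; hasMasterNCs lists the Windows 2000-style set.
conn.search(ntds_dn, "(objectClass=nTDSDSA)", search_scope=BASE,
            attributes=["hasMasterNCs", "msDS-HasMasterNCs"])

for entry in conn.entries:
    print(entry.entry_attributes_as_dict)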

Connection Objects
A connection object (class nTDSConnection) defines a one-way, inbound route from one domain
controller (the source) to the domain controller that stores the connection object (the destination).
The KCC uses information in cross-reference objects to create the appropriate connection objects,
which enable domain controllers that store the same directory partitions to replicate with each
other. The KCC creates connections for every server object in the Sites container that has an NTDS
Settings object.
The connection object is a child of the replication destination’s NTDS Settings object, and the
connection object references the replication source domain controller in the fromServer attribute
on the connection object — that is, it represents the inbound half of a connection. The connection
object contains a replication schedule and specifies a replication transport. The connection object
schedule is derived from the site link schedule for intersite connections. For more information about
intersite connection schedules, see “Connection Object Schedule” later in this section.
A connection is unidirectional; a bidirectional replication connection is represented as two inbound
connection objects. The KCC creates one connection object under the NTDS Settings object of each
server that is used as an endpoint for the connection.
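Connection objects and their fromServer references can be listed with a directory search. The following sketch again assumes the third-party ldap3 package and hypothetical connection details.

from ldap3 import Server, Connection, SUBTREE

# Hypothetical connection details; substitute real values.
conn = Connection(Server("dc1.example.com"), user="admin@example.com",
                  password="password", auto_bind=True)

sites_dn = "CN=Sites,CN=Configuration,DC=example,DC=com"

# Each nTDSConnection object is stored under the destination server's
# NTDS Settings object; fromServer points to the replication source.
conn.search(sites_dn, "(objectClass=nTDSConnection)", search_scope=SUBTREE,
            attributes=["fromServer", "transportType"])

for entry in conn.entries:
    print(f"destination: {entry.entry_dn}")
    print(f"  source:    {entry.fromServer}")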
Connection objects are created in two ways:

• Automatically by the KCC.

• Manually by a directory administrator by using Active Directory Sites and Services, ADSI Edit, or scripts.
Intersite connection objects are created by the KCC that has the role of intersite topology generator
(ISTG) in the site. One domain controller in each site has this role, and the ISTG role owners in all
sites use the same algorithm to collectively generate the intersite replication topology.
Ownership of Connection Objects
Connections that are created automatically by the KCC are “owned” by the KCC. If you create a new
connection manually, the connection is not owned by the KCC. If a connection object is not owned
by the KCC, the KCC does not modify it or delete it.
Note
One exception to this modification rule is that the KCC automatically changes the transport type of an administrator-owned connection if the transportType attribute is set incorrectly (see "Transport Type" later in this section).
However, if you modify a connection object that is owned by the KCC (for example, you change the
connection object schedule), the ownership of the connection depends on the application that you
use to make the change:
• If you use an LDAP editor such as Ldp.exe or Adsiedit.msc to change a connection object property, the KCC reverses the change the next time it runs.

• If you use Active Directory Sites and Services to change a connection object property, the object is changed from automatic to manual and the KCC no longer owns it. The UI indicates the ownership status of each connection object.
In most Active Directory deployments, manual connection objects are not needed.

If you create a connection object, it remains until you delete it, but the KCC will automatically
delete duplicate KCC-owned objects if they exist and will continue to create needed connections.
Ownership of a connection object does not affect security access to the object; it determines only
whether the KCC can modify or delete the object.
Note
If you create a new connection object that duplicates one that the KCC has already created, your duplicate object is created and the KCC-created object is deleted by the KCC the next time it runs.
ISTG and Modified Connections
Because connection objects are stored in the configuration directory partition, it is possible for an
intersite connection object to be modified by an administrator on one domain controller and, prior to
replication of the initial change being received, to be modified by the KCC on another domain
controller. Overwriting such a change can occur within the local site or when a connection object
changes in a remote site.
By default, the KCC runs every 15 minutes. If the administrative connection object change is not
received by the destination domain controller before the ISTG in the destination site runs, the ISTG
in the destination site might modify the same connection object. In this case, ownership of the
connection object belongs to the KCC because the latest write to the connection object is the write
that is applied.
Manual Connection Objects
The KCC is designed to produce a replication topology that provides low replication latency, that
adapts to failures, and that does not need modification. It is usually not necessary to create
connection objects when the KCC is being used to generate automatic connections. The KCC
automatically reconfigures connections as conditions change. Adding manual connections when the
KCC is employed potentially increases replication traffic by adding redundant connections to the
optimal set chosen by the KCC. When manually generated connections exist, the KCC uses them
wherever possible.
Adding extra connections does not necessarily reduce replication latency. Within a site, latency
issues are usually related to factors other than the replication topology that is generated by the
KCC. Factors that affect latency include the following:
• Interruption of the service of key domain controllers, such as the primary domain controller (PDC) emulator, global catalog servers, or bridgehead servers.

• Domain controllers that are too busy to replicate in a timely manner (too few domain controllers).

• Network connectivity issues.

• DNS server problems.

• Inordinate amounts of directory updates.


For problems such as these, creating a manual connection does not improve replication latency.
Adjusting the scheduling and costs that are assigned to the site link is the best way to influence
intersite topology.

Site Link Objects


For a connection object to be created on a destination domain controller in one site that specifies a
source domain controller in another site, you must manually create a site link object (class
siteLink) that connects the two sites. Site link objects identify the transport protocol and
scheduling required to replicate between two or more sites. You can use Active Directory Sites and
Services to create the site links. The KCC uses the information stored in the properties of these site
links to create the intersite topology connections.
A site link is associated with a network transport by creating the site link object in the appropriate
transport container (either IP or SMTP). All intersite domain replication must use IP site links. The
Simple Mail Transfer Protocol (SMTP) transport can be used for replication between sites that
contain domain controllers that do not host any common domain directory partition replicas.
Site Link Properties
A site link specifies the following:

• Two or more sites that are permitted to replicate with each other.

• An administrator-defined cost value associated with that replication path. The cost value controls the route that replication takes, and thus the remote sites that are used as sources of replication information.

• A schedule during which replication is permitted to occur.

• An interval that determines how frequently replication occurs over this site link during the times when the schedule allows replication.
For more information about site link properties, see “Site Link Settings and Their Effects on Intersite
Replication” later in this section.
Default Site Link
When you install Active Directory on the first domain controller in the forest, an object named
DEFAULTIPSITELINK is created in the Sites container (in the IP container within the Inter-Site
Transports container). This site link contains only one site, Default-First-Site-Name.
Site Link Bridging
By default, site links for the same IP transport that have sites in common are bridged, which
enables the KCC to treat the set of associated site links as a single route. If you categorically do not
want the KCC to consider some routes, or if your network is not fully routed, you can disable
automatic bridging of all site links. When this bridging is disabled, you can create site link bridge
objects and manually add site links to a bridge. For more information about using site link bridges,
see “Bridging Site Links Manually” later in this section.

NTDS Site Settings Object


NTDS Site Settings objects (class nTDSSiteSettings) identify site-wide settings in Active Directory.
There is one NTDS Site Settings object per site in the Sites container. NTDS Site Settings attributes
control the following features and conditions:
• The identity of the ISTG role owner for the site. The KCC on this domain controller is responsible for identifying bridgehead servers. For more information about this role, see "Automated Intersite Topology Generation" later in this section.

• Whether domain controllers in the site cache membership of universal groups and the site in which to find a global catalog server for creating the cache.

• The default schedule that applies to connection objects. For more information about this schedule, see "Connection Object Schedule" later in this section.

Note

To allow for the possibility of network failure, which might cause one or more notifications to be missed, a default schedule of once per hour is applied to replication within a site. You do not need to manage this schedule.

Cross-Reference Objects
Cross-reference objects (class crossRef) store the location of directory partitions in the Partitions
container (CN=Partitions,CN=Configuration,DC=ForestRootDomainName). The contents of the
Partitions container are not visible by using Active Directory Sites and Services, but can be viewed
by using Adsiedit.msc to view the Configuration directory partition.
Active Directory replication uses cross-reference objects to locate the domain controllers that store
each directory partition. A cross-reference object is created during Active Directory installation to
identify each new directory partition that is added to the forest. Cross-reference objects store the
identity (nCName, the distinguished name of the directory partition where “NC” stands for “naming
context,” a synonym for “directory partition”) and location (dNSRoot, the DNS domain where
servers that store the particular directory partition can be reached) of each directory partition.
Note
In Windows Server 2003 Active Directory, a special attribute of the cross-reference object,
msDS-NC-Replica-Locations, identifies application directory partitions to the replication
system. For more information about how application directory partitions are replicated, see
“Topology Generation Phases” later in this section.
Top of page
Replication Transports
Replication transports provide the wire protocols that are required for data transfer. There are three
levels of connectivity for replication of Active Directory information:

• Uniform high-speed, synchronous RPC over IP within a site.

• Point-to-point, synchronous, low-speed RPC over IP between sites.

• Low-speed, asynchronous SMTP between sites.
The following rules apply to the replication transports:

• Replication within a site always uses RPC over IP.

• Replication between sites can use either RPC over IP or SMTP over IP.

• Replication between sites over SMTP is supported only for domain controllers of different domains. Domain controllers of the same domain must replicate by using the RPC over IP transport. Therefore, replication between sites over SMTP is supported only for schema, configuration, and global catalog replication, which means that domains can span sites only when point-to-point, synchronous RPC is available between sites.
The Inter-Site Transports container provides the means for mapping site links to the transport that
the link uses. When you create a site link object, you create it in either the IP container (which
associates the site link with the RPC over IP transport) or the SMTP container (which associates the
site link with the SMTP transport).
For the IP transport, a typical site link connects only two sites and corresponds to an actual WAN
link. An IP site link connecting more than two sites might correspond to an asynchronous transfer
mode (ATM) backbone that connects, for example, more than two clusters of buildings on a large
campus or connects several offices in a large metropolitan area that are connected by leased lines
and IP routers.

Synchronous and Asynchronous Communication


The RPC intersite and intrasite transport (RPC over IP within sites and between sites) and the SMTP
intersite transport (SMTP over IP between sites only) correspond to synchronous and asynchronous
communication methods, respectively. Synchronous communication favors fast, available
connections, while asynchronous communication is better suited for slow or intermittent
connections.
Synchronous Replication Over IP

The IP transport (RPC over IP) provides synchronous inbound replication. In the context of Active
Directory replication, synchronous communication implies that after the destination domain
controller sends the request for data, it waits for the source domain controller to receive the
request, construct the reply, and send the reply before it requests changes from any other domain
controllers; that is, inbound replication is sequential. Thus in synchronous transmission, the reply is
received within a short time. The IP transport is appropriate for linking sites in fully routed
networks.
Asynchronous Replication Over SMTP
The SMTP transport (SMTP over IP) provides asynchronous replication. In asynchronous replication,
the destination domain controller does not wait for the reply and it can have multiple asynchronous
requests outstanding at any particular time. Thus in asynchronous transmission, the reply is not
necessarily received within a short time. Asynchronous transport is appropriate for linking sites in
networks that are not fully routed and have particularly slow WAN links.
Note
Although asynchronous replication can send multiple replication requests in parallel, the received replication packets are queued on the destination domain controller and the changes are applied for only one partner and directory partition at a time.
Replication Queue
Suppose a domain controller has five inbound replication connections. As the domain controller
formulates change requests, either by a schedule being reached or from a notification, it adds a
work item for each request to the end of the queue of pending synchronization requests. Each
pending synchronization request represents one <source domain controller, directory partition>
pair, such as “synchronize the schema directory partition from DC1,” or “delete the ApplicationX
directory partition.”
When a work item has been received into the queue, notification and polling intervals do not apply
— the domain controller processes the item (begins synchronizing from that source) as soon as the
item reaches the front of the queue, and continues until either the destination is fully synchronized
with the source domain controller, an error occurs, or the synchronization is pre-empted by a
higher-priority operation.
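The queuing behavior can be modeled roughly as follows. This is an illustrative sketch only, not the actual replication engine; the partner and partition names are hypothetical.

from collections import deque

# Each work item is one <source domain controller, directory partition> pair.
queue = deque()

def request_changes(source_dc, partition):
    """Add a synchronization request to the end of the pending queue."""
    item = (source_dc, partition)
    if item not in queue:          # duplicate requests are not queued twice
        queue.append(item)

# Notifications and schedules only add work items; processing order is FIFO.
request_changes("DC1", "schema directory partition")
request_changes("DC2", "domain directory partition")
request_changes("DC1", "configuration directory partition")

while queue:
    source, partition = queue.popleft()
    # In the real service, synchronization continues until the destination is
    # fully up to date, an error occurs, or higher-priority work preempts it.
    print(f"synchronizing {partition} from {source}")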

SMTP Intersite Replication


When sites are on opposite ends of a WAN link (or the Internet), it is not always desirable — or
even possible — to perform synchronous, RPC-based directory replication. In some cases, the only
method of communication between two sites is e-mail. When connectivity is intermittent or when
end-to-end IP connectivity is not available (an intermediate site does not support RPC/IP
replication), replication must be possible across asynchronous, store-and-forward transports such as
SMTP.
In addition, where bandwidth is limited, it can be disadvantageous to force an entire replication
cycle of request for changes and transfer of changes between two domain controllers to complete
before another can begin (that is, to use synchronous replication). With SMTP, several cycles can be
processing simultaneously so that each cycle is being processed to some degree most of the time,
as opposed to receiving no attention for prolonged periods, which can result in RPC time-outs.
For intersite replication, SMTP replication substitutes mail messaging for the RPC transport. The
message syntax is the same as for RPC-based replication. There is no change notification for SMTP–
based replication, and scheduling information for the site link object is used as follows:
• By default, SMTP replication ignores the Replication Available and Replication Not Available settings on the site link schedule in Active Directory Sites and Services (the information that indicates when these sites are connected). Replication occurs according to the messaging system schedule.

• Within the scope of the messaging system schedule, SMTP replication uses the replication interval that is set on the SMTP site link to indicate how often the server requests changes. The interval (Replicate every ____ minutes) is set in 15-minute increments on the General tab in site link Properties in Active Directory Sites and Services.
The underlying SMTP messaging system is responsible for message routing between SMTP servers.
SMTP Replication and Intersite Messaging
Intersite Messaging is a Windows 2000 Server and Windows Server 2003 component that is enabled
when Active Directory is installed. Intersite Messaging allows for multiple transports to be used as
add-ins to the Intersite Messaging architecture. Intersite Messaging enables messaging
communication that can use SMTP servers other than those that are dedicated to processing e-mail
applications.
When the forest has a functional level of Windows 2000, Intersite Messaging also provides services
to the KCC in the form of querying the available replication paths. In addition, Net Logon queries
the connectivity data in Intersite Messaging when calculating site coverage. By default, Intersite
Messaging rebuilds its database once a day, or when required by a site link change.
When the forest has a functional level of Windows Server 2003, the KCC does not use Intersite
Messaging for calculating the topology. However, regardless of forest functional level, Intersite
Messaging is still required for SMTP replication, DFS, universal group membership caching, and Net
Logon automatic site coverage calculations. Therefore, if any of these features are in use, do not
stop Intersite Messaging.
For more information about site coverage and how automatic site coverage is calculated, see “How
DNS Support for Active Directory Works.” For more information about DFS, see “DFS Technical
Reference.”
Requirements for SMTP Replication
The KCC does not create connections that use SMTP until the following requirements are met:

• Internet Information Services (IIS) is installed on both bridgehead servers.

• An enterprise certification authority (CA) is installed and configured on your network. The certificate authority signs and encrypts SMTP messages that are exchanged between domain controllers, ensuring the authenticity of directory updates. Specifically, a domain controller certificate must be present on the replicating domain controllers. The replication request message, which contains no directory data, is not encrypted. The replication reply message, which does contain directory data, is encrypted using a key length of 128 bits.

• The sites are connected by SMTP site links.

• The site link path between the sites has a lower cost than any IP/RPC site link that can reach the SMTP site.

• You are not attempting to replicate writable replicas of the same domain (although replication of global catalog partial replicas is supported).

• Each domain controller is configured to receive mail.
You must also determine if mail routing is necessary. If the two replicating domain controllers have
direct IP connectivity and can send mail to each other, no further configuration is required.
However, if the two domain controllers must go through mail gateways to deliver mail to each
other, you must configure the domain controller to use the mail gateway.
Note
RPC is required for replicating the domain to a new domain controller and for installing
certificates. If RPC is not available to the remote site, the domain must be replicated and
certificates must be installed over RPC in a hub site and the domain controller then shipped to the
remote site.
Comparison of SMTP and RPC Replication
The following characteristics apply to both SMTP and RPC with respect to Active Directory
replication:

• For replication between sites, data that is replicated through either transport is compressed.

• Active Directory can respond with only a fixed (maximum) number of changes per change request, based on the size of the replication packet. The size of the replication packet is configurable. For information about configuring the replication packet size, see "Replication Packet Size" later in this section.

• Active Directory can apply a single set of changes at a time for a specific directory partition and replication partner.

• The response data (changes) are transported in one or many frames, based on the total number of changed or new values.

• TCP transports the data portion by using the same algorithm for both SMTP and RPC.

• If transmission of the data portion fails, complete retransmission is necessary.


Point-to-point synchronous RPC replication is available between sites to allow the flexibility of
having domains that span multiple sites. RPC is best used between sites that are connected by WAN
links because it involves lower latency. SMTP is best used between sites where RPC over IP is not
possible. For example, SMTP can be used by companies that have a network backbone that is not
based on TCP/IP, such as companies that use an X.400 backbone.
Active Directory replication uses both transports to implement a request-response mechanism.
Active Directory issues requests for changes and replies to requests for changes. RPC maps these
requests into RPC requests and RPC replies. SMTP, on the other hand, actually uses long-lived TCP
connections (or X.400-based message transfer agents in non-TCP/IP networks) to deliver streams of
mail in each direction. Thus, RPC transport expects a response to any request immediately and can
have a maximum of one active inbound RPC connection to a directory partition replica at a time.
The SMTP transport expects much longer delays between a request and a response. As a result,
multiple inbound SMTP connections to a directory partition replica can be active at the same time,
provided the requests are all for a different source domain controller or, for the same source domain
controller, a different directory partition. For more information, see “Synchronous and Asynchronous
Communication” earlier in this section.

Replication Packet Size


Replication packet sizes are computed on the basis of memory size unless the domain controller has more than 1 gigabyte (GB) of RAM. By default, the system limits the packet size as follows:
• The packet size in bytes is 1/100th the size of RAM, with a minimum of 1 MB and a maximum of 10 MB.
• The packet size in objects is 1/1,000,000th the size of RAM, with a minimum of 100 objects and a maximum of 1,000 objects. For general estimates when this entry is not set, assume an approximate packet size of 100 objects.
There is one exception: the value of the Replicator async inter site packet size (bytes) registry
entry is always 1 MB if it is not set (that is, when the default value is in effect). Many mail systems
limit the amount of data that can be sent in a mail message (2 MB to 4 MB is common), although
most Windows-based mail systems can handle large 10-MB mail messages.
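The following is a minimal Python sketch of these default limits, useful only for estimating what a given amount of RAM yields; the clamping to 1 MB/10 MB and 100/1,000 objects follows the rules above, and the 512-MB figure is purely illustrative.

def default_replication_packet_limits(ram_bytes):
    # Packet size in bytes: 1/100th of RAM, clamped to the 1 MB - 10 MB range.
    # Packet size in objects: 1/1,000,000th of RAM, clamped to 100 - 1,000.
    mb = 1024 * 1024
    max_bytes = min(max(ram_bytes // 100, 1 * mb), 10 * mb)
    max_objects = min(max(ram_bytes // 1_000_000, 100), 1_000)
    return max_bytes, max_objects

# Example: a domain controller with 512 MB of RAM yields roughly a 5-MB,
# 536-object packet limit.
print(default_replication_packet_limits(512 * 1024 * 1024))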

Overriding these memory-based values might be beneficial in advanced bandwidth management
scenarios. You can edit the registry to set the maximum packet size.
Note
If you must edit the registry, use extreme caution. Registry information is provided here as a reference for use by only highly skilled directory service administrators. It is recommended that you do not directly edit the registry unless, as in this case, there is no Group Policy or other Windows tool to accomplish the task. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. Storage of incorrect values can result in unrecoverable errors in the system.
Setting the maximum packet size requires adding or modifying entries in the following registry path
with the REG_DWORD data type:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters. These entries can
be used to determine the maximum number of objects per packet and maximum size of the
packets. The minimum values are indicated as the lowest value in the range.
For RPC replication within a site:
• Replicator intra site packet size (objects): Range >=2
• Replicator intra site packet size (bytes): Range >=10 KB
For RPC replication between sites:
• Replicator inter site packet size (objects): Range >=2
• Replicator inter site packet size (bytes): Range >=10 KB
For SMTP replication between sites:
• Replicator async inter site packet size (objects): Range >=2
• Replicator async inter site packet size (bytes): Range >=10 KB
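As a rough illustration of how one of these entries might be set from a script, the following Python sketch uses the standard winreg module; it assumes an elevated session on the domain controller, and the value 800 is purely illustrative, not a recommendation.

import winreg

NTDS_PARAMS = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

# Set an illustrative maximum of 800 objects per intersite RPC replication packet.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, NTDS_PARAMS, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Replicator inter site packet size (objects)", 0,
                      winreg.REG_DWORD, 800)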

Transport Type
The transportType attribute of a connection object specifies which network transport is used when
the connection is used for replication. The transport type receives its value from the distinguished
name of the container in the configuration directory partition that contains the site link over which
the connection occurs, as follows:
• Connection objects that use TCP/IP have the transportType value of CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=ForestRootDomainName.
• Connection objects that use SMTP have the transportType value of CN=SMTP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=ForestRootDomainName.
• For intrasite connections, transportType has no value; Active Directory Sites and Services shows the transport of “RPC” for connections that are from servers in the same site.
If you move a domain controller to a different site, the connection objects from servers in the site
from which it was moved remain, but the transport type is blank because it was an intrasite
connection. Because the connection has an endpoint outside of the site, the local KCC in the server’s
new site does not manage the connection. When the ISTG runs, if a blank transport type is found
for a connection that is from a server in a different site, the transportType value is automatically
changed to IP. The ISTG in the site determines whether to delete the connection object or to retain
it, in which case the server becomes a bridgehead server in its new site.
Top of page
Replication Between Sites
Replication between sites transfers domain updates when domain controllers for a domain are
located in more than one site. Intersite replication of configuration and schema changes is always
required when more than one site is configured in a forest. Replication between sites is
accomplished by bridgehead servers, which replicate changes according to site link settings.

Bridgehead Servers
When domain controllers for the same domain are located in different sites, at least one bridgehead
server per directory partition and per transport (IP or SMTP) replicates changes from one site to a
bridgehead server in another site. A single bridgehead server can serve multiple partitions per
transport and multiple transports. Replication within the site allows updates to flow between the
bridgehead servers and the other domain controllers in the site. Bridgehead servers help to ensure
that the data replicated across WAN links is not stale or redundant.
Any server that has a connection object with a “from” server in another site is acting as a
destination bridgehead. Any server that is acting as a source for a connection to another site acts as
a source bridgehead.
Note
You can identify a KCC-selected bridgehead server in Active Directory Sites and Services by viewing connection objects for the server (select the NTDS Settings object below the server object); if there are connections from servers in a different site or sites, the server represented by the selected NTDS Settings object is a bridgehead server. If you have Windows Support Tools installed, you can see all bridgehead servers by using the command repadmin /bridgeheads.
KCC selection of bridgehead servers guarantees that the selected bridgehead servers are capable of replicating all directory partitions that are needed in the site, including partial global catalog partitions. By
default, bridgehead servers are selected automatically by the KCC on the domain controller that
holds the ISTG role in each site. If you want to identify the domain controllers that can act as
bridgehead servers, you can designate preferred bridgehead servers, from which the ISTG selects
all bridgehead servers. Alternatively, if the ISTG is not used to generate the intersite topology, you
can create manual intersite connection objects on domain controllers to designate bridgehead
servers.
In sites that have at least one domain controller that is running Windows Server 2003, the ISTG can
select bridgehead servers from all eligible domain controllers for each directory partition that is
represented in the site. For example, if three domain controllers in a site store replicas of the same
domain and domain controllers for this domain are also located in three or more other sites, the
ISTG can spread the inbound connection objects from those sites among all three domain
controllers, including those that are running Windows 2000 Server.
In Windows 2000 forests, a single bridgehead server per directory partition and per transport is
designated as the bridgehead server that is responsible for intersite replication of that directory
partition. Therefore, for the preceding example, only one of the three domain controllers would be
designated by the ISTG as a bridgehead server for the domain, and all four connection objects from
the four other sites would be created on the single bridgehead server. In large hub sites, a single
domain controller might not be able to adequately respond to the volume of replication requests
from perhaps thousands of branch sites.

For more information about how the KCC selects bridgehead servers in Windows Server 2003, see
“Bridgehead Server Selection” later in this section.

Compression of Replication Data


Intersite replication is compressed by default. Compressing replication data allows the data to be
transferred over WAN links more quickly, thereby conserving network bandwidth. The cost of this
benefit is an increase in CPU utilization on bridgehead servers.
By default, replication data is compressed under the following conditions:
• Replication of updates between domain controllers in different sites.
• Replication of Active Directory to a newly created domain controller.
A new compression algorithm is employed by bridgehead servers that are running Windows
Server 2003. The new algorithm improves replication speed by operating between two and ten
times faster than the Windows 2000 Server algorithm.
Windows 2000 Server Compression
The compression algorithm that is used by domain controllers that are running Windows 2000
Server achieves a compression ratio of approximately 75% to 85%. The cost of this compression in
terms of CPU utilization can be as high as 50% for intersite Active Directory replication. In some
cases, the CPUs on bridgehead servers that are running Windows 2000 Server can become
overwhelmed with compression requests, compounded by the need to service outbound replication
partners. In a worst case scenario, the bridgehead server becomes so overloaded that it cannot
keep up with outbound replication. This scenario is usually coupled with a replication topology issue
where a domain controller has more outbound partners than necessary or the replication schedule
was overly aggressive for the number of direct replication partners.
Note
If a bridgehead server has too many replication partners, the KCC logs event ID 1870 in the Directory Service log, indicating the current number of partners and the recommended number of partners for the domain controller.
Windows Server 2003 Compression
On domain controllers that are running Windows Server 2003, compression quality is comparable to
Windows 2000 but the processing burden is greatly decreased. The Windows Server 2003 algorithm
produces a compression ratio of approximately 60%, which is slightly less compression than is
achieved by the Windows 2000 Server ratio, but which significantly reduces the processing load on
bridgehead servers. The new compression algorithm provides a good compromise by significantly
reducing the CPU load on bridgehead servers, while only slightly increasing the WAN traffic. The
new algorithm reduces the time taken by compression from approximately 60% of replication time
to 20%.
The Windows Server 2003 compression algorithm is used only when both bridgehead servers are
running Windows Server 2003. If a bridgehead server that is running Windows Server 2003
replicates with a bridgehead server that is running Windows 2000 Server, then the Windows 2000
compression algorithm is used.
Reverting to Windows 2000 Compression
For slow WAN links (for example, 64 Kbps or less), if more compression is preferable to a decrease in computation time, you can change the compression algorithm to the Windows 2000 algorithm. The compression algorithm is controlled by the REG_DWORD registry entry HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\Replicator compression algorithm. By editing this registry entry, you can change the algorithm that is used for compression to the Windows 2000 algorithm.
Note
If you must edit the registry, use extreme caution. Registry information is provided here as a reference for use by only highly skilled directory service administrators. It is recommended that you do not directly edit the registry unless, as in this case, there is no Group Policy or other Windows tool to accomplish the task. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. Storage of incorrect values can result in unrecoverable errors in the system.
The default value is 3, which indicates that the Windows Server 2003 algorithm is in effect. If you change the value to 2, the Windows 2000 algorithm is used for compression. However, switching to the Windows 2000 algorithm is not recommended unless both bridgehead domain controllers serve relatively few branches and have ample CPU capacity (for example, more than dual 850-megahertz [MHz] processors).
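A minimal Python sketch of that change, assuming an elevated session on the bridgehead server; it reads the current value (treating an absent value as the default of 3) and then writes 2 to select the Windows 2000 algorithm.

import winreg

NTDS_PARAMS = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"
VALUE_NAME = "Replicator compression algorithm"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, NTDS_PARAMS, 0,
                        winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
    try:
        current, _ = winreg.QueryValueEx(key, VALUE_NAME)
    except FileNotFoundError:
        current = 3  # value not set, so the Windows Server 2003 default applies
    print("current algorithm:", current)
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 2)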

Site Link Settings and Their Effects on Intersite Replication


In Active Directory Sites and Services, the General tab of the site link Properties contains the
following options for configuring site links to control the replication topology:

• A list of two or more sites to be connected.
• A single numeric cost that is associated with communication over the link. The default cost is 100, but you can assign higher cost values to represent more expensive transmission. For example, sites that are connected by low-speed or dial-up connections would have high-cost site links between them. Sites that are well connected through backbone lines would have low-cost site links. Where multiple routes or transports exist between two sites, the least expensive route and transport combination is used.
• A schedule that determines days and hours during which replication can occur over the link (the link is available). For example, you might use the default (100 percent available) schedule on most links, but block replication traffic during peak business hours on links to certain branches. By blocking replication, you give priority to other traffic, but you also increase replication latency.
Note
Scheduling information is ignored by site links that use the SMTP transport; the mail is stockpiled and then exchanged at the times that are configured for your mail infrastructure.
• An interval in minutes that determines how often replication can occur (default is every 180 minutes, or 3 hours). The minimum interval is 15 minutes. If the interval exceeds the time allowed by the schedule, replication occurs once at the scheduled time.
A site can be connected to other sites by any number of site links. For example, a hub site has site
links to each of its branch sites. Each site that contains a domain controller in a multisite directory
must be connected to at least one other site by at least one site link; otherwise, it cannot replicate
with domain controllers in any other site.
The following diagram shows two sites that are connected by a site link. Domain controllers DC1
and DC2 belong to the same domain and are acting as partner bridgehead servers. When topology
generation occurs, the ISTG in each site creates an inbound connection object on the bridgehead
server in its site from the bridgehead server in the opposite site. With these objects in place,
replication can occur according to the settings on the SB site link.
Connections Between Domain Controllers in Two Sites that Are Connected by a Site Link

Site Link Cost
The ISTG uses the cost settings on site links to determine the route of replication between three or
more sites that replicate the same directory partition. The default cost value on a site link object
is 100. You can assign lower or higher cost values to site links to favor inexpensive connections over expensive ones. Certain applications and services, such as domain controller
Locator and DFS, also use site link cost information to locate nearest resources. For example, site
link cost can be used to determine which domain controller is contacted by clients located in a site
that does not include a domain controller for the specified domain. The client contacts the domain
controller in a different site according to the site link that has the lowest cost assigned to it.
Cost is usually assigned not only on the basis of the total bandwidth of the link, but also on the
availability, latency, and monetary cost of the link. For example, a 128-kilobits per second (Kbps)
permanent link might be assigned a lower cost than a dial-up 128-Kbps dual ISDN link because the
dial-up ISDN link introduces delay, and therefore replication latency, while the links are being established or removed. Furthermore, in this example, the permanent link might have a fixed
monthly cost, whereas the ISDN line is charged according to actual usage. Because the company is
paying up-front for the permanent link, the administrator might assign a lower cost to the
permanent link to avoid the extra monetary cost of the ISDN connections.
The method used by the ISTG to determine the least-cost path from each site to every other site for
each directory partition is more efficient when the forest has a functional level of Windows
Server 2003 than it is at other levels. For more information about how the KCC computes replication
routes, see “Automated Intersite Topology Generation” later in this section. For more information
about domain controller location, see “How DNS Support for Active Directory Works.”
Transitivity and Automatic Site Link Bridging
By default, site links are transitive, or “bridged.” If site A has a site link in common with site B, site B also has a site link in common with site C, and the two site links are bridged, domain controllers in
site A can replicate directly with domain controllers in site C under certain conditions, even though
there is no site link between site A and site C. In other words, the effect of bridged site links is that
replication between sites in the bridge is transitive.
The setting that implements automatic site link bridges is Bridge all site links, which is found in
Active Directory Sites and Services in the properties of the IP or SMTP intersite transport containers.
The default bridging of site links occurs automatically and no directory object represents the default
bridge. Therefore, in the common case of a fully routed IP network, you do not need to create any
site link bridge objects.
Transitivity and Rerouting
For a set of bridged site links, where replication schedules in the respective site links overlap
(replication is available on the site links during the same time period), connection objects can be
automatically created, if needed, between sites that do not have site links that connect them
directly. All site links for a specific transport implicitly belong to a single site link bridge for that
transport.
Site link transitivity enables the KCC to re-route replication when necessary. In the next diagram, a
domain controller that can replicate the domain is not available in Seattle. In this case, because the
site links are transitive (bridged) and the schedules on the two site links allow replication at the
same time, the KCC can re-route replication by creating connections between DC3 in Portland and
DC2 in Boston. Connections between domain controllers in Portland and Boston might also be
created when a domain controller in Portland is a global catalog server, but no global catalog server
exists in the Seattle site and the Boston site hosts a domain that is not present in the Seattle site.
In this case, connections can be created between Portland and Boston to replicate the global catalog
partial, read-only replica.
Note
Overlapping schedules are required for site link transitivity, even when Bridge all site links is enabled. In the example, if the site link schedules for SB and PS do not overlap, no connections are possible between Boston and Portland.
Transitive Replication when Site Links Are Bridged, Schedules Overlap, and Replication
Must Be Rerouted

In the preceding diagram, creating a third site link to connect the Boston and Portland sites is
unnecessary and counterproductive because of the way that the KCC uses cost to route replication.
In the configuration that is shown, the KCC uses cost to choose either the route between Portland
and Seattle or the route between Portland and Boston. If you wanted the KCC to use the route
between Portland and Boston, you would create a site link between Portland and Boston instead of
the site link between Portland and Seattle.
Aggregated Site Link Cost and Routing
When site links are bridged, the cost of replication from a domain controller at one end of the bridge
to a domain controller at the other end is the sum of the costs on each of the intervening site links.
For this reason, if a domain controller in an interim site stores the directory partition that is being
replicated, the KCC will route replication to the domain controller in the interim site rather than to
the more distant site. The domain controller in the more distant site in turn receives replication from
the interim site (store-and-forward replication). If the schedules of the two site links overlap, this
replication occurs in the same period of replication latency.
The following diagram illustrates an example where two site links connecting three sites that host
the same domain are bridged automatically (Bridge all site links is enabled). The aggregated cost
of directly replicating between Portland and Boston illustrates why the KCC routes replication from
Portland to Seattle and from Seattle to Boston in a store-and-forward manner. Given the choice
between replicating at a cost of 4 from Seattle or a cost of 7 from Boston, the ISTG in Portland
chooses the lower cost and creates the connection object on DC3 from DC1 in Seattle.
Bridged Site Links Routing Replication Between Three Sites According to Cost

In the preceding diagram, if DC3 in Portland needs to replicate a directory partition that is hosted
on DC2 in Boston but not by any domain controller in Seattle, or if the directory partition is hosted
in Seattle but the Seattle site cannot be reached, the ISTG creates the connection object from DC2
to DC3.
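The route selection described above can be pictured as a shortest-path computation over the bridged site links. The following Python sketch is illustrative only; it uses the costs cited in this example (PS = 4, SB = 3) and models the idea of summing costs across bridged links, not the ISTG's actual code.

import heapq

site_links = {("Portland", "Seattle"): 4, ("Seattle", "Boston"): 3}

def cheapest_costs(origin):
    # Dijkstra over the bridged site links: total cost to every reachable site.
    graph = {}
    for (a, b), cost in site_links.items():
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    costs, frontier = {}, [(0, origin)]
    while frontier:
        cost, site = heapq.heappop(frontier)
        if site in costs:
            continue
        costs[site] = cost
        for neighbor, link_cost in graph.get(site, []):
            if neighbor not in costs:
                heapq.heappush(frontier, (cost + link_cost, neighbor))
    return costs

costs = cheapest_costs("Portland")     # {'Portland': 0, 'Seattle': 4, 'Boston': 7}
candidates = ["Seattle", "Boston"]     # sites that hold the needed partition
print(min(candidates, key=costs.get))  # Seattle: the lower-cost source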
Significance of Overlapping Schedules
In the preceding diagram, to replicate the same domain that is hosted in all three sites, the Portland
site replicates directly with Seattle and Seattle replicates directly with Boston, transferring
Portland’s changes to Boston, and vice versa, through store-and-forward replication. Whether the
schedules overlap has the following effects:
• PS and SB site link schedules have replication available during at least one common hour of the schedule:
  • Replication between these two sites occurs in the same period of replication latency, being routed through Seattle.
  • If Seattle is unavailable, connections can be created between Portland and Boston.
• PS and SB site link schedules have no common time:
  • Replication of changes between Portland and Boston reaches its destination in the next period of replication latency after reaching Seattle.
  • If Seattle is unavailable, no connections are possible between Portland and Boston.
Note
If Bridge all site links is disabled, a connection is never created between Boston and Portland,
regardless of schedule overlap, unless you manually create a site link bridge.
Site Link Changes and Replication Path
The path that replication takes between sites is computed from the information that is stored in the
properties of the site link objects. When a change is made to a site link setting, the following events
must occur before the change takes effect:
• The site link change must replicate to the ISTG of each site by using the previous replication topology.
• The KCC must run on each ISTG.
As the path of connections is transitively figured through a set of site links, the attributes (settings)
of the site link objects are combined along the path as follows:

• Costs are added together.
• The replication interval is the maximum of the intervals that are set for the site links along the path.
• The options, if any are set, are computed by using the AND operation.
Note
Options are the values of the options attribute on the site link object. The value of this attribute determines special behavior of the site link, such as reciprocal replication and intersite change notification.
Thus the site link schedule is the overlap of all of the schedules of the subpaths. If none of the schedules overlap, the path is not used.
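The following Python sketch illustrates how these rules combine along a path; schedules are modeled as sets of available hours, options as integer bit flags, and the sample values are invented for the illustration.

def combine_site_links(links):
    # links: dicts with 'cost', 'interval' (minutes), 'schedule' (set of hours),
    # and 'options' (bit flags), one per site link along the path.
    combined_options = None
    for link in links:
        combined_options = (link["options"] if combined_options is None
                            else combined_options & link["options"])
    return {
        "cost": sum(link["cost"] for link in links),                  # costs add
        "interval": max(link["interval"] for link in links),          # maximum interval
        "schedule": set.intersection(*(link["schedule"] for link in links)),  # overlap
        "options": combined_options,                                  # AND of options
    }

path = [
    {"cost": 4, "interval": 30, "schedule": set(range(12, 24)), "options": 0b101},
    {"cost": 3, "interval": 60, "schedule": set(range(17, 20)), "options": 0b001},
]
print(combine_site_links(path))  # cost 7, interval 60, schedule {17, 18, 19}, options 1
# An empty combined schedule would mean the path is not used.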
Bridging Site Links Manually
If your IP network is composed of IP segments that are not fully routed, you can disable Bridge all
site links for the IP transport. In this case, all IP site links are considered nontransitive, and you
can create and configure site link bridge objects to model the actual routing behavior of your
network. A site link bridge has the effect of providing routing for a disjoint network (networks that
are separate and unaware of each other). When you add site links to a site link bridge, all site links
within the bridge can route transitively.
A site link bridge object represents a set of site links, all of whose sites can communicate through
some transport. Site link bridges are necessary if both of the following conditions are true:
• A site contains a domain controller that hosts a domain directory partition that is not hosted by a domain controller in an adjacent site (a site that is in the same site link).
• That domain directory partition is hosted on a domain controller in at least one other site in the forest.
Note
Site link bridge objects are used by the KCC only when the Bridge all site links setting is disabled. Otherwise, site link bridge objects are ignored.
Site link bridges can also be used to diminish potentially high CPU overhead of generating a large
transitive replication topology. In very large networks, transitive site links can be an issue because
the KCC considers every possible connection in the bridged network, and selects only one.
Therefore, in a Windows 2000 forest that has a very large network or a Windows Server 2003 forest
that consists of an extremely large hub-and-spoke topology, you can reduce KCC-related CPU
utilization and run time by turning off Bridge all site links and creating manual site link bridges
only where they are required.
Note
Turning off Bridge all site links might affect the ability of DFS clients to locate DFS servers in the closest site. An ISTG that is running Windows Server 2003 relies on the Bridge all site links setting being turned on to generate the intersite cost matrix that DFS requires for its site-costing functionality. An ISTG running Windows Server 2003 with Service Pack 1 (SP1) can accommodate the DFS requirements with Bridge all site links turned off. For more information about turning off this functionality while accommodating DFS, see "DFS Site Costing and Windows Server 2003 SP1 Site Options" later in this section. For more information about site link cost and DFS, see “DFS Technical Reference.”
You create a site link bridge object for a specific transport by specifying two or more site links for
the specified transport.
Requirements for manual site link bridges
Each site link in a manual site link bridge must have at least one site in common with another site
link in the bridge. Otherwise, the bridge cannot compute the cost from sites in one link to the sites
in other links of the bridge. If bridgehead servers that are capable of the transport that is used by
the site link bridge are not available in two linked sites, a route is not available.
Manual site link bridge behavior

Separate site link bridges, even for the same transport, are independent. To illustrate this
independence, consider the following conditions:

• Four sites have domain controllers for the same domain: Portland, Seattle, Detroit, and Boston.
• Three site links are configured: Portland-Seattle (PS), Seattle-Detroit (SD), and Detroit-Boston (DB).
• Two separate manual site link bridges link the outer site links PS and DB with the inner site link SD.
The presence of the PS-SD site link bridge means that an IP message can be sent transitively from the Portland site to the Detroit site with cost 4 + 3 = 7. The presence of the SD-DB site link bridge means that an IP message can be sent transitively from Seattle to Boston at a cost of 3 + 2 = 5. However, because there is no transitivity between the PS-SD and SD-DB site link bridges, an IP message cannot be sent between Portland and Boston with cost 4 + 3 + 2 = 9, or at any cost.
In the following diagram, the two manual site link bridges mean that Boston is able to replicate directly only with Detroit and Seattle, and Portland is able to replicate directly only with Seattle and Detroit.
Note
If you do not need direct replication between Portland and Detroit, you can create only the SD-DB site link bridge. By excluding the PS site link from any bridge, you ensure that connections are neither created nor considered by the KCC between Portland and Detroit.
Two Site Link Bridges that Are Not Transitive

In the diagram, connection objects are not possible between DC4 in Boston and DC3 in Portland because the two site link bridges are not transitive. For connection objects to be possible between DC3 and DC4, the site link DB must be added to the PS-SD site link bridge. In this case, the cost of replication between DC3 and DC4 is 9.
Note
Cost is applied differently to a site link bridge than to a site link that contains more than two sites. To use the preceding example, if Seattle, Boston, and Portland are all in the same site link, the cost of replication between any two of the sites is the same.
Bridging site links manually is generally recommended for only large branch office deployments. For
more information about using manual site link bridging, see the “Windows Server 2003 Active
Directory Branch Office Deployment Guide.”
Site Link Schedule
Replication using the RPC transport between sites is scheduled. The schedule specifies one or many
time periods during which replication can occur. For example, you might schedule a site link for a
dial-up line to be available during off-peak hours (when telephone rates are low) and unavailable
during high-cost regular business hours. The schedule attribute of the site link object specifies the
availability of the site link. The default setting is that replication is always available.
Note
The Ignore schedules setting on the IP container is equivalent to replication being always available. If Ignore schedules is selected, replication occurs at the designated intervals but ignores any schedule.
If replication goes through multiple site links, there must be at least one common time period
(overlap) during which replication is available; otherwise, the connection is treated as not available.
For example, if site link AB has a schedule of 18:00 hours to 24:00 hours and site link BC has a
schedule of 17:00 hours to 20:00 hours, the resulting overlap is 18:00 hours through 20:00 hours,
which is the intersection of the schedules for site link AB and site link BC. During the time in which
the schedules overlap, replication can occur from site A to site C even if a domain controller in the
intermediate site B is not available. If the schedules do not overlap, replication from the
intermediate site to the distant site continues when the next replication schedule opens on the
respective site link.
Note
Cost considerations also affect whether connections are created. However, if the site link schedules do not overlap, the cost is irrelevant.
Scheduling across time zones
When scheduling replication across time zones, consider the time difference to ensure that
replication does not interfere with peak production times in the destination site.
Domain controllers store time in Coordinated Universal Time (UTC). When viewed through the
Active Directory Sites and Services snap-in, time settings in site link object schedules are displayed
according to the local time of the computer on which the snap-in is being run. However, replication
occurs according to UTC.
For example, suppose Seattle adheres to Pacific Standard Time (PST) and Japan adheres to Japan
Standard Time (JST), which is 17 hours later. If a schedule is set on a domain controller in Seattle
and the site link on which the schedule is set connects Seattle and Tokyo, the actual time of
replication in Tokyo is 17 hours later.
If the schedule is set to begin replication at 10:00 PM PST in Seattle, the conversion can be
computed as follows:

• Convert 10:00 PM PST to 22:00 PST military time.
• Add 8 hours to arrive at 06:00 UTC, the following day.
• Add 9 hours to arrive at 15:00 JST.
• 15:00 JST converts to 3:00 PM.


Thus, when replication begins at 10:00 at night in Seattle, it is occurring in Tokyo at 3:00 in the afternoon the following day. By scheduling replication a few hours later in
Seattle, you can avoid replication occurring during working hours in Japan.
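A quick way to check such conversions is Python's zoneinfo database (Python 3.9 or later; on Windows the tzdata package supplies the time zone data). The date below is arbitrary, and the Los Angeles and Tokyo zones stand in for the Seattle and Tokyo sites.

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# 10:00 P.M. in Seattle (Pacific Standard Time on this date).
seattle_start = datetime(2003, 1, 15, 22, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

print(seattle_start.astimezone(timezone.utc))            # 2003-01-16 06:00:00+00:00
print(seattle_start.astimezone(ZoneInfo("Asia/Tokyo")))  # 2003-01-16 15:00:00+09:00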
Schedule implementation
The times that you can set in the Schedule setting on the site link are in one-hour increments. For
example, you can schedule replication to occur between 00:00 hours and 01:00 hours, between
01:00 hours and 02:00 hours, and so forth. However, each block in the actual connection schedule
is 15 minutes. For this reason, when you set a schedule of 01:00 hours to 02:00 hours, you can
assume that replication is queued at some point between 01:00 hours and 01:14:59 hours.

Note
RPC synchronous inbound replication is serialized so that if the server is busy replicating this directory partition from another source, replication from a different source does not begin until the first synchronization is complete. SMTP asynchronous replication is processed serially by order of arrival, with multiple replication requests queued simultaneously.
Specifically, a replication event is queued at time t + n, where t is each start time that results from applying the replication interval across the schedule and n is a pseudo-random number from 1 minute through 15 minutes.
For example, if the site link indicates that replication can occur from 02:00 hours
through 07:00 hours, and the replication interval is 2 hours (120 minutes), t is 02:00 hours,
04:00 hours, and 06:00 hours. A replication event is queued on the destination domain controller
between 02:00 hours and 02:14:59 hours, and another replication event is queued between
04:00 hours and 04:14:59 hours. Assuming that the first replication event that was queued is
complete, another replication event is queued between 06:00 hours and 06:14:59 hours. If the
synchronization took longer than two hours, the second synchronization would be ignored because
an event is already in the queue.
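The queuing pattern in this example can be sketched as follows in Python; the schedule window, interval, and random offset mirror the description above, although the sketch does not model the rule that a pending event suppresses the next one.

import random

def queued_replication_times(start_hour, end_hour, interval_minutes, seed=None):
    rng = random.Random(seed)
    times = []
    minute = start_hour * 60                  # each t across the schedule
    while minute < end_hour * 60:
        offset = rng.randint(1, 15)           # the pseudo-random n
        times.append(f"{(minute + offset) // 60:02d}:{(minute + offset) % 60:02d}")
        minute += interval_minutes
    return times

# Schedule 02:00-07:00 with a 120-minute interval: one event queued shortly
# after 02:00, 04:00, and 06:00.
print(queued_replication_times(2, 7, 120))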
Replication can extend beyond the end of the schedule. A period of replication latency that starts
before the end of the schedule runs until completion, even if the period is still running when the
schedule no longer allows replication to be available.
Note
The replication queue is shared with other events, and the time at which replication takes place is approximate. Duplicate replication events are not queued for the same directory partition and transport.
Connection object schedule
Each connection object has a schedule that controls when (during what hours) and how frequently
(how many times per hour) replication can occur:

• None (no replication)

• Once per hour (the default setting)

• Twice per hour

• Four times per hour


The connection object schedule and interval are derived from one of two locations, depending on
whether it is an intrasite or intersite connection:
• Intrasite connections inherit a default schedule from the schedule attribute of the NTDS Site Settings object. By default, this schedule is always available and has an interval of one hour.
• Intersite connections inherit the schedule and interval from the site link.
Although intrasite replication is prompted by changes, intrasite connection objects inherit a default
schedule so that replication occurs periodically, regardless of whether change notification has been
received. The connection object schedule ensures that intrasite replication occurs if a notification message is lost or if notification does not take place because of network problems or because a domain controller becomes unavailable. The NTDS Site Settings schedule has a minimum replication interval
of 15 minutes. This minimum replication interval is not configurable and determines the smallest
interval that is possible for both intrasite and intersite replication (on a connection object or a site
link, respectively).
For intersite replication, the schedule is configured on the site link object, but the connection object
schedule actually determines replication; that is, the connection object schedule for an intersite
connection is derived from the site link schedule, which is applied through the connection object
schedule. Scheduled replication occurs independently of change notification.
Note
You do not need to configure the connection object schedule unless you are creating a manual intersite replication topology that does not use the KCC automatic connection objects.
The KCC uses a two-step process to compute the schedule of an intersite connection.
1. The schedules of the site links traversed by a connection are merged together.
2. This merged schedule is modified so that it is available at only certain periods. The length of
those periods is equal to the maximum replication interval of the site links traversed by this
connection.
By using Active Directory Sites and Services, you can manually revise the schedule on a connection
object, but such an override is effective for only administrator-owned connection objects.
Replication Interval
For each site link object, you can specify a value for the replication interval (frequency), which
determines how often replication occurs over the site link during the time that the schedule allows.
For example, if the schedule allows replication between 02:00 hours and 04:00 hours, and the
replication interval is set for 30 minutes, replication can occur up to four times during the scheduled
time.
The default replication interval is 180 minutes, or 3 hours. When the KCC creates a connection
between a domain controller in one site and a domain controller in another site, the replication
interval of the connection is the maximum interval along the minimum-cost path of site link objects
from one end of the connection to the other.
Interaction of Replication Schedule and Interval
When multiple site links are required to complete replication for all sites, the replication interval
settings on each site link combine to affect the entire length of the connection between sites. In
addition, when schedules on each site link are not identical, replication can occur only when the
schedules overlap.
Suppose that site A and site B have site link AB, and site B and site C have site link BC. When a
domain controller in site A replicates with a domain controller in site C, it can do so only as often as
the maximum interval that is set for site link AB and site link BC allows. The following table shows
the site link settings that determine how often and during what times replication can occur between
domain controllers in site A, site B, and site C.
Replication Interval and Schedule Settings for Two Site Links

Site Link Replication Interval Schedule

AB 30 minutes 12:00 hours to 04:00 hours

BC 60 minutes 01:00 hours to 05:00 hours


Given these settings, a domain controller in site A can replicate with a domain controller in site B
according to the AB site link schedule and interval, which is once every 30 minutes between the
hours of 12:00 and 04:00. However, assuming that there is no site link AC, a domain controller in
site A can replicate with a domain controller in site C between the hours of 01:00 and 04:00, which
is where the schedules on the two site links intersect. Within that timespan, they can replicate once
every 60 minutes, which is the greater of the two replication intervals.
Top of page
KCC and Topology Generation

The Knowledge Consistency Checker (KCC) is a dynamic-link library (DLL) that runs as a distributed
application on every domain controller. The KCC on each domain controller modifies data in its local
instance of the directory in response to forest-wide changes, which are made known to the KCC by
changes to data in the configuration directory partition.
The KCC generates and maintains the replication topology for replication within sites and between
sites by converting KCC-defined and administrator-defined (if any) connection objects into a
configuration that is understood by the directory replication engine. By default, the KCC reviews and
makes modifications to the Active Directory replication topology every 15 minutes to ensure
propagation of data, either directly or transitively, by creating and deleting connection objects as
needed. The KCC recognizes changes that occur in the environment and ensures that domain
controllers are not orphaned in the replication topology.
Operating independently, the KCC on each domain controller uses its own view of the local replica of
the configuration directory partition to arrive at the same intrasite topology. One KCC per site, the
ISTG, determines the intersite replication topology for the site. Like the KCC that runs on each
domain controller within a site, the instances of the ISTG in different sites do not communicate with
each other. They independently use the same algorithm to produce a consistent, well-formed
spanning tree of connections. Each site constructs its own part of the tree and, when all have run, a
working replication topology exists across the enterprise.
The predictability of all KCCs allows scalability by reducing communication requirements between
KCC instances. All KCCs agree on where connections will be formed, ensuring that redundant
replication does not occur and that all parts of the enterprise are connected.
The KCC performs two major functions:
• Configures appropriate replication connections (connection objects) on the basis of the existing cross-reference, server, NTDS settings, site, site link, and site link bridge objects and the current status of replication.
• Converts the connection objects that represent inbound replication to the local domain controller into the replication agreements that are actually used by the replication engine. These agreements, called replica links, accommodate replication of a single directory partition from the source to the destination domain controller.

Intervals at Which the KCC Runs


By default, the KCC runs its first replication topology check five minutes after the domain controller
starts. The domain controller then attempts initial replication with its intrasite replication partners. If
a domain controller is being used for multiple other services, such as DNS, WINS, or DHCP,
extending the delay before the first replication topology check can help ensure that all services have started before the KCC begins using CPU resources.
You can edit the registry to modify the interval between startup and the time the domain controller
first checks the replication topology.
Note
If you must edit the registry, use extreme caution. Registry information is provided here as a reference for use by only highly skilled directory service administrators. It is recommended that you do not directly edit the registry unless, as in this case, there is no Group Policy or other Windows tool to accomplish the task. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. Storage of incorrect values can result in unrecoverable errors in the system.

Modifying the interval between startup and the time the domain controller first checks the
replication topology requires changing the Repl topology update delay (secs) entry in
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters as appropriate:
• Value: Number of seconds to wait between the time Active Directory starts and the KCC runs for the first time.
• Default: 300 seconds (5 minutes)
• Data type: REG_DWORD


Thereafter, as long as services are running, the KCC on each domain controller checks the
replication topology every 15 minutes and makes changes as necessary.
Modifying the interval at which the KCC performs topology review requires changing the Repl
topology update period (secs) entry in
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters as appropriate:
• Value: Number of seconds between KCC topology updates
• Default: 900 seconds (15 minutes)
• Data type: REG_DWORD

Objects that the KCC Requires to Build the Replication Topology


The following objects, which are stored in the configuration directory partition, provide the
information required by the KCC to create the replication topology:
• Cross-reference. Each directory partition in the forest is identified in the Partitions container by a cross-reference object. The attributes of this object are used by the replication system to locate the domain controllers that store each directory partition.
• Server. Each domain controller in the forest is identified as a server object in the Sites container.
• NTDS Settings. Each server object that represents a domain controller has a child NTDS Settings object. Its presence identifies the server as having Active Directory installed. The NTDS Settings object must be present for the server to be considered by the KCC for inclusion in the replication topology.
• Site. The presence of the above objects also indicates to the KCC the site in which each domain controller is located for replication. For example, the distinguished name of the NTDS Settings object contains the name of the site in which the server object that represents the domain controller exists.
• Site link. A site link must be available between any set of sites, and its schedule and cost properties are evaluated for routing decisions.
• Site link bridge. If they exist, site link bridge objects and their properties are evaluated for routing decisions.
If the domain controller is physically located in one site but its server object is configured in a
different site, the domain controller will attempt intrasite replication with a replication partner that
is in the site of its server object. In this scenario, the improper configuration of servers in sites can
affect network bandwidth.
If a site object exists for a site that has no domain controllers, the KCC does not consider the site
when generating the replication topology.

Topology Generation Phases


The KCC generates the replication topology in two phases:
• Evaluation. During the evaluation phase, the KCC evaluates the current topology, determines whether replication failures have occurred with the existing connections, and constructs whatever new connection objects are required to complete the replication topology.
• Translation. During the translation phase, the KCC implements, or “translates,” the decisions that were made during the evaluation phase into agreements between the replication partners. During this phase, the KCC writes to the repsFrom attribute on the local domain controller (for intrasite topology) or on all bridgehead servers in a site (for intersite topology) to identify the replication partners from which each domain controller pulls replication. For more information about the information in the replication agreement, see “How the Active Directory Replication Model Works.”
KCC Modes and Scopes
Because individual KCCs do not communicate directly to generate the replication topology, topology
generation occurs within the scope of either a single domain controller or a single site. In
performing the two topology generation phases, the KCC has three modes of operation. The
following table identifies the modes and scope for each mode.
Modes and Scopes of KCC Topology Generation

KCC Mode: Intrasite
Performing Domain Controllers: All
Scope: Local server
Description: Evaluate all servers in a site and create connection objects locally on this server from servers in the same site that are adjacent to this server in the ring topology.

KCC Mode: Intersite
Performing Domain Controllers: One domain controller per site that has the ISTG role
Scope: Local site
Description: Evaluate the servers in all sites and create connection objects both locally and on other servers in the site from servers in different sites.

KCC Mode: Link translation
Performing Domain Controllers: All
Scope: Local server
Description: Translate connection objects into replica links (partnerships) for each server relative to each directory partition that it holds.
Topology Evaluation and Connection Object Generation
The KCC on a destination domain controller evaluates the topology by reading the existing
connection objects. For each connection object, the KCC reads attribute values of the NTDS Settings
object (class nTDSDSA) of the source domain controller (indicated by the fromServer value on the
connection object) to determine what directory partitions its destination domain controller has in
common with the source domain controller.
Topology evaluation for all domain controllers
To determine the connection objects that need to be generated, the KCC uses information stored in
the attributes of the NTDS Settings object that is associated with each server object, as follows:
• For all directory partitions, the multivalued attribute hasMasterNCs stores the distinguished names of all directory partitions that are stored on that domain controller.
• For all domain controllers, the value of the options attribute indicates whether that domain controller is configured to host the global catalog.
• The hasPartialReplicaNCs attribute contains the set of partial-replica directory partitions (global catalog read-only domain partitions) that are located on the domain controller that is represented by the server object.
Topology evaluation for domain controllers running Windows Server 2003

For all domain controllers that are running Windows Server 2003, the msDS-HasDomainNCs
attribute of the NTDS Settings object contains the name of the domain directory partition that is
hosted by the domain controller.
In forests that have the forest functional level of Windows Server 2003 or Windows Server 2003
interim, the following additional information is used by the KCC to evaluate the topology for
application directory partitions and to generate the needed connections:
• The linked multivalued attribute msDS-NC-Replica-Locations on cross-reference objects stores the distinguished names of NTDS Settings objects for all domain controllers that are configured to host a replica of the corresponding application directory partition.
Note
When you remove Active Directory from a server that hosts an application directory partition, its corresponding entry in this multivalued attribute is automatically dropped because msDS-NC-Replica-Locations is a linked attribute.
• Application directory partition replica locations are determined by matching the values of the hasMasterNCs attribute with the values of the msDS-NC-Replica-Locations linked multivalued attribute of cross-reference objects. The msDS-NC-Replica-Locations attribute holds distinguished name references to the NTDS Settings objects for domain controllers that have been configured to store replicas of the application directory partition. The msDS-NC-Replica-Locations attribute facilitates the enumeration of existing replicas for a given application directory partition. Connection objects can then be created between the domain controllers that hold matching replicas.
Be aware that due to replication latency, the configuration of replicas in attribute values does not
guarantee the existence of the replica on a given server. For example, you can designate a domain
controller as a global catalog server by clicking the Global Catalog check box on the NTDS Settings
object properties in Active Directory Sites and Services. However, until all of the partial domain
directory partitions have replicated to that domain controller and global-catalog-specific SRV records
are registered in DNS, it is not a functioning global catalog server (does not advertise as a global
catalog server in DNS). Similarly, observing the NTDS Settings name for a server in the msDS-NC-
Replica-Locations attribute on the cross-reference object does not indicate that the replica has
necessarily been fully replicated to that server.
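For illustration only, the cross-reference objects and their msDS-NC-Replica-Locations values can be read with an LDAP query; the sketch below uses the third-party ldap3 package, and the server name, credentials, and forest DN are placeholders.

from ldap3 import Server, Connection, SUBTREE

# Placeholder server and credentials; adjust for the target forest.
conn = Connection(Server("dc1.corp.example.com"),
                  user="EXAMPLE\\administrator", password="...", auto_bind=True)

conn.search(
    search_base="CN=Partitions,CN=Configuration,DC=corp,DC=example,DC=com",
    search_filter="(objectClass=crossRef)",
    search_scope=SUBTREE,
    attributes=["nCName", "msDS-NC-Replica-Locations"],
)
for entry in conn.entries:
    attrs = entry.entry_attributes_as_dict
    # Domain cross-references typically have no replica-location values;
    # application directory partitions list their configured replicas here.
    print(attrs.get("nCName"), attrs.get("msDS-NC-Replica-Locations", []))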
Connection Translation
All KCCs process their connection objects and translate them into connection agreements, also
called “replica links,” between pairs of domain controllers. At specified intervals, Active Directory
replicates data from the source domain controller to the destination for directory partitions that they
have in common. These replication agreements do not appear in the administrative tools; the
replication engine uses them internally to track the directory partitions that are to be replicated
from specified servers.
For each directory partition that two domain controllers have in common and that matches the full
and partial characteristics of a replication source, the KCC creates (or updates) a replication
agreement on the destination domain controller. Replication agreements take the form of entries for
each source domain controller in the repsFrom attribute on the topmost object of each directory
partition replica. This value is stored and updated locally on the domain controller and is not
replicated. The KCC updates this attribute each time it runs.
For example, suppose a connection object is created between two domain controllers from different
domains. Assuming that neither of these domain controllers is a global catalog server and neither
stores an application directory partition, the KCC identifies the only two directory partitions that the
domain controllers have in common — the schema directory partition and the configuration
directory partition. If a connection object links domain controllers in the same domain, at least three
directory partitions are replicated: the schema directory partition, the configuration directory
partition, and the domain directory partition.
In contrast, if the connection object that is created establishes replication between two domain
controllers that are global catalog servers, then in addition to the directory partitions the domain
controllers have in common, a partial replica of each additional domain directory partition in the
forest is also replicated between the two domain controllers over the same connection.
For more information about replication agreements, see “How the Active Directory Replication Model
Works.”
Read-only and Writable Replicas
When computing the replication topology, the KCC must consider whether a replica is writable or read-only. For each potential set of replication partners in the topology, the considerations are as follows (a brief sketch of these rules follows the list):
• A writable replica can receive updates from a corresponding writable replica.
• A read-only replica can receive updates from a corresponding writable replica.
• A read-only replica can receive updates from a corresponding read-only replica.
• A writable replica cannot receive updates from a corresponding read-only replica.
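A minimal sketch of these rules in Python, assuming only that the writability of each replica is known; the one disallowed pairing is a writable destination pulling from a read-only source.

def can_receive_updates(source_writable: bool, destination_writable: bool) -> bool:
    # Allowed in every case except a writable replica pulling from a read-only one.
    return not (destination_writable and not source_writable)

assert can_receive_updates(source_writable=True, destination_writable=True)
assert can_receive_updates(source_writable=True, destination_writable=False)
assert can_receive_updates(source_writable=False, destination_writable=False)
assert not can_receive_updates(source_writable=False, destination_writable=True)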
In Windows 2000 forests, for any one domain directory partition, the KCC calculates two topologies:
one for the writable replicas and one for the read-only replicas. This calculation allows redundant
connections for read-only replicas under certain conditions.
The improved Windows Server 2003 KCC spanning tree algorithm eliminates redundancy that can
occur in Windows 2000. The Windows Server 2003 algorithm computes only one topology with
slightly different behavior for replicating the global catalog. The KCC on a domain controller that is
not a global catalog server does not consider global catalog servers in its calculations for read-only
domain replicas because it never replicates read-only data from a global catalog server.

Automated Intrasite Topology Generation


For replication within a site, a topology is generated and then optimized so that no domain controller is more than three replication hops from any other. The means by which this three-hop maximum is achieved varies according to the
number of domain controllers that are hosted in the site as well as the presence of global catalog
servers. Generally, the intrasite topology is formed in a ring. The topology becomes more complex
as the number of servers increases. However, the KCC can accommodate thousands of domain
controllers in a site.
Simplified Ring Topology Generation
A simplified process for creating the topology for replication of a directory partition within a site begins as follows:
• The KCC generates a list of all servers in the site that hold that directory partition.
• These servers are connected in a ring.
• For each neighboring server in the ring from which the current domain controller is to replicate, the KCC creates a connection object if one does not already exist.
This simple approach guarantees a topology that tolerates a single failure. If a domain controller is
not available, it is not included in the ring that is generated by the list of servers. However, this
topology, with no other adjustments, accommodates only seven servers. Beyond this number, the
ring would require more than three hops for some servers.

The simplest case scenario — seven or fewer domain controllers, all in the same domain and site —
would result in the replication topology shown in the following diagram. The only directory partitions
to replicate are a single domain directory partition, the schema directory partition, and the
configuration directory partition. Those topologies are generated first, and at that point, sufficient
connections to replicate each directory partition have already been created.
In the next series of diagrams, the arrows indicate one-way or two-way replication of the type of
directory partitions indicated in the Legend.
Simple Ring Topology that Requires No Optimization

Because a ring topology is created for each directory partition, the topology might look different if
domain controllers from a second domain were present in the site. The next diagram illustrates the
topology for domain controllers from two domains in the same site with no global catalog servers
defined in the site.
Ring Topology for Two Domains in a Site that Has No Global Catalog Server

The next diagram illustrates replication between a global catalog server and three domains to which
the global catalog server does not belong. When a global catalog server is added to the site in
DomainA, additional connections are required to replicate updates of the other domain directory
partitions to the global catalog server. The KCC on the global catalog server creates connection
objects to replicate from domain controllers for each of the other domain directory partitions within
the site, or from another global catalog server, to update the read-only partitions. Wherever a
domain directory partition is replicated, the KCC also uses the connection to replicate the schema
and configuration directory partitions.
Note
Connection objects are generated independently for the configuration and schema directory partitions (one connection) and for the separate domain and application directory partitions, unless a connection from the same source to destination domain controllers already exists for one directory partition. In that case, the same connection is used for all (duplicate connections are not created).
Intrasite Topology for Site with Four Domains and a Global Catalog Server

Expanded Ring Topology Within a Site
When the number of servers in a site grows beyond seven, the KCC estimates how many additional connections are needed so that no domain controller is more than three replication hops from any other (that is, a change takes no more than three hops to reach a domain controller that has not already received it by another path). These optimizing connections are created at random and are not necessarily created on every third domain controller.
The KCC adds connections automatically to optimize a ring topology within a site, as follows:

• Given the set of nodes in the ring, determine the minimum number of extra connections, n, that each server must have to ensure a path of no more than three hops to any other server.

• If the local server does not have n extra connections, the KCC:

  • Chooses n other servers in the site at random as source servers.

  • For each of those servers, creates a connection object.

This approach approximates the three-hop goal. In addition, it scales well, because as the site grows in server count, old optimizing connections remain useful and are not removed. Also, every time an additional 9 to 11 servers are added, a connection object is deleted at random and a new one is created, ideally having one of the new servers as its source. This process ensures that, over time, the optimizing connections are distributed well over the entire site.
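The following Python sketch (a simplified model that treats optimizing connections as bidirectional edges; it is not the KCC's actual algorithm) illustrates the idea of adding randomly chosen shortcut connections to a ring until no server is more than three hops from any other.

# Illustrative sketch only: start from a ring and add random extra connections
# until the worst-case hop count drops to three or fewer. BFS distances stand
# in for replication hops.
import random
from collections import deque

def max_hops(n, extra_edges):
    # Adjacency list: ring edges plus extra edges (treated as bidirectional).
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for a, b in extra_edges:
        adj[a].add(b)
        adj[b].add(a)
    worst = 0
    for start in range(n):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        worst = max(worst, max(dist.values()))
    return worst

def add_optimizing_connections(n, limit=3):
    extra = set()
    while max_hops(n, extra) > limit:
        a, b = random.sample(range(n), 2)
        extra.add((a, b))
    return extra

print(add_optimizing_connections(12))   # a handful of random shortcuts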

The following diagram shows an intrasite ring topology with optimizing connections in a site that has
eight domain controllers in the same domain. Without optimizing connections, the hop count from
DC1 to DC2 is more than three hops. The KCC creates optimizing connections to limit the hop count
to three hops. The two one-way inbound optimizing connections accommodate all directory
partitions that are replicated between the two domain controllers.
Intrasite Topology with Optimizing Connections

Excluded Nonresponding Servers


The KCC automatically rebuilds the replication topology when it recognizes that a domain controller
has failed or is unresponsive.
The criteria that the KCC uses to determine when a domain controller is not responsive depend
upon whether the server computer is within the site or not. Two thresholds must be reached before
a domain controller is declared “unavailable” by the KCC:
• The requesting domain controller must have exceeded n failed attempts to replicate from the target domain controller:

  • For replication between sites, the default value of n is 1 failed attempt.

  • For replication within a site, a distinction is made between the two immediate neighbors (in the ring) and the optimizing connections:

    • For immediate neighbors, the default value of n is 0 failed attempts. Thus, as soon as one attempt fails, a new server is tried.

    • For optimizing connections, the default value of n is 1 failed attempt. Thus, as soon as a second attempt fails, a new server is tried.

• A certain amount of time must have passed since the last successful replication attempt:

  • For replication between sites, the default time is 2 hours.

  • For replication within a site, a distinction is made between the two immediate neighbors (in the ring) and the optimizing connections:

    • For immediate neighbors, the default time is 2 hours.

    • For optimizing connections, the default time is 12 hours.
You can edit the registry to modify the thresholds for excluding nonresponding servers.
Note
If you must edit the registry, use extreme caution. Registry information is provided here as a reference for use by only highly skilled directory service administrators. It is recommended that you do not edit the registry directly unless, as in this case, there are no Group Policy settings or other Windows tools to accomplish the task. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. Storage of incorrect values can result in unrecoverable errors in the system.
Modifying the thresholds for excluding nonresponding servers requires editing the following registry entries in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters. All of these entries use the REG_DWORD data type, and you can set them to any desired value as follows:

For replication between sites, use the following entries:

IntersiteFailuresAllowed
Value: Number of failed attempts
Default: 1

MaxFailureTimeForIntersiteLink (secs)
Value: Time that must elapse before being considered unavailable, in seconds
Default: 7200 (2 hours)

For optimizing connections within a site, use the following entries:

NonCriticalLinkFailuresAllowed
Value: Number of failed attempts
Default: 1

MaxFailureTimeForNonCriticalLink
Value: Time that must elapse before being considered unavailable, in seconds
Default: 43200 (12 hours)

For immediate neighbor connections within a site, use the following entries:

CriticalLinkFailuresAllowed
Value: Number of failed attempts
Default: 0

MaxFailureTimeForCriticalLink
Value: Time that must elapse before being considered unavailable, in seconds
Default: 7200 (2 hours)
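As an illustration only, the following Python sketch uses the standard winreg module to set two of these thresholds on the local domain controller. The value names are taken from the list above; the numbers written (2 failures, 14400 seconds) are arbitrary example values, and the registry caution in the Note above applies.

# Illustrative sketch only: set the intersite failure thresholds described
# above. Run on the domain controller itself, with administrative rights.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # Example value: allow two failed intersite attempts before exclusion.
    winreg.SetValueEx(key, "IntersiteFailuresAllowed", 0, winreg.REG_DWORD, 2)
    # Example value: require 4 hours (14400 seconds) since the last success.
    winreg.SetValueEx(key, "MaxFailureTimeForIntersiteLink (secs)", 0,
                      winreg.REG_DWORD, 14400)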
When the original domain controller begins responding again, the KCC automatically restores the
replication topology to its pre-failure condition the next time that the KCC runs.
Fully Optimized Ring Topology Generation
Taking the addition of extra connections, management of nonresponding servers, and growth-
management mechanisms into account, the KCC proceeds to fully optimize intrasite topology
generation. The appropriate connection objects are created and deleted according to the available
criteria.
Note
Connection objects from nonresponding servers are not deleted because the condition is expected to be transient.

Automated Intersite Topology Generation


To produce a replication topology for hundreds of domains and thousands of sites in a timely manner and without compromising domain controller performance, the KCC must decide which network link to use to replicate a
given directory partition between sites. Ideally, connections occur only between servers that contain
the same directory partition(s), but when necessary, the KCC can also use network paths that pass
through servers that do not store the directory partition.
Intersite topology generation and associated processes are improved in Windows Server 2003 in the
following ways:
• Improved scalability: A new spanning tree algorithm achieves greater efficiency and scalability when the forest has a functional level of Windows Server 2003. For more information about this new algorithm, see “Improved KCC Scalability in Windows Server 2003 Forests” later in this section.

• Less network traffic: A new method of communicating the identity of the ISTG reduces the amount of network traffic that is produced by this process. For more information about this method, see “Intersite Topology Generator” later in this section.

• Multiple bridgehead servers per site and domain, and initial bridgehead server load balancing: An improved algorithm provides random selection of multiple bridgehead servers per domain and transport (the Windows 2000 algorithm allows selection of only one). The load among bridgehead servers is balanced the first time connections are generated. For more information about bridgehead server load balancing, see “Windows Server 2003 Multiple Bridgehead Selection” later in this section.
Factors Considered by the KCC
The spanning tree algorithm used by the KCC that is running as the ISTG to create the intersite
replication topology determines how to connect all the sites that need to be connected with the
minimum number of connections and the least cost. The algorithm must also consider the fact that
each domain controller has at least three directory partitions that potentially require synchronization
with other sites, not all domain controllers store the same partitions, and not all sites host the same
domains.
The ISTG considers the following factors to arrive at the intersite replication topology:

• Location of domain directory partitions (calculate a replication topology for each domain).

• Bridgehead server availability in each site (at least one is available).

• All explicit site links.

• With automatic site link bridging in effect, consider all implicit paths as a single path with a combined cost.

• With manual site link bridging in effect, consider the implicit combined paths of only those site links included in the explicit site link bridges.

• With no site link bridging in effect, where the site links represent hops between domain controllers in the same domain, replication flows in a store-and-forward manner through sites.
Improved KCC Scalability in Windows Server 2003 Forests
KCC scalability is greatly improved in Windows Server 2003 forests compared with Windows 2000 forests. Windows 2000 forests scale safely to support 300 sites, whereas Windows
Server 2003 forests have been tested to 3,000 sites. This level of scaling is achieved when the
forest functional level is Windows Server 2003. At this forest functional level, the method for
determining the least-cost path from each site to every other site for each directory partition is
significantly more efficient than the method that is used in a Windows 2000 forest or in a Windows
Server 2003 forest that has a forest functional level of Windows 2000.
Windows 2000 Spanning Tree Algorithm
The ability of the KCC to generate the intersite topology in Windows 2000 forests is limited by the
amount of CPU time and memory that is consumed when the KCC computes the replication topology
in large environments that use transitive (bridged) site links. In a Windows 2000 forest, a potential
disadvantage of bridging all site links affects only very large networks (generally, greater than
100 sites) where periods of high CPU activity occur every 15 minutes when the KCC runs. By
default, the KCC creates a single bridge for the entire network, which generates more routes that
must be processed than if automatic site link bridging is not used and manual site link bridges are
applied selectively.
In a Windows 2000 forest, or in a Windows Server 2003 forest that has a forest functional level of
Windows 2000, the KCC reviews the comparison of multiple paths to and from every destination and
computes the spanning tree of the least-cost path. The spanning tree algorithm works as follows:
• Computes a cost matrix by identifying each site pair (that is, each pair of bridgehead servers in different sites that store the directory partition) and the cost on the site link connecting each pair.

Note
This matrix is actually computed by Intersite Messaging and used by the KCC.

• By using the costs computed in the matrix, builds a spanning tree between sites that store the directory partition.
This method becomes inefficient when there are a large number of sites.
Note
CPU time and memory are not an issue in a Windows 2000 forest as long as the following criterion applies, where D is the number of domains in your network and S is the number of sites in your network:
• (1 + D) * S^2 <= 100,000
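A quick check of this guideline for a hypothetical deployment (illustrative Python only):

# Illustrative sketch only: evaluate the Windows 2000 sizing guideline above.
def win2000_kcc_within_guideline(domains, sites):
    return (1 + domains) * sites ** 2 <= 100_000

print(win2000_kcc_within_guideline(domains=3, sites=100))   # True:  4 * 10,000 = 40,000
print(win2000_kcc_within_guideline(domains=3, sites=200))   # False: 4 * 40,000 = 160,000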


Windows Server 2003 Spanning Tree Algorithm
A more efficient spanning tree algorithm improves the scalability of replication topology generation in Windows Server 2003 forests. When the forest functional level is either Windows
Server 2003 or Windows Server 2003 interim, the improved algorithm takes effect and computes a
minimum-cost spanning tree of connections between the sites that host a particular directory
partition, but eliminates the inefficient cost matrix. Thus, the KCC directly determines the lowest-
cost spanning tree for each directory partition, considering the schema and configuration directory
partitions as a single tree. Where the spanning trees overlap, the KCC generates a single connection
between domain controllers for replication of all common directory partitions.
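To illustrate the idea of a minimum-cost spanning tree over site links, the following Python sketch applies Kruskal's algorithm to a few hypothetical sites and site-link costs. It is a simplified model of the concept, not the KCC's implementation.

# Illustrative sketch only: build a minimum-cost spanning tree over
# hypothetical sites connected by site links with administrative costs.
def minimum_cost_spanning_tree(sites, site_links):
    # site_links: list of (cost, site_a, site_b) tuples.
    # Kruskal's algorithm with a simple union-find to avoid cycles.
    parent = {site: site for site in sites}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    tree = []
    for cost, a, b in sorted(site_links):
        root_a, root_b = find(a), find(b)
        if root_a != root_b:
            parent[root_a] = root_b
            tree.append((a, b, cost))
    return tree

links = [(100, "Hub", "Branch1"), (100, "Hub", "Branch2"),
         (500, "Branch1", "Branch2"), (200, "Hub", "Branch3")]
print(minimum_cost_spanning_tree(["Hub", "Branch1", "Branch2", "Branch3"], links))
# Keeps the three cheapest links and drops the redundant Branch1-Branch2 link.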
In a Windows Server 2003 forest, both versions of the KCC spanning tree algorithms are available.
The algorithm for Windows 2000 forests is retained for backwards compatibility with the
Windows 2000 KCC. It is not possible for the two algorithms to run simultaneously in the same
enterprise.
DFS Site Costing and Windows Server 2003 SP1 Site Options
When the forest functional level is Windows Server 2003 or Windows Server 2003 interim and the
ISTG does not use Intersite Messaging to calculate the intersite cost matrix, DFS can still use
Intersite Messaging to compute the cost matrix for its site-costing functionality, provided that the
Bridge all site links option is not turned off. In branch office deployments, where the large
number of sites and site links makes automatic site link bridging too costly in terms of the
replication connections that are generated, the Bridge all site links option is usually turned off on
the IP container (CN=IP,CN=Inter-Site
Transports,CN=Sites,CN=Configuration,DC=ForestRootDomain). In this case, DFS is unable to use
Intersite Messaging to calculate site costs.

When the forest functional level is Windows Server 2003 or Windows Server 2003 interim and the
ISTG in a site is running Windows Server 2003 with SP1, you can use a site option to turn off
automatic site link bridging for KCC operation without hampering the ability of DFS to use Intersite
Messaging to calculate the cost matrix. This site option is set by running the command
repadmin /siteoptions W2K3_BRIDGES_REQUIRED. This option is applied to the NTDS Site
Settings object (CN=NTDS Site
Settings,CN=SiteName,CN=Sites,CN=Configuration,DC=ForestRootDomain). When this method is
used to disable automatic site link bridging (as opposed to turning off Bridge all site links),
default Intersite Messaging options enable the site-costing calculation to occur for DFS.
Note
The site option on the NTDS Site Settings object can be set on any domain controller, but it does
not take effect until replication of the change reaches the ISTG role holder for the site.

Intersite Topology Generator


The KCC on the domain controller that has the ISTG role creates the inbound connections on all
domain controllers in its site that require replication with domain controllers in other sites. The sum
of these connections for all sites in the forest forms the intersite replication topology.
A fundamental concept in the generation of the topology within a site is that each server does its
part to create a site-wide topology. In a similar manner, the generation of the topology between
sites depends on each site doing its part to create a forest-wide topology between sites.
ISTG Role Ownership and Viability
The owner of the ISTG role is communicated through normal Active Directory replication. Initially,
the first domain controller in the site is the ISTG role owner. It communicates its role ownership to
other domain controllers in the site by writing the distinguished name of its child NTDS Settings
object to the interSiteTopologyGenerator attribute of the NTDS Site Settings object for the site.
As a change to the configuration directory partition, this value is replicated to all domain controllers
in the forest.
The ISTG role owner is selected automatically. The role ownership does not change unless:

• The current ISTG role owner becomes unavailable.

• All domain controllers in the site are running Windows 2000 and one of them is upgraded to Windows Server 2003.
If at least one domain controller in a site is running Windows Server 2003, the ISTG role is assumed
by a domain controller that is running Windows Server 2003.
The viability of the current ISTG is assessed by all other domain controllers in the site. The need for
a new ISTG in a site is established differently, depending on the forest functional level that is in
effect.
• Windows 2000 functional level: At 30-minute intervals, the current ISTG notifies every other domain controller of its existence and availability by writing the interSiteTopologyGenerator attribute of the NTDS Site Settings object for the site. The change replicates to every domain controller in the forest. The KCC on each domain controller monitors this attribute for its site to verify that it has been written. If a period of 60 minutes elapses without a modification to the attribute, a new ISTG declares itself.

• Windows Server 2003 or Windows Server 2003 interim functional level: Each domain controller maintains an up-to-dateness vector, which contains an entry for each domain controller that holds a full replica of any directory partition that the domain controller replicates. On domain controllers that are running Windows Server 2003, this up-to-dateness vector contains a timestamp that indicates when the domain controller was last contacted by each of its replication partners, both direct and indirect (that is, every domain controller that replicates a directory partition that is stored by this domain controller). The timestamp is recorded whether or not the local domain controller actually received any changes from the partner. Because all domain controllers store the schema and configuration directory partitions, every domain controller is guaranteed to have the ISTG for its site among the domain controllers in its up-to-dateness vector.

  This timestamp eliminates the need to receive periodic replication of the updated interSiteTopologyGenerator attribute from the current ISTG. When the timestamp indicates that the current ISTG has not contacted the domain controller in the last 120 minutes, a new ISTG declares itself.
The Windows Server 2003 method eliminates the network traffic that is generated by periodically
replicating the interSiteTopologyGenerator attribute update to every domain controller in the
forest.
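A minimal sketch of the Windows Server 2003 viability check described above (illustrative Python only; the 120-minute threshold comes from the text, while the function and variable names are hypothetical):

# Illustrative sketch only: decide whether a new ISTG should declare itself,
# based on the last time the current ISTG contacted this domain controller
# (the timestamp kept in the up-to-dateness vector).
from datetime import datetime, timedelta

ISTG_FAILOVER = timedelta(minutes=120)

def new_istg_needed(last_contact_from_istg, now=None):
    now = now or datetime.utcnow()
    return now - last_contact_from_istg > ISTG_FAILOVER

# Example: the ISTG last contacted this DC three hours ago, so a new ISTG
# would declare itself.
print(new_istg_needed(datetime.utcnow() - timedelta(hours=3)))   # True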
ISTG Eligibility
When at least one domain controller in a site is running Windows Server 2003, the eligibility for the
ISTG role depends on the operating system of the domain controllers. When a new ISTG is required,
each domain controller computes a list of domain controllers in the site. All domain controllers in the
site arrive at the same ordered list. Eligibility is established as follows:
• If no domain controllers in the site are running Windows Server 2003, all domain controllers that are running Windows 2000 Server are eligible. The list of eligible servers is ordered by GUID.

• If at least one domain controller in the site is running Windows Server 2003, all domain controllers that are running Windows Server 2003 are eligible. In this case, the entries in the list are sorted first by operating system and then by GUID. In a site in which some domain controllers are running Windows 2000 Server, domain controllers that are running Windows Server 2003 remain at the top of the list and use the GUID in the same manner to maintain the order.
The domain controller that is next in the list of servers after the current owner declares itself the
new ISTG by writing the interSiteTopologyGenerator attribute on the NTDS Site Settings object.
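The ordered candidate list can be pictured with a small Python sketch (illustrative only; the tuple layout and server names are hypothetical): entries are sorted first by operating system, with Windows Server 2003 ahead of Windows 2000 Server, and then by GUID.

# Illustrative sketch only: the ordered ISTG eligibility list described above.
import uuid

def ordered_istg_candidates(domain_controllers):
    # domain_controllers: list of (name, operating_system, guid) tuples
    def sort_key(dc):
        name, operating_system, guid = dc
        os_rank = 0 if operating_system == "Windows Server 2003" else 1
        return (os_rank, str(guid))
    return sorted(domain_controllers, key=sort_key)

dcs = [("DC1", "Windows 2000 Server", uuid.uuid4()),
       ("DC2", "Windows Server 2003", uuid.uuid4()),
       ("DC3", "Windows Server 2003", uuid.uuid4())]
print([name for name, _, _ in ordered_istg_candidates(dcs)])
# The domain controller after the current owner in this list declares itself ISTG.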
If the current ISTG is temporarily disconnected from the topology, as opposed to being shut down,
and a new ISTG declares itself in the interim, then two domain controllers can temporarily assume
the ISTG role. When the original ISTG resumes replication, it initially considers itself to be the
current ISTG and creates inbound replication connection objects, which results in duplicate intersite
connections. However, as soon as the two ISTGs replicate with each other, the last domain
controller to write the interSiteTopologyGenerator attribute continues as the single ISTG and
removes the duplicate connections.

Bridgehead Server Selection


Bridgehead servers can be selected in the following ways:

• Automatically by the ISTG from all domain controllers in the site.

• Automatically by the ISTG from all domain controllers that are identified as preferred bridgehead servers, if any preferred bridgehead servers are assigned. Preferred bridgehead servers must be assigned manually.

• Manually, by creating a connection object on a domain controller in one site from a domain controller in a different site.
By default, when at least one domain controller in a site is running Windows Server 2003
(regardless of forest functional level), any domain controller that hosts a domain in the site is a
candidate to be an ISTG-selected bridgehead server. If preferred bridgehead servers are selected,
candidates are limited to this list. The connections from remote servers are distributed among the
available candidate bridgehead servers in each site. The selection of multiple bridgehead servers per
domain and transport is new in Windows Server 2003. The ISTG uses an algorithm to evaluate the
list of domain controllers in the site that can replicate each directory partition. This algorithm is
improved in Windows Server 2003 to randomly select multiple bridgehead servers per directory
partition and transport. In sites containing only domain controllers that are running Windows 2000
Server, the ISTG selects only one bridgehead server per directory partition and transport.
When bridgehead servers are selected by the ISTG, the ISTG ensures that each directory partition
in the site that has a replica in any other site can be replicated to and from that site or sites.
Therefore, if a single domain controller hosts the only replica of a domain in a specific site and the
domain has domain controllers in another site or sites, that domain controller must be a bridgehead
server in its site. In addition, that domain controller must be able to connect to a bridgehead server
in the other site that also hosts the same domain directory partition.
Note
If a site has a global catalog server but does not contain at least one domain controller of every domain in the forest, then at least one bridgehead server must be a global catalog server.
Preferred Bridgehead Servers
Because bridgehead servers must be able to accommodate more replication traffic than non-
bridgehead servers, you might want to control which servers have this responsibility. To specify
servers that the ISTG can designate as bridgeheads, you can select domain controllers in the site
that you want the ISTG to always consider as preferred bridgehead servers for the specified
transport. These servers are used exclusively to replicate changes collected from the site to any
other site over that transport. Designating preferred bridgehead servers also serves to exclude
those domain controllers that, for reasons of capability, you do not want to be used as bridgehead
servers.
Depending on the available transports, the directory partitions that must be replicated, and the
availability of global catalog servers, multiple bridgehead servers might be required to replicate full
and partial copies of directory data from one site to another.
The ISTG recognizes preferred bridgehead servers by reading the bridgeheadTransportList
attribute of the server object. When this attribute has a value that is the distinguished name of the
transport container that the server uses for intersite replication (IP or SMTP), the KCC treats the
server as a preferred bridgehead server. For example, the value for the IP transport is
CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=ForestRootDomainName. You can
use Active Directory Sites and Services to designate a preferred bridgehead server by opening the
server object properties and placing either the IP or SMTP transport into the preferred list, which
adds the respective transport distinguished name to the bridgeheadTransportList attribute of the
server.
The bridgeheadServerListBl attribute of the transport container object is a backlink attribute of
the bridgeheadTransportList attribute of the server object. If the bridgeheadServerListBl
attribute contains the distinguished name of at least one server in a site, then the KCC uses only
preferred bridgehead servers to replicate changes for that site, according to the following rules:
• If at least one server is designated as a preferred bridgehead server, updates to the domain directory partition hosted by that server can be replicated only from a preferred bridgehead server. If at the time of replication no preferred bridgehead server is available for that directory partition, replication of that directory partition does not occur.

• If any bridgehead servers are designated but no domain controller is designated as a preferred bridgehead server for a specific directory partition that has replicas in another site or sites, the KCC selects a domain controller to act as the bridgehead server, if one is available that can replicate the directory partition to the other site or sites.
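The following Python sketch (illustrative only; the function, parameters, and server names are hypothetical) models these two rules for a single directory partition and transport.

# Illustrative sketch only: apply the preferred-bridgehead rules described
# above when picking a replication source for one directory partition.
def select_bridgehead(candidates, preferred, available):
    # candidates: servers in the site that hold the directory partition
    # preferred:  servers designated as preferred bridgeheads for the transport
    # available:  servers that are currently reachable
    preferred_holders = [s for s in candidates if s in preferred]
    if preferred_holders:
        # Rule 1: a preferred bridgehead holds this partition, so only a
        # preferred bridgehead may be used; if none is available, replication
        # of this partition does not occur (None).
        usable = [s for s in preferred_holders if s in available]
        return usable[0] if usable else None
    # Rule 2: no preferred bridgehead holds this partition, so the KCC may
    # select any capable, available server.
    usable = [s for s in candidates if s in available]
    return usable[0] if usable else None

print(select_bridgehead(candidates=["HUB-DC1", "HUB-DC2"],
                        preferred=["HUB-DC1"],
                        available=["HUB-DC2"]))          # None (rule 1)
print(select_bridgehead(candidates=["HUB-DC2"],
                        preferred=["HUB-DC3"],
                        available=["HUB-DC2", "HUB-DC3"]))  # HUB-DC2 (rule 2)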
Therefore, to use preferred bridgehead servers effectively, be sure to:
• Assign at least two bridgehead servers for each of the following:

  • Any domain directory partition that has a replica in any other site.

  • Any application directory partition that has a replica in another site.

  • The schema and configuration directory partitions (one bridgehead server replicates both) if no domains in the site have replicas in other sites.

• If the site has a global catalog server, select the global catalog server as one of the preferred bridgehead servers.
Windows 2000 Single Bridgehead Selection
In a Windows 2000 forest or in a Windows Server 2003 forest that has a forest functional level of
Windows 2000, the ISTG selects a single bridgehead server per directory partition and transport.
The selection changes only when the bridgehead server becomes unavailable. The following diagram
shows the automatic bridgehead server (BH) selection that occurs in the hub site where all domain
controllers host the same domain directory partition and multiple sites have domain controllers that
host that domain directory partition.
Windows 2000 Single Bridgehead Server in a Hub Site that Serves Multiple Branch Sites

Windows Server 2003 Multiple Bridgehead Selection


When at least one domain controller in a site is running Windows Server 2003 (and thereby
becomes the ISTG), the ISTG begins performing random load balancing of new connections. This
load balancing occurs by default, although it can be disabled.

When creating a new connection, the KCC must choose endpoints from the set of eligible
bridgeheads in the two endpoint sites. Whereas in Windows 2000 the ISTG always picks the same
bridgehead for all connections, in Windows Server 2003 it picks one randomly from the set of
possible bridgeheads.
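The difference in endpoint selection can be summarized in a short Python sketch (illustrative only; server names are hypothetical):

# Illustrative sketch only: fixed versus random bridgehead endpoint selection.
import random

hub_bridgeheads = ["HUB-DC1", "HUB-DC2", "HUB-DC3"]

def pick_bridgehead_windows2000(candidates):
    # Windows 2000 behavior: the same bridgehead is chosen for all connections.
    return candidates[0]

def pick_bridgehead_windows2003(candidates):
    # Windows Server 2003 behavior: a random candidate, spreading the load.
    return random.choice(candidates)

print(pick_bridgehead_windows2000(hub_bridgeheads))  # always HUB-DC1
print(pick_bridgehead_windows2003(hub_bridgeheads))  # any of the three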
Assuming that two of the three domain controllers have been added to the site since the ISTG was upgraded to Windows Server 2003, the following diagram shows the connections that might exist on
domain controllers in the hub site to accommodate the four branch sites that have domain
controllers for the same domain.
Random Bridgehead Server Selection in a Hub Site in which the ISTG Runs Windows
Server 2003

If one or more new domain controllers are added to the hub site, the inbound connections on the
existing bridgehead servers are not automatically re-balanced. The next time it runs, the ISTG
considers the newly added server(s) as follows:
• If preferred bridgehead servers are not selected in the site, the ISTG considers the newly added servers as candidate bridgehead servers and creates new connections randomly if new connections are needed. It does not remove or replace the existing connections.

• If preferred bridgehead servers are selected in the site, the ISTG does not consider the newly added servers as candidate bridgehead servers unless they are designated as preferred bridgehead servers.
The initial connections remain in place until a bridgehead server becomes unavailable, at which
point the KCC randomly replaces the connection on any available bridgehead server. That is, the
endpoints do not change automatically for the existing bridgehead servers. In the following
diagram, two new domain controllers are added to the hub site, but the existing connections are not
redistributed.
New Domain Controllers with No New Connections Created

Although the ISTG does not rebalance the load among the existing bridgehead servers in the hub
site after the initial connections are created, it does consider the added domain controllers as
candidate bridgehead servers under either of the following conditions:
• A new site is added that requires a bridgehead server connection to the hub site.

• An existing connection to the hub site becomes unavailable.
The following diagram illustrates the inbound connection that is possible on a new domain controller
in the hub site to accommodate a failed connection on one of the existing hub site bridgehead
servers. In addition, a new branch site is added and a new inbound connection can potentially be
created on the new domain controller in the hub site. However, because the selection is random,
there is no guarantee that the ISTG creates the connections on the newly added domain controllers.
Possible New Connections for Added Site and Failed Connection

Using ADLB to Balance Hub Site Bridgehead Server Load
In large hub-and-spoke topologies, you can implement the redistribution of existing bridgehead
server connections by using the Active Directory Load Balancing (ADLB) tool (Adlb.exe), which is
included with the “Windows Server 2003 Active Directory Branch Office Deployment Guide.” ADLB
makes it possible to dynamically redistribute the connections on bridgehead servers. This
application works independently of the KCC, but uses the connections that are created by the KCC.
Connections that are manually created are not moved by ADLB. However, manual connections are
factored into the load-balancing equations that ADLB uses.
The ISTG is limited to making modifications in its site, but ADLB modifies both inbound and
outbound connections on eligible bridgehead servers and offers schedule staggering for outbound
connections.
For more information about how bridgehead server load balancing and schedule staggering work
with ADLB, see the “Windows Server 2003 Active Directory Branch Office Planning and Deployment
Guide” on the Web at http://go.microsoft.com/fwlink/?linkID=28523.
Top of page
Network Ports Used by Replication Topology
By default, RPC-based replication uses dynamic port mapping. When connecting to an RPC endpoint
during Active Directory replication, the RPC run time on the client contacts the RPC endpoint
mapper on the server at a well-known port (port 135). The client queries the RPC endpoint mapper on this port to determine which port has been assigned for Active Directory replication on the server. This query occurs whether the port assignment is dynamic (the default) or fixed, so the client never needs to know in advance which port to use for Active Directory replication.
Note

• An endpoint comprises the protocol, local address, and port address.

In addition to port 135 and the dynamically assigned RPC port, the ports that are required for replication to occur are listed in the following table.
Port Assignments for Active Directory Replication

Service Name                           UDP    TCP
LDAP                                   389    389
LDAP (Secure Sockets Layer [SSL])             636
LDAP (global catalog)                         3268
Kerberos                               88     88
DNS                                    53     53
SMB over IP                            445    445

Replication within a domain also requires FRS using a dynamic RPC port.
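As a quick connectivity check from a client, the following Python sketch (illustrative only; the domain controller name is hypothetical) probes the well-known TCP ports from the table above, plus the RPC endpoint mapper on port 135. It does not test the dynamically assigned RPC or FRS ports.

# Illustrative sketch only: test whether the well-known replication ports
# accept TCP connections on a domain controller.
import socket

PORTS = {"RPC endpoint mapper": 135, "LDAP": 389, "LDAP over SSL": 636,
         "Global catalog LDAP": 3268, "Kerberos": 88, "DNS": 53,
         "SMB over IP": 445}

def probe(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in PORTS.items():
    state = "open" if probe("dc1.corp.example.com", port) else "unreachable"
    print(f"{name} ({port}/TCP): {state}")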
Setting Fixed Replication Ports Across a Firewall
For each service that needs to communicate across a firewall, there is a fixed port and protocol.
Normally, the directory service and FRS use dynamically allocated ports that require a firewall to
have a wide range of ports open. Although FRS cannot be restricted to a fixed port, you can edit the
registry to restrict the directory service to communicate on a static port.
Note
If you must edit the registry, use extreme caution. Registry information is provided here as a reference for use by only highly skilled directory service administrators. It is recommended that you do not edit the registry directly unless, as in this case, there are no Group Policy settings or other Windows tools to accomplish the task. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. Storage of incorrect values can result in unrecoverable errors in the system.
Restricting the directory service to using a fixed port requires editing the TCP/IP Port registry
entry (REG_DWORD), located in:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Changing this registry entry on a domain controller and restarting it causes the directory service to
use the TCP port named in the registry entry. For example, port 49152 is DWORD=0000c000
(hexadecimal).
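For illustration, the following Python sketch shows the hexadecimal DWORD value for a chosen port and sets the TCP/IP Port entry with the standard winreg module; run it on the domain controller with administrative rights and restart the domain controller afterward, observing the registry caution in the Note above. The port number is an example.

# Illustrative sketch only: compute the DWORD value for a fixed replication
# port and write the "TCP/IP Port" registry entry described above.
import winreg

port = 49152
print(f"{port} = DWORD 0x{port:08x}")   # 49152 = DWORD 0x0000c000

key_path = r"System\CurrentControlSet\Services\NTDS\Parameters"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "TCP/IP Port", 0, winreg.REG_DWORD, port)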
