AD Consultants

  • 222 Views
  • Last Post 22 May 2014
SmitaCarneiro posted this 13 May 2014

Would anyone be able to recommend a good consulting group/consultant for AD?

We plan to start working on a new forest to eventually replace our current one (Server 2008 R2, but 2003 domain and forest functional level).

Thanks,

Smita Carneiro, GCWN

slavickp posted this 13 May 2014

Take my advice for free: upgrade the existing forest, do not replace it. Migrations for no apparent reason, and managing the coexistence, are more expensive by an order of magnitude, with an end result that is not much different.

Regards,
Slav
MCM-DS


mi6agent44 posted this 13 May 2014

PM me.




SmitaCarneiro posted this 13 May 2014

Hi David,

We're at the very beginning stages of talking about upgrading our AD, and the safest method in our opinion would be to build a new AD, to avoid breaking what we currently have. This is a campus that has a very diverse set of applications.

We're located in a small town in Indiana.

Smita

nick1967 posted this 13 May 2014

Hi Smita,

It seems to me like you already made up your mind, but I strongly agree with Slav. Upgrade your existing forest instead of building a new one. It is a lot easier to make your applications upgrade-ready within an existing forest than it is to migrate them to a new forest.

It will probably take you years before you can really shut down your current forest, and until then you'll end up with trust issues, SID history issues, and access token issues. Yikes.

Greetz,
Michiel

daemonr00t posted this 13 May 2014

Offshore services at great rates 😉

~danny

nick1967 posted this 13 May 2014

Hi Smita,

Remember that consulting firms are ultimately about their own bottom line, not yours. A lot of them will not stop you from going down the wrong road, especially if you make it known that you'd prefer to rebuild and practically steer them towards years of work instead of months. Sure, they'll say, that is possible.

It really makes no sense to migrate and upgrade applications instead of just upgrading them. What apps are we talking about anyway? When upgrading your forest, you'll likely run into NTLM issues (Linux machines that use your AD for authentication), but I can't think of any application except IDM and Exchange that needs upgrading before you can safely upgrade your forest…

Greetz,
Michiel

mi6agent44 posted this 13 May 2014

Off shore? Not so much.

SamErde posted this 13 May 2014

Being in the midst of a forest consolidation project myself, I would highly advise upgrading the existing one, and if you must, consolidate domains within the forest. Happy to answer any questions that you have about our experience.



kevinrjames posted this 13 May 2014

You're probably more likely to break applications by migrating rather than upgrading. Build new Domain Controllers and introduce them gradually, decommissioning the old ones as you go.

Applications generally don't care what version of DC OS they target, but there are some considerations. It's significantly safer to gradually work through them than to cut them over to an entirely different domain/forest.

Don't abandon your existing AD so easily.

/kj
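The gradual swing kj describes might look roughly like this in PowerShell (a sketch, not a runbook: the ADDSDeployment cmdlets require Server 2012 or later, and the domain name here is hypothetical):

```
# Promote a new 2012 R2 box as an ADDITIONAL DC in the existing domain.
# (The ADDSDeployment module ships with the AD DS role on 2012+.)
Import-Module ADDSDeployment
Install-ADDSDomainController -DomainName "campus.example.edu" -InstallDns -Credential (Get-Credential)

# Verify replication is healthy before touching the old DCs:
repadmin /replsummary

# Once apps check out against the new DCs, demote an old one
# (run on that DC; it prompts for the local administrator password):
Uninstall-ADDSDomainController -Credential (Get-Credential)
```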

a-ko posted this 13 May 2014

This depends…
Review the AD to look for inconsistencies in the environment. If replication is broken, DCs and trusts were created that no longer exist, and you see lots of weird event log errors, sometimes it's better to start fresh.

A big reason to start fresh is if you're looking to migrate from a ".lcl" or ".local" domain infrastructure to something more recommended, such as "ad.contoso.com", combined with the above issues. Especially if you're combining this AD effort with network changes, directory object cleanup, DNS cleanup, etc. Sometimes it's just nicer to start fresh, but it really depends on the environment…

We started fresh because we needed to implement security controls from the bottom up (DISA STIG), and the previous applications break when those controls are implemented. We wanted to identify at a very early stage what would and wouldn't work. Having the parallel domain effort meant we could slowly migrate individual users, and the organization wasn't greatly affected if some of a user's applications broke.

My 2 cents…

SmitaCarneiro posted this 13 May 2014

It's not just applications that I'm concerned about. We're also trying to take the opportunity to have a fresh start, though from what many of you are saying it will be a Herculean task.

And our domain is a .lcl one, so making it a publicly routable domain is another issue that we're trying to take care of.

Thanks,

Smita

mi6agent44 posted this 13 May 2014

@Carneiro, Smita A

I wholeheartedly agree. No swing upgrade. (See: nightmare.)

Try this:

Have MS do a full RAP on your environment (for safety's sake).
Clean, prune, and document your current forest/domain and infrastructure per MS recommendations.
Patch all DCs and DNS servers to level.
Upgrade the forest functional level to 2008.x.
Swap-replace the current domain controllers with 2012 R2 servers, hardware and budgets permitting.
Create a VIP for ldap.myfancydomain.com with a pool of domain controllers for legacy applications that require static targets.
Install certs on the LB to support LDAPS for applications that may require it.
Install Netwrix AD Auditor so you know what is going on in your environment and to meet the impending compliance requirements that we are all being tasked with. (Provide drive space for 7-year retention in SQL.)
Infosec will thank you.
Leave the forest functional level at 2008 for backward compatibility, as there seem to be too many application unknowns.
Have MS do a full RAP on your environment to proof it and correct missed issues post-deployment.
Take a nice vacation after the accolades received.

Oh…and for this fine and fancy mini-project plan?

I need a place to couch surf for my 50-state bucket list…Indiana? Never been. Seems nice.

:)

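The functional-level step in the plan above can be sketched with the RSAT ActiveDirectory module (a sketch only; assumes Domain/Enterprise Admin rights, and should only be run once every DC is at or above the target OS level):

```
# Sketch: raise the domain, then the forest, functional level to 2008.
Import-Module ActiveDirectory

Set-ADDomainMode -Identity (Get-ADDomain) -DomainMode Windows2008Domain
Set-ADForestMode -Identity (Get-ADForest) -ForestMode Windows2008Forest

# Confirm the result:
(Get-ADDomain).DomainMode
(Get-ADForest).ForestMode
```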

kevinrjames posted this 13 May 2014

IMO that's not a reason to abandon it. If you're concerned about Office 365 or other cloud solutions, there are plenty of solutions to that problem.

If your AD schema is badly broken, or you've had an AD 'expert' recommend dumping the forest, that's another thing.

/kj

danj posted this 13 May 2014

Forest functional level is unlikely to have any bearing on most of your applications. Dan  


mi6agent44 posted this 13 May 2014

That wasn't Microsoft's take when we did a full, paid-for pre-assessment.

There are TechNet articles that support this.

In this case we don't know if the environment has NT 4.0 or lower-level patched 2000 boxes.

I would be wary here, NetBIOS adventurer.

danj posted this 13 May 2014

Perhaps you can share this information. Dan  


mi6agent44 posted this 13 May 2014

Another potential snare:

KB2774190. This KB relates to Resource SID Compression in Windows Server 2012, and specifically to issues involving user authentication to NAS devices.

It might break access to NAS shares if you are an EMC customer. This setting in 2012 is on by default.

2c

mi6agent44 posted this 13 May 2014

Sadly, all MS output is limited by disclosure agreements. (Kerberos)

daemonr00t posted this 13 May 2014

Well… most of the large IT companies do the delivery offshore, and so far nothing has burned out :) (well… some do).

~d

SamErde posted this 13 May 2014

Being 30% or so into a forest consolidation project that is moving into a new forest, I have to say that David's high-level plan sounds sublime.
Using a VIP to load balance or redirect LDAP requests is a new one for me. Is that something that can be done on a Netscaler Access Gateway? Can this kind of thing also be done for DNS? 


mi6agent44 posted this 13 May 2014

Ok, although I feel like I'm bragging:

60k users, 28k devices, and a 300-plus client-server application environment. SLA is about one nanosecond.

We have 40 zero-tier applications for healthcare that were written by a wide variety of vendors. To suit this mixed bag of cats, I began a 6-month campaign of sorts to move all these apps to the ldap.mydomain.com VIP. By using an F5 LB we opened both LDAP/LDAPS ports and gently asked the admins of the respective applications to move to the LDAP VIP. Once the top-tier applications were moved, we established this as a published standard for any net-new. To meet DR requirements I then created a second VIP at another data center with a small subset of DCs within the same site and domain. Applications within that data center pointed to ldap2 to avoid the "don't span the WAN" gotcha. (Old Novell guy…sue me!)

The secondary site was made ldap2.mydomain.com, for example.

In some cases there were some big applications that required three DC targets (dc1, dc2, dc3), and their rollover was linear.

My thought was primary: ldap.mydomain.com, secondary: ldap2.mydomain.com, and tertiary: mydomain.com. Bulletproof, anyone?

If the main data center goes dark, change DNS for ldap.mydomain.com to the failover site's ldap2 IP and production continues. Future plans are for a dedicated GEO-site failover appliance that will perform the redirect automagically. (Technical term :) )

Netscaler can do this, as can others. I also suggest enabling health checks at the LB for a seamless "where's my good DC?", to prevent session hangs and a Service Desk phone meltdown.

We use BlueCat for DNS, IPAM, and DHCP. A semi-painful conversion, but well worth the reliability and management ease. Trust me. All public-facing DNS and DHCP appliances in this system can run headless for a time during a major outage, and are much more robust and secure than a Windows box.

Carpal tunnel! I'm out! Impress your directors! Get more sleep! All this can be yours!

D

Cynthia posted this 14 May 2014

David,

You sound like a natural architect and those are hard to come by.

My unsought-after 2 cents. :)

Good Luck.

 

Cynthia Erno

Server Applications & Fileshare Administrator

Department of Corrections & Community Supervision (ITS)

(518) 408-5506

 

 


daemonr00t posted this 14 May 2014

Keep in mind that you'll get one year of access to the AD RaaS tool, so you can use it during and after the upgrade process :)

~danny

kbatlive posted this 14 May 2014

>> If the main data center goes dark, change DNS for the ldap.mydomain.com to the failover site ldap2 IP and production continues.
>> Future plans are for a dedicated GEO site failover appliance that will perform the redirect automagically. (technical term :) )

We are already doing this using our load balancer (DNS appliances). It is site-aware and knows the network topology (OK, someone entered that topology). We have 3 basic "sites" from the LB's point of view: datacenter1, datacenter2, and everywhere else (hundreds of locations).

We delegated a DNS zone (like LB.mydomain.com) to the two appliances, and name resolution (for dc.lb.mydomain.com) is then sent to the LBs, which use their rules to return the address of a DC. Note: this doesn't work for LDAPS yet; I need to create a SAN cert for 'dc.lb.mydomain.com' to allow LDAPS to the DCs behind that DNS name.

The general LB targeting rules are: if the caller is in datacenter X, return an address in datacenter X if the target system (i.e. domain controller) is up in datacenter X; otherwise, return a target in datacenter Y (it can check different ports before it returns the IP address, to determine if a system is "up"). So calls from systems in datacenter #1 return domain controllers in datacenter #1 if those domain controllers are "up" (port 389 responding); datacenter #2 does the same for DCs in datacenter #2. If the system is remote (from either datacenter), it basically round-robins between DCs in the datacenters (also checking if they are "up"). I have 3 DCs in each datacenter per domain that become the "targets" for returning a domain controller address for each domain (we have two domains, so 12 DCs in total).

It also has a "sticky" function, so that if a system is directed to a specific DC, it will stay with that DC (if that DC is up). Apparently this was an issue for some stateful applications (I forget the specifics as to why; having the sticky helped those apps, so they were doing more than just pure 389 LDAP queries).

We also use DNS round-robin for an LDAP entry. Of course, that is pure DNS round-robin, so if a DC is down you'll get some failures. That was implemented (and published) before we had the load balancer, and there hasn't been a push to get people to change their applications to use the load balancer (OK, we've been lucky…or maybe good :) ).
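The targeting rules above can be sketched as a small PowerShell function (illustration only: the site names are hypothetical, and the Test-NetConnection port probe, which needs 2012 R2/8.1 or later, stands in for the appliance's own health checks):

```
# Sketch of the LB's DC-selection rule: prefer the caller's datacenter,
# fail over to the other one, round-robin for everyone else.
function Select-DC {
    param(
        [string]$CallerSite,       # 'DC1', 'DC2', or 'Remote'
        [hashtable]$DCsBySite      # e.g. @{ DC1 = @('dca1','dca2'); DC2 = @('dcb1','dcb2') }
    )
    # "Up" means the DC answers on LDAP port 389.
    $isUp = { param($dc) (Test-NetConnection $dc -Port 389).TcpTestSucceeded }

    if ($CallerSite -in $DCsBySite.Keys) {
        $local = @($DCsBySite[$CallerSite] | Where-Object { & $isUp $_ })
        if ($local) { return $local[0] }   # a local DC that responds
    }
    # Remote caller, or all local DCs down: any DC that is up, round-robin-ish.
    $all = @($DCsBySite.Values | ForEach-Object { $_ } | Where-Object { & $isUp $_ })
    return ($all | Get-Random)
}
```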


Rajeev Chauhan posted this 14 May 2014

All enterprises are not the same, but AD is the same. In your case, being .edu, privacy laws will carry more weight. So whatever you do: plan your schema well and keep it simple. Most of the appliances and technologies listed are helpful.

mi6agent44 posted this 15 May 2014

I like this…I will run this through the lab and possibly implement it.

Round robin can be touchy, though, especially in Citrix land.

Good stuff! Thanks!

MittlemanR posted this 16 May 2014

One tiny suggestion to add:

For our domain upgrade, I created a "PREVIEW-SITE" and put the first of the new DCs there. That way, while we're vetting all our servers and apps, nobody fails.

I go into AD Sites and hard-code temporary subnet entries for selected test servers (add an IPv4 subnet nn.nn.nn.nn/32; the description includes the server name; assign the subnet to PREVIEW-SITE).

For example, first I pointed one of each flavor of UNIX and Linux to the new DCs. Passed.

Then I pointed selected test and development servers – IIS, SQL, SharePoint, BizTalk – to the new DCs.

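The temporary /32 trick above can also be scripted with the 2012-era AD replication cmdlets (a sketch; the site name, subnet, and server name here are made up):

```
Import-Module ActiveDirectory

# One-off site holding only the new DCs, so no client lands on them by accident.
New-ADReplicationSite -Name "PREVIEW-SITE"

# Pin a single test server to that site via a /32 subnet;
# the description records which server the entry is for.
New-ADReplicationSubnet -Name "10.20.30.40/32" -Site "PREVIEW-SITE" -Description "testweb01"
```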

rwf4 posted this 22 May 2014

We are doing the same thing, using /32's to site things for testing in a controlled fashion.

One other element we're employing is blocking the test-site DCs from Exchange and then introducing them in a controlled fashion.

To do the exclusion:

Get-ExchangeServer | Set-ExchangeServer -StaticExcludedDomainControllers dc1fqdn,dc2fqdn

To check status:

Get-ExchangeServer -Status | ft Name,StaticExcludedDomainControllers,StaticDomainControllers,StaticGlobalCatalogs

To remove the exclusion:

Get-ExchangeServer | Set-ExchangeServer -StaticExcludedDomainControllers:$null

Once stabilized and in the proper sites, we'll do the same to remove the coverage for the legacy DCs in the Exchange sites and finish weaning the last remaining apps off the legacy DCs.

This has been a great thread. Thanks to all!
