
Preface: In an earlier post, I talked about using Roaming Profiles for backing up the users’ ‘stuff’. Well, the cons and quirks of roaming are becoming too apparent, and I’m now taking steps to stop roaming.

Windows 7 has a pretty sweet backup function, and when combined with Windows Server 2008 R2, backing up to the servers becomes freakin awesome. One problem I ran into while testing this was trying to get the backups to run at night while the client machines were ‘sleeping’. It is certainly possible, but you must make sure two things are set:

In the Windows 7 Power Management, go to the power plan’s advanced settings, and ensure that Sleep > Allow Wake Timers is set to Enable.

Then, follow these instructions to modify the actual backup timer:

After you’ve configured your backup schedule, open the Task Scheduler by clicking:
Start > All Programs > Accessories > System Tools > Task Scheduler

Within Task Scheduler, click to:
Task Scheduler > Task Scheduler Library > Microsoft > Windows > WindowsBackup

In the top-center window pane you should see a task titled AutomaticBackup that corresponds to the backup schedule item you created. Right-click AutomaticBackup and select the Properties menu item.


In the Properties Window, click the Conditions tab and then check the box titled Wake the computer to run this task.
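
If you’d rather do the power-plan half from a command line (and sanity-check that the backup task exists), a rough sketch is below. It assumes powercfg’s SUB_SLEEP / RTCWAKE aliases map to the Allow Wake Timers setting and that the task lives at the default WindowsBackup path, so verify both on your own machine; the ‘wake the computer’ checkbox itself I still set in the Task Scheduler GUI.

    import subprocess

    # 1) Allow wake timers on the active power plan ("Sleep > Allow Wake Timers").
    #    SUB_SLEEP / RTCWAKE are the usual powercfg aliases for that setting;
    #    check "powercfg /aliases" if this errors out.
    subprocess.run(["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
                    "SUB_SLEEP", "RTCWAKE", "1"], check=True)
    subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"], check=True)

    # 2) Dump the AutomaticBackup task so you can confirm it exists and review
    #    its settings; the "wake the computer" box still gets ticked in the GUI.
    subprocess.run(
        ["schtasks", "/query", "/tn",
         r"\Microsoft\Windows\WindowsBackup\AutomaticBackup", "/v", "/fo", "LIST"],
        check=True,
    )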

The excerpt above is from the Mike Knowles blog, Jan 2010.

The location of all the password restrictions was not obvious to me, so I’m logging it here:

[group policy] > Computer Configuration > Policies > Windows Settings > Security Settings > Account Policies > Password Policy

There you will find all of the password rules for the domain.
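If you just want to see what is actually being enforced without opening the GPO editor, ‘net accounts /domain’ dumps the effective policy from any domain-joined box. A minimal wrapper, for the record:

    import subprocess

    # "net accounts /domain" prints the effective password policy (minimum length,
    # maximum age, lockout threshold, etc.) as enforced by the domain.
    result = subprocess.run(["net", "accounts", "/domain"],
                            capture_output=True, text=True, check=True)
    print(result.stdout)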

I entered the wrong information in a user’s Outlook setup. Then, every time we tried to run Outlook to correct the information, we couldn’t get past an error about connecting to a non-existent Exchange server.

Here is what you don’t do: uninstall Outlook and try to dig the previous profile’s options out of the registry. It’s a waste of time. After I thought I had gotten everything out of the system for that user, I reinstalled Outlook, started it, and it was still trying to use the previous incorrect setup.

Solution:

Control Panel > Mail

It’s so easy, and I didn’t know it existed because I’m somewhat anti-email-clients. Inside the Mail applet, you can just make another profile and then set that as the default profile for Outlook. You can even do all the settings there too. Boom, done.
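
For the curious: the Mail applet is really just managing MAPI profiles in the registry, so you can at least list what it knows about. The sketch below assumes the Outlook 2010-era profile location; newer versions keep profiles under the Office key instead, so treat the path as a guess to verify.

    import winreg

    # Outlook 2010-era / Windows 7 location; newer Outlook versions use
    # Software\Microsoft\Office\<version>\Outlook\Profiles instead.
    PROFILES_KEY = r"Software\Microsoft\Windows NT\CurrentVersion\Windows Messaging Subsystem\Profiles"

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, PROFILES_KEY) as key:
        try:
            default, _ = winreg.QueryValueEx(key, "DefaultProfile")
        except FileNotFoundError:
            default = None
        count = winreg.QueryInfoKey(key)[0]  # number of subkeys = number of profiles
        for i in range(count):
            name = winreg.EnumKey(key, i)
            print(("* " if name == default else "  ") + name)  # "*" marks the default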

Some keywords that I used trying to find this solution: reset outlook, reset outlook settings, zero out outlook settings, remove outlook profile, and can’t start outlook.

A 3rd party hosting company is the authority for our domain name (we host a lot there); however, I wanted to run a web server internally as well. Our Active Directory domain is also part of the registered domain that is controlled by the 3rd party host.

ourdomain.com = the website hosted elsewhere

tools.ourdomain.com = the webserver I wanted to be accessible internally and externally, with the same addresses

Solution:

External: Add a DNS record for the subdomain in the zone controlled by the hosting provider: name = tools, pointing at your external / WAN IP with an A record, or at a dynamic DNS hostname with a CNAME. On your router, forward HTTP / HTTPS requests to the IP address of the internal web server.

Internal: On your Windows Server, go to the DNS Server snap-in and browse to the Forward Lookup Zones. Right-click on Forward Lookup Zones and click New Zone… . Follow the prompts, probably choosing all of the defaults, and when it asks you for the zone name, type in the full address (in this example, tools.ourdomain.com). Click into the new zone, and in the right pane, right-click and click New Host (A or AAAA)… . Leave the Name blank, and for the IP Address, put in the internal web server’s IP.
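
If you’d rather script the internal half on the DNS server, dnscmd can create the zone and the blank-name host record. This is just a sketch; the zone name and 192.168.1.50 are placeholders from this example, and I still did it through the snap-in myself.

    import subprocess

    zone = "tools.ourdomain.com"
    internal_ip = "192.168.1.50"  # placeholder for the internal web server's IP

    # Create an AD-integrated primary zone for just the subdomain...
    subprocess.run(["dnscmd", "/ZoneAdd", zone, "/DsPrimary"], check=True)

    # ...then add an A record at the zone apex ("@"), the same as leaving the
    # Name field blank in the New Host dialog.
    subprocess.run(["dnscmd", "/RecordAdd", zone, "@", "A", internal_ip], check=True)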

It seems that the location of controlling users’ screensaver options (like requiring passwords to get back in) has changed a little in Windows Server 2008 R2. Edit the  Group Policy in which you’re interested, and browse to:

User Configuration | Policies | Administrative Templates | Control Panel | Personalization

There you will find all the options dealing with forcing themes and screen savers. Today, I was particularly interested in:

  • Enable screen saver
  • Prevent changing screen saver
  • Password protect the screen saver
  • Screen saver timeout
  • Force specific screen saver
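
To confirm the policy actually landed on a client after a gpupdate, the settings above should show up under the per-user Policies hive. The value names in this sketch are the usual ones for those settings, but I haven’t exhaustively verified every one, so treat a missing value as ‘go check the GUI’.

    import winreg

    POLICY_KEY = r"Software\Policies\Microsoft\Windows\Control Panel\Desktop"
    VALUES = ["ScreenSaveActive", "ScreenSaverIsSecure", "ScreenSaveTimeOut", "SCRNSAVE.EXE"]

    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, POLICY_KEY) as key:
            for name in VALUES:
                try:
                    value, _ = winreg.QueryValueEx(key, name)
                    print(f"{name} = {value}")
                except FileNotFoundError:
                    print(f"{name} = <not set>")
    except FileNotFoundError:
        print("Screen saver policy key not found; the GPO may not have applied yet.")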

If for whatever reason you need to delete a user from a Windows machine and then recreate that same user (because maybe you destroyed their registry), removing them as a user and deleting their users/ directory may not be enough. If you try to log back in as the user you thought you deleted, expecting a fresh new profile, you may get a notice saying that the user’s profile could not be found and a temporary one is in use.

Solution: (confirmed to work on Windows XP and Windows 7 boxes)

Logged in as an admin, pull open regedit.exe and browse to:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList

Expand the ProfileList in the left window pane and start clicking on those oddly named folders until you find one that has contents relevant to the user you’re trying to delete. Delete that folder. Now Windows will not remember that the user once existed, and will recreate the users/ folder for that user, and you’ll have your fresh start.
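
Before deleting anything, it helps to see which of those oddly named SID folders belongs to which profile. A small read-only sketch that lists them (run it as admin on the affected box):

    import winreg

    PROFILE_LIST = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PROFILE_LIST) as key:
        for i in range(winreg.QueryInfoKey(key)[0]):  # walk every SID subkey
            sid = winreg.EnumKey(key, i)
            with winreg.OpenKey(key, sid) as sub:
                try:
                    path, _ = winreg.QueryValueEx(sub, "ProfileImagePath")
                except FileNotFoundError:
                    path = "<no ProfileImagePath>"
            print(f"{sid}  ->  {path}")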

The pros and cons to roaming profiles are painful.

Awesome pro example: Your users have domain profiles, and they’re set to roam. The user’s laptop dies, and they need to be back online that day. You run out to the store, buy a $399 laptop (the user is a document processor), and attach it to the domain. When the user logs in with their domain credentials, their Desktop and Documents download to the machine. Besides installing software packages, they’re ready to go out-of-the-box.

Why is this freakin awesome?

  • Backups: The user doesn’t need to consciously think about backups. As long as they use Windows the way Windows forces them to (everything in the user’s Documents, Desktop, etc.), their profile gets synced with the server every time they log off.
  • Computer Migration: As described in the example above, migrating a user to another workstation or laptop (or between workstations and laptops) is as easy as them just logging in.

Unfortunately our jobs can’t be that easy. What are the nasty cons?

  • Some users don’t properly manage their files, and their profile can get huge. The admin will have to monitor profile sizes to make sure the servers are not overrun (see the sketch after this list for a quick way to check).
  • What if a user casually logs onto someone else’s workstation? Their entire profile is going to get downloaded to that workstation. (If there is a way to specify which workstations in the domain get the roaming profile, that would be AMAZING)
  • If the user is using a laptop, they need to be warned that when they log onto the laptop with their domain credentials there will be a lag, and they will be presented with a warning saying they are using a locally cached profile (which is fine; any changes they make will be synced with the servers the next time they log off on the domain’s network).
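
About that profile-size monitoring: a quick-and-dirty sketch like the one below will walk the roaming-profile share and print each profile’s total size. The \\server\profiles$ path is a placeholder for wherever your profiles actually live.

    import os

    PROFILE_SHARE = r"\\server\profiles$"  # hypothetical share path; substitute your own

    for entry in os.scandir(PROFILE_SHARE):
        if not entry.is_dir():
            continue
        total = 0
        for root, _dirs, files in os.walk(entry.path):
            for f in files:
                try:
                    total += os.path.getsize(os.path.join(root, f))
                except OSError:
                    pass  # skip files we can't stat (in use, permissions, etc.)
        print(f"{entry.name:30s} {total / 1024**3:6.2f} GB")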

An admin has to seriously weigh these pros and cons and decide what’s best based on the users’ setup and needs. I decided to use roaming profiles for the majority of my users (except for the power users that managed their own backups, and the Mac users).

Tip: Edit the group policy for the users that have roaming profiles so that ‘expendable’ folders are excluded. Folders like “Pictures” or “Music” might be deemed unworthy of company backups.

From Microsoft:

  1. Click the Group Policy tab, click the GPO that you want to work with, and then click Edit.
  2. Under User Configuration, expand Administrative Templates, expand System, and then click User Profiles.
  3. In the Setting list, double-click Exclude directories in roaming profile, and then click Enabled.
  4. In the Prevent the following directories from roaming with the profile box, type the appropriate folder names, separated by semicolons (;).

See the full Microsoft article.
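
To spot-check that the exclusion actually reached a client, my understanding is that this policy lands as a semicolon-separated ExcludeProfileDirs value under the per-user System policy key; I haven’t verified that on every Windows version, so confirm on a test client first.

    import winreg

    # Assumed location of the "Exclude directories in roaming profile" result;
    # verify on a test client before relying on it.
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                            r"Software\Policies\Microsoft\Windows\System") as key:
            dirs, _ = winreg.QueryValueEx(key, "ExcludeProfileDirs")
            print("Excluded from roaming:", dirs.split(";"))
    except FileNotFoundError:
        print("Exclusion policy not applied to this user (yet).")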

Let’s say you have a DFS Namespace folder targeting an actual share; it’s been in place for a few days, and a bunch of Windows 7 clients have been accessing it. Now you decide to have that namespace folder house a bunch of other namespace folders that target their own shares.

DFS Namespace is made for this idea: you have shares A, B, and C, and they’re all ‘awesome’. Go to your DFS Namespace snap-in, and inside your DFS root (the name of your namespace), ‘Add Folder…’, named something like ‘awesome’, but don’t set any targets. Now go into ‘awesome’ and ‘Add Folder…’ again, this time setting the target to ‘A’, and so on for B and C. Now users can browse to //mydomain/mynamespace/awesome/ and see the A, B, and C folders, even though the folder ‘awesome’ doesn’t exist in the file system.
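
For reference, I believe the command-line equivalent of those clicks is dfsutil’s link verbs (the exact syntax shifts between dfsutil versions, so check dfsutil /? before trusting this sketch). The names below are just the example ones from this post.

    import subprocess

    namespace = r"\\mydomain\mynamespace"
    targets = {"A": r"\\server1\A", "B": r"\\server1\B", "C": r"\\server1\C"}  # placeholders

    # One link per target share, nested under 'awesome'; if your dfsutil version
    # wants the target-less 'awesome' folder created first, do that in the GUI as above.
    for name, share in targets.items():
        subprocess.run(["dfsutil", "link", "add",
                        rf"{namespace}\awesome\{name}", share], check=True)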

I made the change and tested it on my Windows XP client; it worked flawlessly, as expected (I was working on an older laptop that hadn’t made the Win 7 change yet). Just to make sure, I walked to another office, logged into a Windows 7 machine, browsed to the namespace folder, and got an error that the network path could not be found.

WTF? This DFS Namespace concept works on WinXP clients but not Win7 clients? No; a test on another namespace container folder named ‘asdf’ worked fine. The problem was that Win7 was caching the old setup where the namespace folder targeted a share, since the new setup used the same folder name for the namespace folder that was now housing other namespace folders.

Solution:

If I knew how to tell Windows 7 to ‘clear your DFS Namespace cache’, I would have done that (would I have had to do that to all of the client machines?). I just changed the parent folder’s name slightly, and the Windows 7 boxes instantly caught on and browsed through that namespace folder fine.
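
For what it’s worth, dfsutil apparently can flush the client-side referral cache (it needs the DFS tools / RSAT bits installed on the Windows 7 box). I haven’t gone back and confirmed it against this exact problem, so consider this a pointer rather than a proven fix, run per client:

    import subprocess

    # Flush the cached DFS referrals, then the domain/root cache.
    subprocess.run(["dfsutil", "/pktflush"], check=True)
    subprocess.run(["dfsutil", "/spcflush"], check=True)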

I guess I overestimated DFS replication. I was migrating a huge production share from a single drive on a PC over to the two servers. In the new setup, each server had a 1 TB Western Digital Black hard drive dedicated to this folder of production data, and DFS Replication would be used to keep it duplicated and synced between the two servers.

Note: Instead of a NAS or just a mirrored array in one server, I wanted full hardware redundancy, and I wanted it abstracted through the DFS namespace so the users didn’t have to worry about it. This way, one of the servers could catch fire and melt into a puddle on the floor, and the users could still browse to the network share and access files. A single NAS or mirrored array is still one entity.

I waited for everyone to leave, and I did a straight copy from the old server to the new ‘server array’. This folder housed about 750 GB spread over more than 300,000 files. Everything seemed OK during the transfer, but when it was done, I had a huge mess. File services were throwing errors and warnings, LAN bandwidth was suffering, and due to my own carelessness, the nightly backups initiated, taking all of the still-trying-to-be-replicated files with them. ( The nightly backup is managed by the Windows Server Backup snap-in, and it goes to an external ioSafe )

System performance was abysmal, and I resorted to rebooting the two servers ( I can’t remember if I told them to stop replicating that folder at this point or not ). Unknown to me, both of their network adapters were hosed upon reboot, so communication between the two DCs was slow, users coming in early could barely log into the domain, and the DNS servers were AWOL. All of this seemingly because I flooded a replication folder without any plan of attack.

Solution:

Before anything else, I disabled / re-enabled the network adapters, and that fixed whatever happened to them. ( I know, wtf? Both servers had this same problem )

I scrapped the replication group for this production folder, I scrapped the physical files on the secondary server that had already been replicated, I scrapped the namespace targets to the folder, and started over. ( During this time, I redirected the users back to the old “file server” )

New plan of attack:

  1. Make separate replication groups for each ‘project’ in the large folder. This will result in about 20 groups (see the sizing sketch after this list).
  2. Make a namespace folder that will house the 20 replication groups.
  3. Publish each replication group into the aforementioned namespace folder.
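
To get a feel for step 1 before committing, something like the sketch below lists each ‘project’ folder with its total size and the combined size of its 32 largest files, which is the usual minimum staging-quota guideline for 2008 R2 DFSR (verify against current Microsoft guidance). D:\Production is a placeholder path.

    import heapq
    import os

    PRODUCTION_ROOT = r"D:\Production"  # hypothetical root holding the ~20 project folders

    for entry in sorted(os.scandir(PRODUCTION_ROOT), key=lambda e: e.name):
        if not entry.is_dir():
            continue
        sizes = []
        for root, _dirs, files in os.walk(entry.path):
            for f in files:
                try:
                    sizes.append(os.path.getsize(os.path.join(root, f)))
                except OSError:
                    pass  # skip files we can't stat
        total = sum(sizes) / 1024**3
        staging_min = sum(heapq.nlargest(32, sizes)) / 1024**3
        print(f"{entry.name:35s} total {total:7.2f} GB   staging >= {staging_min:6.2f} GB")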

Server performance went back to happy levels. Each group began to replicate, and after 30 minutes or so, the secondary server had completed replication of all the groups.

This solution comes with some benefits:

  • Now that each ‘project’ (around 50 GB each) is its own replication group, and the target of its own namespace location, I can spread the ‘projects’ around the server across multiple hard drives.
  • File Services warnings now have a little more meaning because they point to specific groups, and not just one 750 GB replication group. I can adjust staging files and such appropriately.

Based on the needs of a small company, I decided a Windows server environment was going to be the easiest to manage. This blog will act as a notebook of the problems and solutions that I encounter.

The company wanted a directory service for controlled security, and a better storage / back-up solution for all of their production data. The plan was to have a couple servers running Active Directory, DFS namespaces and replication, DNS, and VMWare Server. I chose the latest Microsoft server, Windows Server 2008 R2 ( released late 2009 ).