Friday, May 11, 2012

Acrylic DNS Proxy for Restricting Web Access

From time to time situations arise where a client wants to deploy a PC with very limited Internet access.  For example, one of our clients uses a web timesheet application and has a PC on the shop floor that hourly employees use to clock in and out.  How do you keep employees from also using that machine to visit other sites, waste time, catch malware, etc.?

Acrylic DNS Proxy is a good solution for this situation.  Not only does it do the job, it is free, open source software.  And when I emailed a stupid question to the developer, Massimo Fabiano, he responded within hours with a helpful reply.

To add to the "documentation" available on the web for Acrylic DNS Proxy, here are some things I've learned about it:
  • In the configuration file, adding even a single site to the [WhiteExceptionsSection] activates the software's "blacklisting" feature: all sites are then blocked except the ones you list in that section.
  • Wildcards are said to be accepted in the software's custom Hosts file, but they do not work in the configuration file.  So to whitelist a given web page, you have to track down all the secondary host names involved in loading that page and add each of them to the [WhiteExceptionsSection].
    • For example, it is not enough to whitelist mail.google.com to provide access to Gmail.  At a minimum, you also need to add accounts.google.com.  To get all the button images/text, you also need to add ssl.gstatic.com and clients2.google.com.  So, as you can see, it gets to be a fair amount of work if there are more than a few sites that people need to be able to access.
  • In a peer-to-peer local-area network with shared folders/drives, and no fully-qualified domain name, host names don't seem to work.  I added a FileServer machine to the Hosts file and tried to map drives to FileServer.  Unfortunately, the Acrylic logs show that Windows appends the network's DNS suffix to the host name (e.g. FileServer.router4290.local), and Acrylic blacklists the request.  So I map to the IP of the FileServer instead.  Not a big deal if it has a static IP.
  • When you need to download updates and patches to the PC, or otherwise open the machine up to the Internet for a short period of time, it is a simple matter.  Make a copy of the configuration file.  Delete all the entries in the [WhiteExceptionsSection] of the configuration file. Save the edited configuration file. Restart the Acrylic DNS service.  Now the machine can go anywhere on the Internet.  When you are done, replace the configuration file with the copy you made and restart the Acrylic DNS service.
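To give a feel for the whitelisting described above, here is a sketch of what the Gmail entries might look like in the configuration file.  The section name is the real one; the one-host-per-line layout is an assumption on my part, so check the comments in your own AcrylicConfiguration.ini for the exact syntax:

```ini
; Hypothetical sketch - everything is blocked except the hosts listed here.
; These are the host names mentioned above that Gmail needs to work fully.
[WhiteExceptionsSection]
mail.google.com
accounts.google.com
ssl.gstatic.com
clients2.google.com
```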
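If you do the open-up-and-restore dance often, the config editing can be scripted.  Here is a minimal Python sketch of the idea; the file names are assumptions based on a default Acrylic install, and after either step you still restart the Acrylic DNS Proxy service yourself (via services.msc, or net stop / net start with your install's service name) so it rereads the file:

```python
# Hypothetical sketch: temporarily empty the [WhiteExceptionsSection] of
# Acrylic's configuration file so the machine can reach the whole Internet,
# keeping a backup copy so the whitelist can be restored afterward.
import shutil

CONFIG = "AcrylicConfiguration.ini"   # file name in the Acrylic folder (assumed)
BACKUP = "AcrylicConfiguration.ini.whitelist-backup"

def open_up(config=CONFIG, backup=BACKUP):
    """Back up the config, then drop every entry under [WhiteExceptionsSection]."""
    shutil.copyfile(config, backup)
    out, in_section = [], False
    with open(config) as f:
        for line in f:
            stripped = line.strip()
            if stripped.startswith("["):
                # Track which section we are in; keep all section headers.
                in_section = (stripped == "[WhiteExceptionsSection]")
                out.append(line)
            elif not in_section or not stripped:
                # Keep everything outside the section; inside it, keep only blanks.
                out.append(line)
    with open(config, "w") as f:
        f.writelines(out)

def restore(config=CONFIG, backup=BACKUP):
    """Put the saved whitelist back when updates are done."""
    shutil.copyfile(backup, config)
```

Remember to restart the Acrylic service after each call, since the proxy only reads the configuration file at startup.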

Thursday, May 10, 2012

SBS 2011 Standard Migration

We recently migrated a client from Microsoft Small Business Server 2003 to Small Business Server 2011 Standard on new hardware.

We performed a migration install on the new server machine. We did the whole migration in the course of one weekend to minimize the chances of a bad outcome - data loss, email loss, business disruption, etc.

Users did not have access to the network while we were doing the migration. Email ports on the firewall were closed as well. If the migration went awry, the plan was to restore the source server from the Friday backup, without losing any mail or data.

It turned out that a "fast" migration has its challenges. Moving mailboxes from the source server to the destination server went very smoothly and fairly quickly. We moved 60 gigs of email without losing a single message in about 4 hours.

Robocopy-ing users' shared folders from the source server to the destination was a different story - copying went very slowly. We needed to copy about 300 gigs of data, but it was quickly apparent that the copying wasn't going to finish before Monday morning. 

Network and CPU utilization on both machines was very low.  We tried running several instances of robocopy to increase resource utilization and the speed of the job, but no luck.  In pre-migration testing we did not see this problem; robocopy was reasonably quick in our sandbox environment.
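For reference, the shared-folder copies were robocopy jobs along these lines.  The paths and flags below are an illustration, not the exact command we ran; note that multithreaded copying (/MT) only exists in the robocopy that ships with Windows 7 / Server 2008 R2 and later, so in a 2003-to-2011 migration it is only available when you run the copy from the destination side:

```batch
REM Illustrative sketch, run from the destination server (paths are hypothetical).
REM /E copies subfolders including empty ones, /COPYALL preserves NTFS ACLs and
REM timestamps, /R:1 /W:1 keeps retries from stalling the job, /MT:16 uses 16
REM copy threads, and /LOG writes a log you can check afterward.
robocopy \\OLDSERVER\Users D:\Shares\Users /E /COPYALL /R:1 /W:1 /MT:16 /LOG:C:\Logs\users-copy.log
```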

We were in a bad spot.  Going forward with our original migration plan and schedule was no longer an option.  Rolling back to the old server was not going to look good with the client, and if we did so, it was not clear what the additional time and costs would be and who would pay them.

Winging it in the middle of a system implementation is not a best practice.  On the other hand, you cannot anticipate every possible problem.  So, we got a little creative while we still had time before we got to the point of no return for rolling back to the old server.

Here's what we did, and fortunately it worked.  We stopped the robocopy jobs and "finished" the migration.  We demoted the source server and removed it from the domain.  With Exchange and Active Directory no longer running on the source server, we restarted the robocopy jobs.  This time the copying went MUCH quicker: 300 gigs copied in about 6 hours.

Come Monday morning, the new SBS 2011 was live and in charge.