Category Archives: Technology

Backing Up or Migrating PuTTY Settings

WARNING: This post involves playing around with your operating system’s registry. You use this information at your own risk. For other warnings, please see the disclaimer.


PuTTY is one of the go-to apps that pretty much every network engineer has in his or her toolbox.  Anyone who uses it frequently tends to customize its settings to make it more personally appealing.

 Of course, it’s also one of the apps that I always – and I do mean always – forget about when I reload an OS, buy a new machine or do anything else that involves changing my user profile.
Fortunately, PuTTY is a rather simple app and backing up and restoring its settings is pretty easy.  It’s just a matter of backing up and restoring a registry key.
Click on “Start”, type “regedit” and hit <ENTER>:
registry editor open.png

Browse to HKEY_CURRENT_USER>Software (this is where PuTTY keeps its per-user settings) and select the “SimonTatham” key:


Right-click on the “SimonTatham” key and select Export:


HKLM.SW.ST.Right-Click Export.png


Browse to your preferred location, type in a name and click “Save”:



Once you’ve done this, all of the settings for PuTTY are saved in the file which was just exported.  To restore these settings (or apply them for the first time to a new OS install, new profile, new computer, etc.), install PuTTY and then double-click on the file.

When you do that, you’ll get a UAC prompt.  If you accept that, you’ll get this warning:

reg save warning.png

Click on “Yes” and those registry values will be written into your registry.
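If you’d rather script the whole thing than click through Regedit, the built-in ‘reg’ tool can do the same export and import from a Command Prompt.  The file path here is just an example; put the backup wherever you like:

reg export HKCU\Software\SimonTatham "%USERPROFILE%\putty-backup.reg" /y

Then, on the new machine or profile, after installing PuTTY:

reg import "%USERPROFILE%\putty-backup.reg"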

I hope you found this helpful!

DFS Folder Redirection Woes… and a Fix!

I’m a fan of folder redirection, however it does have a couple of “Gotchas!” you have to look out for.  For example, if you redirect a user’s AppData folder to a DFS namespace, shortcuts on the taskbar are no longer trusted.  Here’s how to fix that.

If I encounter a network with multiple sites, especially if one of those sites is our datacenter, I generally implement DFS and replicate data between the sites.  I’ve been a big supporter of using DFS since way back in the Windows NT 4.0 Option Pack days.

However, DFS does create some unique challenges from time to time and I recently ran into one of those when I enabled Folder Redirection on our internal network and pointed the redirected folders to a DFS path.

When one redirects the AppData folder, the Taskbar shortcuts are relocated as part of that.  Once that happens, the following error occurs:

Error - can't verify.png

The shortcut still works; it’s just that since it’s on the network (which isn’t, by default, trusted), one must confirm the shortcut is safe by clicking on “Open”.


Searching around the Interwebs, I learned this setting is controlled via Internet Explorer’s security zones.  Kind of stupid, but whatever.

The common solution to this problem is:

On the Security tab in Internet Explorer’s options, select the Local Intranet zone and disable Protected Mode for it:

Int Opt - Local Int - PM Off - with markup.png
Click “Sites” and make sure all of the checkboxes are selected:

Local Intranet Advanced.png

This is where things started to get a little frustrating.  This seems to be all one has to do in order for servers on the network to be trusted, but it only works if the UNC path is a traditional \\server\share format.  It doesn’t work if you point to a DFS root because no server name is indicated in the UNC path.

As it turns out, if one is using DFS for Folder Redirection, those settings may not have to be enabled once the Local Intranet zone is sufficiently tweaked.

In the current window, select “Advanced” and add the UNC for the DFS root with a wildcard (*) as the server name:

Local Intranet Advance - Add site.png

The wildcard indicates that any host which is part of the domain will be trusted.  This includes not indicating a host at all, which is why DFS made this whole process so wonky.  This is also why the checkboxes in the previous window probably don’t matter… by including the wildcard, all servers in the domain are trusted.  If, however, there are servers on the local network that aren’t part of the domain, I’ll still need to worry about those checkboxes… along with the normal authentication issues that need to be addressed.

Make sure the server verification setting is NOT selected and click “Add”:

Local Intranet Advance - Site Added.png

You’ll see the UNC path added to the list of trusted websites (which, apparently, means “websites and everything else”).

Keep in mind, we’re dealing with a web browser here, so everything is done from that perspective.  If you don’t indicate this is a UNC path by including the double backslashes, IE will assume it’s a website:

No slashes.png

Because of this, the only way data on the local network would be trusted is if one accessed it via a web browser.  By adding the DFS root as a UNC path, access to the data is trusted across the network via shortcuts (and other methods of access).  To see this, click “Close” after adding the DFS root UNC path and then click “Advanced” again:

Local Intranet Advance - Site Added - auto changed.png

The UNC path has been changed into a standard URL with the “file” scheme.  If you have websites on your intranet which are accessed via web browsers, go ahead and add the path again without the backslashes for good measure.  I don’t know if that’s necessary – if you find yourself in that situation, please test it out and let me know.

Submit all of the changes and get out of Internet Explorer.  Test the shortcuts, and they should work fine.

I hope this helps out.  As always, your feedback is greatly appreciated!

Folder Redirection through Group Policy

One thing I’ve always found frustrating is no matter how many times one asks the end users to not save things on their local machines, they do it anyway.  Forget that we don’t back up the desktops – only the servers.  Well, let’s sneak their data onto the servers without them knowing about it.


The basic idea of Folder Redirection is configuring Windows clients to store the contents of certain folders on the network instead of the local machine.  To do this, you’ll need a shared folder on a server.

Generally speaking, I always give Full Control to Domain Admins and then whatever permissions I feel users need.  In the case of redirecting folders, users will need Change permissions.

But here’s a gotcha!  Make sure the NETWORK account has Full Control permissions on the share.

Share Permissions

Once that’s done, you’ve got your destination.
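If you’re scripting the share creation, the ‘net share’ command can grant those same share permissions in one shot (the share name and path here are just examples):

net share Redirected=D:\Redirected /GRANT:"Domain Admins",FULL /GRANT:NETWORK,FULL /GRANT:Users,CHANGE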

Next, create the Group Policy Object and edit the settings at User Configuration > Policies > Windows Settings > Folder Redirection:

GP Mgr Folder.png


To do this, right-click on whichever folder you wish to redirect and select “Properties”:

Folder Properties.png

There are a few options available, but I’m going to just choose basic redirection where everyone’s stuff goes to the same place:

Folder Settings Window.png

Point the root path to the shared folder you created.  Go to the Settings tab and (if you wish) have the existing contents of the user’s local folder copied to the network location:

Desktop Properties Settings Tab.png
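To make the path structure concrete: with basic redirection, Windows creates a subfolder for each user under the root path.  So, with a hypothetical root path of \\FileServer01\Redirected, a user named jsmith ends up with a Desktop folder at \\FileServer01\Redirected\jsmith\Desktop.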

As you redirect more folders to the same location, Windows will build the folder tree for each user automagically:

Redirected Folder Tree.PNG

Fair warning:  The first time a user logs in, it’s going to take a while if you’ve redirected a lot of folders and they have a lot of stuff in them.  Especially with AppData, Music and Pictures.

Give this a shot and let me know what you think!



How to Reset a Lost SA Password in Microsoft SQL Server

This article explains how to reset the password of the sa account on a Microsoft SQL Server.  The steps in this article work in SQL 2005, 2008, 2008 R2, 2012, 2014 and 2016.
It’s happened to all of us.  Someone installs a SQL server and then promptly forgets the sa password without documenting it.  Normally, it’s not that big of a deal – just log in with a different account, right?  Oh, wait… there aren’t any.

Recently, I ran into a scenario where a corrupt Active Directory had been rebuilt, however the SQL server logins got lost in the shuffle.  Consequently, there was no way to log into the SQL server except for the sa account, and that password was long-forgotten.  Fortunately, there’s a way to fix this problem without reinstalling SQL and reattaching the databases.

Microsoft SQL Server has the ability to launch in Single-User Mode.  In this mode, any account that is a member of the local Administrators group will be able to log in to the server with sysadmin privileges.  To launch in Single-User Mode, one must use a startup parameter for the SQL instance in question.

While in Single-User Mode, only one user can be connected at a time (as the name would imply).  You’ll connect to and interact with the SQL instance from a command prompt using SQLCMD commands.

1. Open an elevated Command Prompt.

Admin-Cmd-Prompt.png

2. Stop the SQL Instance.  The default is MSSQLSERVER.

net stop MSSQLSERVER

net-stop-mssqlserver.png

3. Start the SQL Instance using the ‘/m’ switch and specify that you’ll use SQLCMD to interact with the instance.  The input following the ‘/m’ switch is case-sensitive.  There’s no indication you’re connected in Single-User Mode, so don’t worry if you don’t see anything.


net start MSSQLSERVER /m"SQLCMD"


4. Connect to the instance with SQLCMD.  Just type ‘sqlcmd’ and hit <ENTER>.  You’ll find yourself at a numbered prompt.  This means you’re connected to the default instance.  If you want to specify a particular SQL instance, just use the appropriate SQLCMD switches.  The syntax will be:

sqlcmd -S ServerName\InstanceName



5. From here, you use Transact-SQL (T-SQL) commands to create a login.  I’m going to create a login called “RecoveryAcct” and give it the password “TempPass!”.  Since you’re issuing T-SQL commands directly, you’ll need to use the ‘GO’ command, too.  There’s no indication the command was successful; you just end up back at a ‘1>’ prompt.  If you don’t get an error, you can assume all is well.


CREATE LOGIN RecoveryAcct WITH PASSWORD = 'TempPass!'
GO


6.  Now, use more T-SQL commands to add the user to the SysAdmin role.  Again, you’ll need to use the ‘GO’ command and if you don’t get an error, you can assume all is well.


EXEC sp_addsrvrolemember 'RecoveryAcct', 'sysadmin'
GO

sqlcmd-add-role.png

7. To exit SQLCMD, type ‘exit’ and hit <ENTER>.  Next, stop the SQL instance and then start it again without the ‘/m’ switch so it is no longer in Single-User Mode.

net stop MSSQLSERVER && net start MSSQLSERVER


net-stop-and-net-start-mssqlserver.png

8. Launch SQL Server Management Studio using SQL Authentication and log on as the user you just created.

sqlmgmtstud-logon-with-recovery.png

9. Now, you can look at your security settings and make whatever changes you need.  You’ll see your recovery account listed.

You can also use the SQL Server Management Studio if you’re not comfortable with SQLCMD.  When you are starting the instance in Single-User Mode, you can always specify “Microsoft SQL Server Management Studio” instead of “SQLCMD” after the ‘/m’ switch in step 3.

net start MSSQLSERVER /m"Microsoft SQL Server Management Studio"


I hope this helps you out.




An Overview of DHCP

Configuring network clients can be a chore, especially if there are a large number of them or a lot of itinerant users.  DHCP dynamically manages this process, much to the relief of users and administrators alike!

Dynamic Host Configuration Protocol (DHCP) is a standard protocol used by most networks to assign IP information to clients so administrators don’t have to worry about assigning a static IP address to each device.

For example, let’s say you are the administrator of a corporate network and you have been tasked with creating a wireless network for guests so they can access the internet when they are at your company’s location.

Imagine if you had to assign every address manually.  Each guest user would have to physically hand you their phone, tablet, laptop, etc. and you would have to then log on to their device and configure the wireless network settings manually, typing in the IP address, subnet mask, default gateway, DNS server settings and whatever else they needed.

Since IP addresses must be unique on the network, you’d have to make sure you had a spreadsheet or notepad handy to record which IP addresses you had assigned so you didn’t accidentally assign the same IP address more than once.  Of course, this also means the guest users have to come back and let you remove the settings you configured so you could then note on your documentation that the IP address is available for you to assign to someone else.

Obviously, this would be a nightmare for everyone involved.

If you were to configure DHCP for that wireless network, all of this would be handled for you dynamically, and your time could then be better spent handling another virus outbreak than explaining to users, yet again, that opening attachments in emails from emissaries of African royalty looking for help with financial transactions is still a bad idea.

So, how does all the magic happen?

In order for this to work, there are two DHCP components which must be able to communicate with each other:  the DHCP Server and the DHCP Client.  The DHCP client is software whose job is to simply ask a DHCP server for an IP address.  The DHCP server is quite a bit more complex.

DHCP Scopes

dhcp-server-scope.png

The DHCP Server is what manages the entire process.  Its job is to listen for client requests and then give them the information they need to communicate with the network.

This information is stored in the DHCP Scope.  The information the scope contains will be used by the clients to configure their network settings.  Because of this, the scope must define, at minimum:

  • A range of IP addresses which can be assigned to clients
  • The subnet mask for the network
  • The default gateway address to be used by the clients

The default gateway is actually optional, however without it, clients won’t be able to connect to any other network, including those on the internet.

Other information typically defined in the scope includes:

  • The DNS servers the clients will use
  • The default DNS domain name for the clients
  • Time servers the clients will use for time synchronization

There are a ton of other options, but the ones I’ve listed are the most common.

DHCP Leases

Remembering that IP addresses assigned on the network must be unique, the DHCP server has to keep track of what IP addresses are currently in use so it doesn’t hand out duplicates.  It also has to make sure devices don’t keep their IP addresses indefinitely, otherwise the server could eventually run out of addresses to assign.

Because of this, clients are not given an IP address to own.  Instead, they are given a lease on an IP address.  The Lease Duration is the amount of time the client can use that IP address.  Clients will attempt to renew their lease before it expires, and the server will almost always renew it (there are a few exceptions, of course).

DHCP Reservations

dhcp-server-reservations.png

There are times when it might be necessary to make sure certain devices always use the same IP address no matter what, but you still want that device to be configured via DHCP.  To do this, simply create a DHCP Reservation.  As the name suggests, a DHCP reservation reserves a specific IP address for a specific client (identified by its MAC address).  A great example is a network printer.  You can create the reservation in the DHCP scope and when you connect the printer to the network, it automatically gets configured with the correct information.

DHCP Exclusions

dhcp-server-exclusions.png

One other option worth mentioning is DHCP Exclusions.  DHCP scopes can be configured to exclude IP addresses from being assigned to clients even though those addresses are part of the range of addresses the DHCP server hands out.  This way, if you have devices that must have a static IP address, but can’t be configured via DHCP (or you just don’t want to configure reservations), you can have that device on the network without worrying about its IP address being given to something else.

DHCP takes most of the work and worry out of managing client connectivity.  Not only does it automatically configure clients with a host of administrator-defined options, it also documents the IP information for each client.

Put it all together, and DHCP is a powerful tool which should be in every network administrator’s toolbox.

I hope you’ve found this article informative!



A Simple Explanation of Group Policy Inheritance in Active Directory

WARNING:  This post involves playing around with Active Directory, so don’t do this in a production environment.  You use this information at your own risk.  For other warnings, please see the disclaimer.

Group Policy is an incredibly powerful feature in Active Directory that allows one to implement specific configurations for users and computers. By creating Group Policy objects (GPOs), administrators can apply thousands of different settings to objects within Active Directory by linking the GPO to sites, domains, or organizational units (OUs).

Unfortunately, Group Policy’s flexibility can also increase its complexity.  It’s one thing to specify a single setting, such as a password complexity rule, to the entire domain.  It’s an entirely different thing to specify unique configurations for thousands of users or computers spread across different geographic areas.  One area where there can be confusion is in determining which settings are applied to a particular user or computer when multiple policies exist.

Inheritance in Group Policy works very similarly to inheritance when it comes to NTFS permissions.  The basic rule is “settings on parent objects are inherited by child objects”.

For example, let’s say you have an Organizational Unit (OU) hierarchy as follows:

AD-1.PNG

Every Active Directory domain has a “Default Domain Policy”, a Group Policy Object (GPO) which contains the default settings for the domain.  That GPO is linked to the domain:


Because it is linked to the domain, every OU under the domain inherits the settings of the Default Domain Policy GPO.

Let’s say the Default Domain Policy configures users to get a green desktop background.  Regardless of where your user account is in the domain, you end up with a green desktop because the settings in the Default Domain Policy are inherited by all child objects (everything in the domain).

ddp-green.png

Nobody has to enforce this; it’s just how Group Policy works.

Now, let’s say that you need to create some settings for your sales department.  So you create a GPO called “Sales Stuff” and you link it to the Sales OU:

ssgpo.png

Once you do that, the settings in Sales Stuff are applied to everything in the Sales OU, including Managers, Sales Reps and Sales Admin and everything they contain.  Again, this is just how Group Policy works.

When multiple GPOs are applied, they are applied from the top down.  So, the first GPO applied is the Default Domain Policy and the second is the Sales Stuff.  (It’s not quite like that, but close enough for this discussion).

As each policy is applied, it will overwrite conflicting settings that previous policies applied.  In our example, the Default Domain Policy GPO changes the desktop color to green.  But, let’s say the Sales Stuff policy has the desktop color set to yellow.

Well, the first policy applied when you log on is the topmost policy.  That’s the Default Domain Policy.  So, it changes the setting on your computer to make the desktop background green.  However, the Sales Stuff policy is applied next and it changes the setting to make the desktop background yellow.

The end result is your desktop is yellow.

Keep in mind, this only applies to configured settings which conflict with each other.  In this case, the desktop color.  But if the Default Domain Policy also dictated what kind of mouse pointer you had, and the Sales Stuff policy didn’t specify one, that setting wouldn’t be overwritten by the Sales Stuff GPO and would still apply.

Well, the CEO will have none of that!  By God, those desktops are going to be green, or some heads are going to roll!

No problem.  In your Group Policy Management console, right-click on the Default Domain Policy and select “Enforced”.

Now, the Sales Stuff policy cannot overwrite the Default Domain Policy settings (and neither can any other GPO).  So, when you log on, any setting the Sales Stuff policy would have overwritten, including the desktop color, are kept intact.
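The whole precedence story boils down to a tiny sketch.  This is a toy model, not anything Group Policy actually executes: policies apply top-down, the last writer wins, and Enforced locks a setting against later overwrites:

```shell
#!/bin/sh
# Toy model of GPO precedence (illustrative only).
desktop=""
locked=""

apply_policy() {  # usage: apply_policy <name> <color> <enforced: yes|no>
  if [ "$locked" = "yes" ]; then
    return        # an enforced setting can't be overwritten later
  fi
  desktop=$2
  locked=$3
}

# Without enforcement: the last policy applied wins.
apply_policy "Default Domain Policy" green no
apply_policy "Sales Stuff" yellow no
echo "desktop is $desktop"   # yellow

# With the Default Domain Policy enforced: it wins.
desktop=""; locked=""
apply_policy "Default Domain Policy" green yes
apply_policy "Sales Stuff" yellow no
echo "desktop is $desktop"   # green
```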

ssp-green.png

So, regardless of the Sales Stuff settings, your desktop is green.

This is a very simplified explanation, but I hope it might clear up some fog on how this works.


Creating a (VERY) Basic Router for a Hyper-V Private Network – Part Three: Configuring Ubuntu as a Router


In Part One, I created the virtual switches for a lab network that looks kind of like this:

Goal Virtual Network.png


In Part Two, I installed Ubuntu Linux on a virtual machine.

To finish the project, I need to do a few things:

  • Add a second network interface to the VM
  • Add a route on the firewall
  • Configure networking
  • Enable routing
  • Update the OS
  • Optimize it for virtualization

Adding a Second Network Interface to the VM

During the creation of the VM, I assigned the network adapter to the private virtual switch.  Now, I need to add the external virtual switch so it can route between the two.

Open Hyper-V Manager, select the Virtual Router VM and select “Settings” in the Action Pane:

Hyper-V Mgr.png

In the left pane, under “Hardware”, select “Add Hardware”.  In the right pane, select “Network Adapter” and then click “Add”:

Add Network Adapter.png

Select the external virtual switch and click “OK”:

Adapter Settings.png

At this point, I could start the VM and perform some initial updates and optimization, but first I need to configure network connectivity.

Add Route on the Firewall

The logical network looks like this:


The firewall needs to know how to get traffic back to the private virtual network.  So, I entered the following route into the firewall:

route inside

(It might be different for your firewall. YMMV.)

Configuring Networking on Ubuntu Linux

When you first start the VM after adding the second adapter, you’ll probably get a window asking to revert to the previous checkpoint.  Just continue and don’t revert.

Log into the VM using the username and password configured during installation:

Logon to VM.png

The first command I’ll run is simply used to see what network interfaces are recognized.  This can sometimes be a little flaky in Ubuntu.  The command for this is “ifconfig -a”.  The “-a” switch shows all interfaces, regardless of whether they are in an up or down state.

Run “ifconfig -a” on the VM:

ifconfig -a.png

All three interfaces are there, which is a relief.  The loopback interface might be a surprise to you, but this interface exists by default and is, obviously, assigned the loopback address ‘’.

The first interface was created and configured during installation and is named ‘eth0’.  It would be nice to see if it actually works, so I’m going to test connectivity by pinging another VM on the same virtual switch.  The server’s IP is

Ping another VM on the virtual switch:

ping VM test.png

You’ll see where the first attempt failed; I had to disable the Windows firewall on the other VM.  Once I did that, I was able to ping both ways.

The second interface is the one we just added via Hyper-V Manager and its name is ‘eth1’.  It has no configuration, so that needs to happen now.  Again, the logical network will look like this:


The network is the private virtual switch (obviously, I hope) and the network is the external virtual switch.  The interface ‘eth0’ is configured correctly for the private network and I’ll configure the external network as follows:

IP Address:
Subnet Mask:

To do this, I’ll edit /etc/network/interfaces using the nano text editor.  Since it’s a system file, I’ll need to run this with elevated privileges using ‘sudo’ and enter my password.

Issue the command ‘sudo nano /etc/network/interfaces’ and edit the text file:

nano interfaces.png

I modified the comments to make them more meaningful to me.  I also added the section for interface ‘eth1’.  Next, I need to bring the interface up and then restart the network.
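For the record, a static stanza for ‘eth1’ in /etc/network/interfaces looks something like this (the addresses here are placeholders; use whatever fits your network):

auto eth1
iface eth1 inet static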

Issue the commands ‘sudo ifconfig eth1 up’ and ‘sudo /etc/init.d/networking restart’:

ifconfig and restart.png

Test connectivity by pinging the firewall on the external network:

test external.png

I love it when things actually work.  But, can I ping all the way to the internet?

test external.png

Yep.  ‘Woot’, and all that.  Now, I can update the OS and configure routing.

Configuring Routing on Ubuntu Linux

This part is very easy.  One command and a reboot.

Edit /etc/sysctl.conf and uncomment the line ‘#net.ipv4.ip_forward=1’ by removing the ‘#’.  Then save the file, and reboot:

edit sysctl.png
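If you’d rather not wait for a reboot, the same setting can be flipped on the fly (the sysctl.conf edit is still what makes it permanent):

sudo sysctl -w net.ipv4.ip_forward=1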


After the reboot, I tested routing by attempting to ping an internet address from the server on the private virtual switch:

ping from server.png


Updating Ubuntu Linux

One thing I forgot to do was configure DNS on either of the interfaces.  I’ll do that now by editing /etc/network/interfaces.

Edit /etc/network/interfaces and then save the file, and restart networking.  Then test using ‘dig’ and a well-known site:

restart after nameservers.png

test DNS.png

Now, I can update the OS.

The first command used is ‘apt-get update’.  This doesn’t actually apply updates.  Rather, it’s used to update the local list of packages and dependencies from the repositories.  You’ll have to do this before you actually apply updates.

Issue ‘sudo apt-get update’ command (make sure you use sudo… you’ll see errors at the top where I forgot):

apt-get update.png

Next, I’ll install the system patches and upgrades with ‘apt-get dist-upgrade’.

Issue ‘sudo apt-get dist-upgrade’ command (again, I forgot to use sudo… apparently, because I’m stupid):

apt-get dist-upgrade.png

A list of new packages and package upgrades is shown.  You can accept or reject them.  This is one of the things people rave about with Linux… you have all this control.  Blah, blah, blah.  I’ll accept the changes and let it do the upgrades.

apt-get dist-upgrade confirm.png

This could take a while.  After that, I’ll install package patches and upgrades using ‘apt-get upgrade’.

Issue ‘sudo apt-get upgrade’ command (Hey!  I remembered to sudo!):

apt-get upgrade.png

Looks like everything is up-to-date, so on to installing some virtualization packages.

Optimizing Ubuntu for Virtualization

For information regarding what virtualization tools are supported in Hyper-V, Microsoft has published some good info.

According to Microsoft, we want to perform the following operations:

  1. Disable Network Manager
    This isn’t running, so no worries here.
  2. Install the virtual HWE kernel
    Issue the command ‘sudo apt-get install linux-virtual-lts-xenial’

    apt-get install linux-virtual-lts-xenial.png

  3. Install the Hyper-V daemons for VSS Snapshot, KVP and fcopy.
    Issue the command ‘sudo apt-get install linux-tools-virtual-lts-xenial linux-cloud-tools-virtual-lts-xenial’.

    apt-get install more daemons.png

That’s it!  All done and ready for lab work.  Hope you find this useful!


Creating a Virtual Machine in Hyper-V

Virtualization is incredibly useful.  Among other things, the ability to create virtual machines allows one to consolidate hardware, create more resilient networks and play around in lab environments without investing in expensive hardware.  I’ll show you how to create a basic virtual machine.

I’m using Windows 10 Professional, but the steps are pretty much the same in all versions of Windows that have Hyper-V.

First, you need to make sure Hyper-V is installed on your computer.  Once that’s done, launch your Hyper-V manager.

Creating a Virtual Machine

In Hyper-V Manager, select “New” under the “Actions” pane on the right side of the Hyper-V Manager window:

Select New VM.png

When the wizard launches, just click next:

Before you begin.png

I’m creating a virtual router for a test environment, so I’ll be installing Ubuntu Linux.  It doesn’t require a lot of resources, which is nice.  You’ll want to make sure you set up a virtual machine that has the settings you need.

For the “Specify Name and Location” window, give the VM a name and then decide where you want to store it.  Then, click “Next”:

Name and Location.png

You’ll need to choose what generation machine you’ll create.  If you’re migrating a VM from a previous version of Hyper-V, if you’re installing a 32-bit OS or you’re creating a non-Windows VM, you’ll want to create a Generation 1 machine.  Otherwise, go with Generation 2.  I’m installing Linux, so Gen 1 it is!

In the “Specify Generation” window, select the appropriate generation and click “Next”:


When it comes to memory and drive space, make sure you configure settings that are adequate for the intended purpose of the virtual machine.  For my little router, I don’t need much, so 2GB is fine.  Regardless, I recommend using dynamic memory… it keeps things efficient by only allocating physical memory when necessary.

In the “Assign Memory” window, enter the appropriate amount of memory and ensure the “Use Dynamic Memory for this virtual machine” box is checked and then click “Next”:

Assign Memory.png

When it comes to networking, you need to select a virtual switch to use for connectivity.  Or, you can simply decide to not use anything.  I need connectivity, so I’m selecting a virtual switch.

In the “Configure Networking” window, select the desired connection and then click “Next”:

Config Net.png

In the “Connect Virtual Hard Disk” window, give your VHD a name, browse to where you want it stored, and give it a size appropriate to its use.  Once you’re satisfied, click “Next”:

Connect Vir Dsk.png

You can choose several ways to install your OS.  It doesn’t really matter at this point; you can always mount media later.  I’ve got the ISO downloaded and ready to go, so I’m going to go ahead and mount it now.   This doesn’t actually install the OS; it simply mounts the media and the next time you launch the VM, it will boot to that media.

In the “Installation Options” window, select the appropriate option and configure it.  Once you’re done, click “Next”:

Install Opt.png

By the way, you’re not hallucinating.  There actually is an option to install from floppy disk.

In the “Completing the New Virtual Machine Wizard” window, click “Finish”:



That’s it!  Now you can connect to the VM, install the OS and have just a whale of a time.
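If you end up creating VMs often, the whole wizard collapses into a single PowerShell command using the New-VM cmdlet (the name, path, sizes and switch name here are just examples):

New-VM -Name "Virtual Router" -Generation 1 -MemoryStartupBytes 2GB -NewVHDPath "C:\VMs\VirtualRouter.vhdx" -NewVHDSizeBytes 40GB -SwitchName "Private"

Note that this doesn’t enable dynamic memory; that’s a follow-up with Set-VM -DynamicMemory if you want it.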

Hope you find this useful!


Installing Hyper-V on Windows Server 2016


Virtualization has changed the world of computers and networks in some pretty drastic ways.  If you’ve never played around with it, I’d recommend you do so; it’s one of the more useful technologies out there.

First thing you’ll need to do is make sure your computer is configured to run Hyper-V.  This is a setting you’ll find in your BIOS.  Where it’s located varies from computer to computer, but it will live somewhere in the CPU settings and will look something like this:


One common mistake that trips people up is simply rebooting the computer after the BIOS settings are changed.  You actually need to power the computer all the way down and then turn it back on.

Installing Hyper-V

Once you’re back into the OS (I’m using Windows Server 2016 Standard), go into Server Manager, select the “Manage” menu and then “Add Roles and Features”:

Server Mgr Add Features.png

When the wizard launches, just click next:

AF Wizard.png

For the “Select installation type” window, accept the default setting and click “Next”:

AF Install Type.png

In the “Select destination server” window, accept the default setting and click “Next”:

AF Dest Srv.png

In the “Select server roles” window, select “Hyper-V” and click “Next”:

AF Select Role.png

A window will pop up asking if you want to add the features required for Hyper-V.  I’m not sure why they ask about this; I mean, if you don’t add these features, you won’t be able to manage the role.

Make sure the “Include management tools (if applicable)” check box is checked and then click “Add Features”:

AF Add Features.png

In the “Select features” window, accept the defaults and click “Next”:

AF Select Features.png

In the “Hyper-V” window, click “Next”:

AF Hyper-V.png

In order for the virtual machines you create to have access to your network and/or the internet, you’ll need to select a network adapter on your computer to use for a virtual switch.

If you’re going to play around with this stuff, I’d recommend using two NICs in your computer:  one dedicated to Hyper-V and one dedicated to your computer.  If you only have one, that’s okay, too.

In the “Create Virtual Switches” window, select the appropriate network adapter and click “Next”:

AF Create VS.png

In the “Virtual Machine Migration” window, accept the defaults and click “Next”:

AF VM Mig.png

The next window allows you to select where to store your virtual machines and virtual hard drives.  I generally like to create a folder somewhere specifically for storing these, but that’s up to you.

Either browse to a different location or accept the default settings and click “Next”:

AF Def Stores.png

On the “Confirm installation selections” window, click “Install”:

AF Confirm.png

At this point, the Hyper-V role and features will be installed and you’ll have to reboot the server.  Once it’s back up, you’re ready to create a virtual machine.
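If you prefer PowerShell to wizards, the entire role installation (reboot included) is one line from an elevated prompt:

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

Just note that this skips the wizard’s virtual switch step, so you’ll want to create one afterward with New-VMSwitch.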