Specify the Virtual NIC Name for KVM Bridged VMs

When working with KVM bridged interfaces, KVM automatically names the virtual NIC that is created when the VM starts. The names typically follow the convention:

vnet0, vnet1, vnet2, ..., vnetN

These virtual NIC names are assigned dynamically each time a VM starts, so a VM is not guaranteed to receive the same virtual NIC name after a restart.  Generally speaking, this is not a problem.  However, what if you *need* a script or some other function to know which virtual NIC is allocated to a specific VM?  There are ways of scripting around the dynamic names, but to avoid that headache, it is often simpler to specify a fixed, hard-coded name for the VM's generated virtual NIC.  To do this, use the virsh command-line utility.

To implement this, follow the steps below as a user that has rights to use the virsh command:

  1. Run the command: virsh
  2. At the virsh console, type: edit <domain/VM name> (substitute the name of your VM here)
  3. This opens a vi-like editor for your VM's XML definition. NOTE: I am assuming that you are using a standard bridged setup. I have not tested this with non-bridged setups, and especially not with libvirt-managed bridges. Thus, your mileage may vary.
  4. Locate the XML entry for your network setup. It should look something like this:
    <interface type='bridge'>
          <mac address='00:11:22:33:44:55'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

    You need to add a line to the interface tag that looks like this:
    <target dev='the_name_of_your_nic'/>

    Note: the name needs to be a valid Linux interface name: at most 15 characters, and all-lowercase letters, digits, and underscores are safe. As an example, I named my VMs' virtual NICs like this:
    vm1_net, vm2_net, vm3_net, ..., vmN_net

  5. Once it’s entered, it should look something like this:
    <interface type='bridge'>
          <mac address='00:11:22:33:44:55'/>
          <source bridge='br0'/>
          <target dev='vm1_net'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
  6. Save the changes and start the VM.
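An invalid target name will prevent the VM from starting, so it can help to sanity-check the name before editing the domain XML. Below is a minimal shell sketch; is_valid_ifname is a hypothetical helper, the 15-character limit comes from the kernel's IFNAMSIZ, and the lowercase/underscore charset is simply the convention used in this article:

```shell
# Check that a proposed <target dev=...> name is a usable Linux
# interface name before putting it in the domain XML.
is_valid_ifname() {
  name="$1"
  [ -n "$name" ] || return 1          # must not be empty
  [ "${#name}" -le 15 ] || return 1   # kernel limit: IFNAMSIZ - 1 = 15 chars
  case "$name" in
    *[!a-z0-9_]*) return 1 ;;         # stick to lowercase, digits, underscores
  esac
  return 0
}

is_valid_ifname vm1_net && echo "vm1_net is usable"
```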

Once everything is set, you should see something like this from the ifconfig command (on newer systems, ip link show lists the same interfaces):

vm1_net   Link encap:Ethernet  HWaddr 00:11:22:33:44:55  
          inet6 addr: fe80::fc54:ff:fec7:11/64 Scope:Link
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:468 (468.0 b)  TX bytes:468 (468.0 b)

vm2_net   Link encap:Ethernet  HWaddr 00:11:22:33:44:56  
          inet6 addr: fe80::fc54:ff:fec7:22/64 Scope:Link
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:0 (0.0 b)  TX bytes:468 (468.0 b)

This guarantees that the VM will always start with the virtual NIC name that you specify. In my case, I have VM1 using vm1_net, and VM2 using vm2_net.
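One payoff of fixed names is that scripts no longer need to query libvirt to find which vnet interface belongs to which guest: under a consistent scheme, the interface name can be derived from the VM name. A minimal sketch, where vm_nic is a hypothetical helper and vmN_net is the naming scheme used above:

```shell
# Derive the tap interface from the VM name under the vmN_net scheme,
# then use it wherever a fixed interface name is required.
vm_nic() {
  printf '%s_net\n' "$1"
}

iface=$(vm_nic vm1)
echo "$iface"                 # -> vm1_net
# Example use (commented out; requires the VM to be running):
# tcpdump -i "$iface" -c 10
```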

VirtualBox Bridged Networking Driver Problems

For most people this will not be an issue; however, a few individuals experience network problems when the VirtualBox Bridged Networking driver is installed on the *host* machine.

The Problem:

Some systems running Windows 7 with the “VirtualBox Bridged Networking” driver installed lose network connectivity when resuming from hibernation.  The only way to fix this is to either reboot the machine or disable/re-enable the NIC.

This bug has been reported here: http://www.virtualbox.org/ticket/4677, but it doesn’t seem like it will ever be fixed :(

The temporary solution:

Until Oracle gets around to fixing this bug, the steps below will work around the problem.  Do note that they disable the bridged networking feature of VirtualBox.  However, this method also gives you a simple way to re-enable the feature if you need it.

  1. Click the Start Menu / Start Orb.
  2. Type: “View network connections”
  3. Press Enter.
  4. A window should appear with a list of all the network devices attached to your system.
  5. Right click the adapter that is giving you a problem > Properties
  6. Uncheck “VirtualBox Bridged Networking Driver”
  7. Click OK, and you’re all set.

To re-enable the feature after disabling it this way, follow the same steps and re-check the box.

Alternatively, you can opt out of installing the VirtualBox Bridged Networking driver altogether.  However, you would then have to re-run the installer to get the feature back.

Enable NIC Teaming/Bonding in Linux with Cisco Catalyst 6000/6500 Series Switches

Just about all new servers today have multiple NICs installed.  A great way to take advantage of those NICs is to team/bond them together: NIC teaming/bonding adds availability/redundancy and higher aggregate bandwidth to your server.

There are many different types of teaming that you can do.  This site lists all the available modes under Linux: http://www.linuxhorizon.ro/bonding.html **NOTE: NIC Bonding/Teaming on Windows is determined by the network driver.  Please consult your vendor’s documentation to enable the feature.**

Some of the NIC teaming modes will work without any additional switch configuration; however, others will not.  I will demonstrate how to perform NIC bonding on Ubuntu Linux 10.04 using mode 4: 802.3ad dynamic link aggregation.  This mode requires switch configuration, since it uses the LACP protocol.  I will demonstrate how to configure a Cisco Catalyst switch to support this feature, and setting it up is easier than you might think.  Although these instructions were performed on Ubuntu, they should also work on other Linux distributions.  The switch configuration instructions apply equally to Windows servers using 802.3ad (LACP) NIC teaming.

Host Configuration (Linux):

The instructions I used to set up NIC bonding on Ubuntu Linux 10.04.1 were very nicely outlined in a YouTube video. (The video uses round-robin bonding, which is mode 0; I would recommend mode 4: link aggregation.)

For those who don’t have access to YouTube, follow the instructions below:

  1. Install ifenslave. On Ubuntu this can be performed by: sudo apt-get install ifenslave
  2. Create aliases for the bond.  To do that, create a new file at /etc/modprobe.d/aliases.conf (newer versions of modprobe ignore files in /etc/modprobe.d without a .conf extension).  On Ubuntu, simply running the command: sudo nano /etc/modprobe.d/aliases.conf will open a text editor and create the file.
  3. In the file, type the following and then save the file:
    alias bond0 bonding
    options bonding mode=4 miimon=100
  4. Modify the interfaces file to add the bond0 interface. To do that, first open /etc/network/interfaces. On Ubuntu, simply run the command: sudo nano /etc/network/interfaces.
  5. Comment out all existing network interfaces that are active.
  6. Add the following to the file:
    auto bond0
    iface bond0 inet static
        address <<insert ip here>>
        netmask <<insert netmask here>>
        gateway <<insert gateway here>>
        slaves eth0 eth1 # Place in the nic interfaces you are bonding
        # Place whatever other network info you need here.
  7. Save the file.
  8. Shutdown your server, and configure your switch. Once your switch is configured, turn your server on.

For troubleshooting, remember that the OS treats your bond0 interface like any other Ethernet interface: the ifup and ifdown commands work on it as usual.
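Once the bond is up, the kernel also exposes its state under /proc/net/bonding/. A small sketch for pulling out the essentials; bond_summary is a hypothetical helper, while the default path is the real status file on a bonded host:

```shell
# Summarize a bonding status file: bonding mode, slave interfaces,
# and MII (link) status lines.
bond_summary() {
  file="${1:-/proc/net/bonding/bond0}"   # default: the kernel's bond0 status
  grep -E '^(Bonding Mode|Slave Interface|MII Status):' "$file"
}

# On a bonded host, running: bond_summary
# prints lines like:        Bonding Mode: IEEE 802.3ad Dynamic link aggregation
```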

Host Configuration (Windows):

As mentioned above, this is dependent upon your drivers.  Please consult your driver documentation and software to enable this option.

Switch Configuration (Cisco Catalyst 6000/6500 Series [Running IOS]):

Log into your switch.  Once logged in, you must enter enable mode.  Next, follow the steps below:

  1. Enter global configuration mode: conf t
  2. Select the interface range used for the bonded/teamed interfaces: int range [starting interface] - [last interface number]
  3. With the interface range selected, enter the following commands:
    shut
    switchport access vlan <<insert your vlan number here>> # All bonded interfaces must be on the same vlan
    channel-group <<insert the channel group number here>> mode active # See below for more information
    # Place more interface configuration here (if needed)
    no shut  # Don't forget to turn the interfaces back on
  4. Test the configuration, and save it if you are happy with it.

In step three above, you create a channel group. A channel group tells the switch to treat the specified interfaces as one logical entity. You can have more than one, so pick a number corresponding to the host you are providing the NIC teaming for. There are two modes that provide LACP functionality: active and passive. Cisco recommends active mode by default, but I have found that this depends on the server: on the Windows servers I have worked with, active mode worked just fine, while on the Linux servers I have tried this with, passive mode worked better.
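Putting the switch-side steps together, here is a hedged end-to-end sketch. The interface names, VLAN 10, and channel group 1 are placeholders; adjust them to your chassis and layout:

```
conf t
interface range GigabitEthernet 1/1 - 2
 shutdown
 switchport                          ! make sure the ports are in L2 mode
 switchport access vlan 10           ! all bonded ports on the same VLAN
 channel-group 1 mode active         ! creates interface Port-channel1
 no shutdown
end
```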

That should be it. If I made any mistakes, or if you feel that more should be added, please feel free to leave a comment.
