When working with KVM bridged interfaces, KVM will automatically name the virtual NIC that is spawned when the VM is started. This typically follows a naming convention of:
vnet0, vnet1, vnet2, ..., vnetN
The virtual NIC names are dynamically assigned each time a VM starts, so a spawned VM is not guaranteed to receive the same virtual NIC name when it is restarted. Generally speaking, this is not a problem. However, what if you *need* a script or some other function that must know which virtual NIC is allocated to a specific VM? There are ways of scripting around this, but to avoid the headache, it may be easier to simply specify a fixed, hard-coded name for the VM's generated virtual NIC. To do this, you use the virsh command line utility.
To implement this, follow the steps below as a user that has rights to use virsh:
- Run the command: virsh
- At the virsh console, type the command: edit <domain/VM name> (substitute the name of your VM here)
- This will open a vi-like interface for editing the XML definition of your VM. NOTE: I am assuming that you are using a standard bridged setup. I have not tested this with non-bridged setups, and especially not with libvirt-managed bridged setups, so your mileage may vary.
- Locate the XML entry for your network setup. It should look something like this:
<interface type='bridge'>
  <mac address='00:11:22:33:44:55'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
You need to add a <target dev='...'/> line to the interface tag. Note: the name of the NIC needs to be a valid interface name; all-lowercase letters and underscores work. As an example, I named my VMs' virtual NICs like this:
vm1_net, vm2_net, vm3_net, ..., vmN_net
- Once it’s entered, it should look something like this:
<interface type='bridge'>
  <mac address='00:11:22:33:44:55'/>
  <source bridge='br0'/>
  <target dev='vm1_net'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
- Save the changes and start the VM.
Once everything is set, you should see something like this if you run the ifconfig command on the host:
vm1_net   Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet6 addr: fe80::fc54:ff:fec7:11/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:468 (468.0 b)  TX bytes:468 (468.0 b)

vm2_net   Link encap:Ethernet  HWaddr 00:11:22:33:44:56
          inet6 addr: fe80::fc54:ff:fec7:22/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 b)  TX bytes:468 (468.0 b)
This guarantees that the VM will always start with the virtual NIC name that you specify. In my case, VM1 uses vm1_net, and VM2 uses vm2_net.
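If you later need to confirm which target device a VM's definition carries (say, from a script), the name can be pulled straight out of the domain XML. This is a minimal sketch run against a sample of the XML above; on a real host you would pipe the output of virsh dumpxml <domain> instead of the shell variable:

```shell
# In practice: virsh dumpxml <domain> | grep -o "target dev='[^']*'" | cut -d"'" -f2
# Sketched here against a sample of the XML shown above:
xml="<interface type='bridge'>
  <source bridge='br0'/>
  <target dev='vm1_net'/>
</interface>"
printf '%s\n' "$xml" | grep -o "target dev='[^']*'" | cut -d"'" -f2
```

This prints the fixed NIC name (vm1_net in this sample), which a monitoring or firewall script can then use directly.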
I love using gedit to make changes to config files in Linux. However, I have recently encountered some odd issues where config files that I edit with gedit just don't work properly, while making the exact same changes with vim causes no issues.
Looking at both files (one edited with gedit, the other with vim), they look exactly the same… or so I thought. Apparently, gedit likes to add a \r (carriage return) to the end of some lines. This is a hidden character, so if you open the file with vi/vim, you won't see it. This hidden character can have a very nasty side effect: some applications will not properly parse the file, and as a result your application (or OS) will not work (talk about a great way to perform a nasty DoS attack).
This is the type of problem that will make you pull your hair out trying to solve. So, the solution? Either use vim or nano. If you must use gedit, do a find and replace: search for "\r" and leave the replace textbox blank. This removes all instances of \r. Your file will "look" exactly the same, but you will have eliminated the pesky hidden carriage return characters causing all the problems.
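If you suspect a file has been mangled this way, you can check for and strip the carriage returns from the command line. A quick sketch with standard tools (dos2unix, where installed, does the same job); config.txt is just a placeholder name:

```shell
# Make hidden characters visible; a CR shows up as ^M before the $:
cat -A config.txt
# Count the lines that contain a carriage return:
grep -c "$(printf '\r')" config.txt
# Strip every CR, writing a clean copy of the file:
tr -d '\r' < config.txt > config.clean
```

The tr line is safe to run on any text file: it only deletes \r bytes and leaves everything else untouched.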
Just about all new servers today ship with multiple NICs. A great way to take advantage of those NICs is to team/bond them together: NIC teaming/bonding adds availability/redundancy and can provide higher bandwidth to your server.
There are many different types of teaming that you can do. This site lists all the available modes under Linux: http://www.linuxhorizon.ro/bonding.html **NOTE: NIC Bonding/Teaming on Windows is determined by the network driver. Please consult your vendor’s documentation to enable the feature.**
Some of the NIC teaming modes work without any additional switch configuration; others do not. I will demonstrate how to perform NIC bonding on Ubuntu Linux 10.04 using mode 4 (802.3ad dynamic link aggregation). This mode requires switch configuration, since it uses the LACP protocol. I will also demonstrate how to configure a Cisco Catalyst switch to support this feature, and setting it up is a lot easier than you think. Although these instructions were performed on Ubuntu, they should also work on other Linux distributions, and the switch configuration is equally applicable to Windows servers using 802.3ad (LACP) NIC teaming.
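For quick reference, these are the standard Linux bonding modes (covered in detail at the link above):

```
mode 0  balance-rr     (round-robin)
mode 1  active-backup
mode 2  balance-xor
mode 3  broadcast
mode 4  802.3ad        (LACP link aggregation -- the mode used in this article)
mode 5  balance-tlb
mode 6  balance-alb
```

Modes 0, 2, and 4 generally need cooperating switch configuration; mode 1 (active-backup) is the safest choice when you cannot touch the switch.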
Host Configuration (Linux):
The instructions that I used to set up NIC bonding in Ubuntu Linux 10.04.1 were very nicely outlined in this YouTube video. (The video uses round-robin bonding, mode 0; I would recommend mode 4, 802.3ad link aggregation.)
For those who don’t have access to YouTube, follow the instructions below:
- Install the ifenslave package. On Ubuntu this can be done by running:
sudo apt-get install ifenslave
- Create aliases for the bond. To do that, a new file needs to be created at /etc/modprobe.d/aliases. On Ubuntu, simply running the command sudo nano /etc/modprobe.d/aliases will open a text editor and create the file.
- In the file, type the following and then save the file:
alias bond0 bonding
options bonding mode=4 miimon=100
- Modify the interfaces file to add the bond0 interface. To do that, open /etc/network/interfaces. On Ubuntu, simply run the command: sudo nano /etc/network/interfaces
- Comment out all existing network interfaces that are active.
- Add the following to the file:
auto bond0
iface bond0 inet static
    address <<insert ip here>>
    netmask <<insert netmask here>>
    gateway <<insert gateway here>>
    slaves eth0 eth1   # Place the NIC interfaces you are bonding here
    # Place whatever other network info you need here.
- Save the file.
- Shut down your server and configure your switch. Once the switch is configured, turn your server back on.
For troubleshooting, remember that the OS treats your bond0 interface like any other Ethernet interface, so commands such as ifdown will work on that interface.
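The kernel also exposes live bond status under /proc/net/bonding, which makes a handy sanity check that the bond actually came up in 802.3ad mode. The sketch below greps a sample of the file's format (the driver version string and details will differ on your host); on a real server you would read /proc/net/bonding/bond0 directly:

```shell
# In practice: grep 'Bonding Mode' /proc/net/bonding/bond0
# Sample of the file's format, for illustration:
sample="Ethernet Channel Bonding Driver: v3.7.1
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up"
printf '%s\n' "$sample" | grep 'Bonding Mode'
```

If the reported mode is not 802.3ad, revisit the options bonding line in /etc/modprobe.d/aliases and reload the bonding module.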
Host Configuration (Windows):
As mentioned above, this is dependent upon your drivers. Please consult your driver documentation and software to enable this option.
Switch Configuration (Cisco Catalyst 6000/6500 Series [Running IOS]):
Log into your switch. Once logged in, you must enter
enable mode. Next, follow the steps below:
- Configure your terminal: conf t
- Select the interface range you are using for the bonded/teamed interfaces:
int range [starting interface] - [last interface number]
- Once you have the interface range you wish to modify, add in the following commands:
shut
switchport access vlan <<insert your vlan number here>>   # All bonded interfaces must be on the same vlan
channel-group <<insert the channel group number here (explained below)>> mode active   # See below for more information
# Place more interface configuration here (if needed)
no shut   # Don't forget to turn on your interface
- Test the configuration, and then save it if you are happy with it.
In step three above, you need to create a channel group. A channel group tells the switch to treat the specified interfaces as one logical entity. You can have more than one channel group, so pick the number corresponding to the host you are providing the NIC teaming to. There are two modes that provide LACP functionality: active and passive. Cisco recommends enabling active mode by default, but I have found that this depends on the server. On the Windows servers I have worked with, active mode worked just fine; however, on the Linux servers I have tried this with, passive mode worked better.
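Once the port-channel is up, a couple of standard IOS show commands will confirm that LACP negotiated correctly (exact output varies by IOS version):

```
show etherchannel summary   ! port-channel status and member ports
show lacp neighbor          ! LACP partner details reported by the server
```

In the etherchannel summary, the port-channel should be flagged as in use with LACP, and each member port should be listed as bundled.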
That should be it. If I made any mistakes, or if you feel that more should be added, please feel free to leave a comment.