Category Archives: Virtualization

virt-manager without root

To be able to execute virt-manager without root privileges:

– create a new group

# groupadd libvirt

– Add the required users to this group by editing the /etc/group file
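
Alternatively (a quick sketch, assuming the standard usermod tool; replace <username> with the actual user):

# usermod -a -G libvirt <username>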

– Edit the libvirtd configurations:

# vi /etc/libvirt/libvirtd.conf

– Add the following configurations.

unix_sock_group = "libvirt"
auth_unix_rw = "none"

– Restart libvirtd,

#service libvirtd restart

– Log out, then try launching virt-manager as the unprivileged user:

$ ssh -X <username>@<host> virt-manager

./arun

Monitor VMware ESXi hardware without root (Nagios)

Download and configure the plugin: http://exchange.nagios.org/directory/Plugins/Operating-Systems/*-Virtual-Environments/VMWare/check_esxi_hardware-2Epy/

– Create a new user (e.g. esxi_access) in ESXi with no access privileges; you need to log in to the ESXi host directly to do that.

– Enable SSH, and add nagios user to root group:
# vi /etc/group
root:x:0:root,nagios

– Check from the command line, if it works
./check_esxi_hardware.py --host https://esxihost:5989 --user file:credentials.txt --pass file:credentials.txt
OK - Server: Cisco Systems Inc.....

– Configure the credentials file with the nagios user credentials.
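
For reference, the credentials file is a plain-text file read by the plugin; a minimal sketch (the username=/password= key names are my assumption from the plugin documentation, so verify against your plugin version):

username=nagios
password=<password_of_the_nagios_user>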

Error when enabling SMTP Restrictions – cPanel/WHM

SMTP restrictions prevent users from bypassing your mail server to send mail.
This feature allows you to configure your server so that the mail
transport agent (MTA), Mailman mailing list software, and root user
are the only accounts able to connect to remote SMTP servers.

Enable from WHM as :

Home >> Security Center >> SMTP Restrictions

When doing so, do you face this error?

An error occurred attempting to update this setting.
The SMTP restriction is disabled.

When trying to do it from the backend,

# /scripts/smtpmailgidonly on

SMTP Mail protection has been disabled. All users may make smtp connections.
There was a problem setting up iptables. You either have an older kernel or a
broken iptables install, or ipt_owner could not be loaded.

In most cases, the required iptables module, 'ipt_owner', is not loaded.
You can confirm it by running # /etc/csf/csftest.pl

If yours is a VPS, ask the provider to enable it for you, or if you manage your own server, load it with the command:

# modprobe ipt_owner
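
Afterwards, verify that the module is loaded and re-run the script (the grep pattern is intentionally loose so it also matches xt_owner on newer kernels):

# lsmod | grep -i owner
# /scripts/smtpmailgidonly on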

Could not connect to https://vcenter_address:7331/

This usually happens from the vSphere web client while opening a console session to a virtual machine.

The log (/var/log/vmware/vsphere-client/logs/vsphere_client_virgo.log) shows something like:

[ERROR] Thread-42 System.err
INFO:oejsh.ContextHandler:started o.e.j.w.WebApp Context{/console,file:/tmp/jetty-0.0.0.0-7331-console.war-_console-any-/webapp/},/usr/lib/vmware-vsphere-client/server/work/tmp/console-distro/webapps/console.war

To fix this, set the environment variable VMWARE_JAVA_HOME to the proper path:

– SSH to the vCenter server
# vi /usr/lib/vmware-vsphere-client/server/wrapper/conf/wrapper.conf

– Under Environment variables add:
set.default.VMWARE_JAVA_HOME=/usr/java/jre-vmware

– Restart vsphere-client
# /etc/init.d/vsphere-client restart
Stopping VMware vSphere Web Client...
Stopped VMware vSphere Web Client.
Starting VMware vSphere Web Client...
Intializing registration provider...
Getting SSL certificates
Service with name was updated.
Return code is: Success
Waiting for VMware vSphere Web Client......
running: PID:

Reference: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2060604

vCenter Converter

Convert Linux Physical Server to VMware virtual machine

Download and install vCenter Converter on a Windows machine:
http://www.vmware.com/products/converter
Unfortunately this tool does not have a Linux / Mac version.

vCenter Converter

In case you see an error: “Permission to perform this operation was denied”, right click and run the program as Administrator.

Provide the source and destination information: the source is the physical server to be converted and the destination is the vCenter.
vCenter Converter

Follow the wizard steps to do the conversion. A temporary helper OS will be started on the destination; by default it tries to get an IP address from a DHCP server so that it can connect to the source machine and fetch the required files. If you don't have a DHCP server, you might see an error like: "Unable to obtain the IP address of the helper virtual machine". Fix this by assigning a static IP to the helper virtual machine during the conversion setup. The helper VM's IP must be able to reach the source machine that is being migrated.

vCenter Converter Static IP

Proceed with the conversion; the duration will depend on the size of the VM and on the network connectivity if the source is on another site/LAN.

You may need to change the network configuration (eg: HWADDR) and the MAC address mapping (/etc/udev/rules.d) to get it connected.

Virtuozzo – Basics

Virtuozzo is a software application for enterprise server virtualization that allows an administrator to create virtual environments on a host computer at the operating system (OS) layer. Instead of having one physical machine run multiple operating systems simultaneously, as in the virtual machine model used by VMware, Virtuozzo approaches virtualization by running a single OS kernel as its core and exporting that core functionality to various partitions on the host.

Each of the partitions effectively becomes a stand-alone entity called a virtual private server (VPS).

Installation in a CentOS box:
Before proceeding with the installation of Virtuozzo, make sure you have the /vz partition, or create it if you are installing on a fresh server.

/vz contains all container data and the Parallels Virtuozzo Containers templates.

INSTALLATION
Download the vzinstall-linux-x86_64.bin utility from the official site.
Make the script executable by # chmod a+x vzinstall-linux-x86_64.bin
Run the script by # ./vzinstall-linux-x86_64.bin

You will get the following wizard:
You can either download and install now, or only download for a later installation on this or another computer.
The configure options allow you to adjust the various parameters that the Virtuozzo containers use during execution. If you select the download-only option, after the download is over, go to the download directory (root/virtuozzo/Download), copy the contents of this directory to the system where you plan to install Virtuozzo, and execute the following script:

# ./virtuozzo-4.7.0-<build_version>-x86_64.sfx

If you select the download-and-install option, you can do it in one of 3 ways:
Default: Select this radio button to download and install the Parallels Virtuozzo Containers program files and one OS template—CentOS 5 (you will need this OS template to create Containers on its basis).

Full: Select this radio button to download all available OS templates to the server and install them there.

Custom: Select this radio button to customize the set of OS templates to download to and install on the server. In this case, once you click the Next button, you will see the Select Templates window where you can choose the necessary OS templates for downloading

In the next step of the wizard, click Download to start downloading Parallels Virtuozzo Containers and the selected templates to the server.

In the next step you will be asked for the license key.

Install a valid Parallels license by entering the license key number in the field provided and clicking Next. If you plan to activate Parallels Virtuozzo Containers with an activation code, make sure that your server is connected to the Internet.

Finally, the installation program displays the Congratulations window.

Leave the Install PVA Agent and Install PVA Management Node check boxes selected to set up the Parallels Virtual Automation application and its components on the server once you restart it. With Parallels Virtual Automation, you can connect to the server and manage Containers using your favorite browser. If you select both check boxes, the installer does the following after restarting the server:

1. Downloads the installation packages for Parallels Virtual Automation from the Parallels website.

2. Installs the PVA Agent component on the server. PVA Agent ensures the interaction between your server, the Management Node (see below), and Parallels Virtual Automation. Without this component installed, you will not be able to connect to your server using Parallels Virtual Automation.

3. Creates a special Container on the server and installs the PVA Management Node component inside it. PVA Management Node (also called Master Server) ensures the communication between the server running Parallels Virtuozzo Containers (known as Slave Server) and the Parallels Virtual Automation application. The Master Server keeps a database with the information about all registered Slave Servers.

If you have already set up a Master Server, you can skip this step (clear the Install PVA Management Node check box).

After this step you will be asked for the IP address, hostname, and DNS of the container which will act as the PVA Management Node.

To log in to Parallels Virtual Automation, launch a Web browser compatible with PVA. The list of currently supported Web browsers is given below:

• Internet Explorer 6.0 and above
• Firefox 2.x and above
• Safari 3.x and above

On the Master Server or any other computer, open your favorite Web browser and log in to Parallels Virtual Automation by typing the Master Server IP address or hostname and TCP port 4648 in the address bar.

http://ipaddressofpvm:4648
Log in using the username and password of the container which acts as the PVA Management Node.

Manually setting up PVA and the Management Node
Create the container: # vzctl create CTID --ostemplate centos-6-x86_64 --hostname "hostname"
Set the IP address and nameserver for the created container which will act as the MN:
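
A sketch of the commands, assuming standard vzctl options (the IP address and nameserver below are placeholders):

# vzctl set CTID --ipadd 192.0.2.10 --save
# vzctl set CTID --nameserver 192.0.2.1 --save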

# vzctl start CTID
# vzpkg install CTID -p perl-DBI

Download PVA Management Node installer

# wget http://download.pa.parallels.com/pva/pv ... loy.x86_64
# chmod a+x pva-setup-deploy.x86_64
# ./pva-setup-deploy.x86_64 -d /vz/root/CTID/root/ --extract
# vzctl enter CTID
# cd /root
# ./pva-setup --install

ESXi host fails with a purple diagnostic screen PSOD

This happened while converting KVM VMs to VMware and powering them on (method used: http://arunnsblog.com/2013/06/10/migrate-kvm-virtual-machines-to-vmware-esxi/). It worked for a while, but then the ESXi host crashed with a PSOD.

Version : 5.1.0-799733

There were two sorts of PSOD messages observed:
1) Crashed while the VM was running

 VMware NOT_IMPLEMENTED bora/vmkernel/sched/memsched.c:17724
 Code start: 0x41802b200000 VMK uptime: 10:19:25:27.335
 cpu4:8243)0x412200cdbaf0:[0x41802b27abff]PanicvPanicInt@vmkernel#nover+0x56 stack: 0x3000000008
 cpu4:8243)0x412200cdbbd0:[0x41802b27b4a7]Panic@vmkernel#nover+0xae stack: 0x100000000000000
 cpu4:8243)0x412200cdbc50:[0x41802b3d88eb]MemSched_WorldCleanup@vmkernel#nover+0x426 stack: 0x4100018a4fb0
 cpu4:8243)0x412200cdbef0:[0x41802b3033b8]WorldCleanup@vmkernel#nover+0x1cb stack: 0x4700cdbf40
 cpu4:8243)0x412200cdbf60:[0x41802b303829]WorldReap@vmkernel#nover+0x318 stack: 0x0
 cpu4:8243)0x412200cdbff0:[0x41802b2483c8]helpFunc@vmkernel#nover+0x517 stack: 0x0
 cpu4:8243)0x412200cdbff8:[0x0] stack: 0x0
 cpu4:8243)base fs=0x0 gs=0x418041000000 Kgs=0x0

VMWare_ESXi_PSOD

2) Crashed during ESXi reboot.

#PF Exception 14 in world 8243:helper13-1 IP 0x41802b880a1e addr 0x410401503020

VMWare_ESXi_PSOD

This seems to be a known issue in VMware ESXi 5.1 and is resolved in patch ESXi510-201212401-BG (Build 914609).
Ref: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2038767

To work around this issue, SSH to the ESXi host and increase the MinZeroCopyBufferLength to 512.

# esxcli system settings advanced set -o /BufferCache/MinZeroCopyBufferLength -i 512

To verify that the setting has been updated, run this command:

# esxcli system settings advanced list --option /BufferCache/MinZeroCopyBufferLength

Before and after change

Migrate KVM virtual machines to VMware ESXi

– Shutdown the KVM guest
– Convert the QCOW2 or raw image to VMDK format:

# qemu-img convert image.img -O vmdk image.vmdk

– Upload this image to the datastore

– Create a new virtual machine with this disk image

– There might be issues with network interface mapping; fix the network mapping in /etc/udev/rules.d/70-persistent-net.rules, for example as sketched below.
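
One common approach is to remove the stale rules and let udev regenerate them with the new (VMware) MAC addresses on the next boot; a sketch, assuming your distribution regenerates the file automatically:

# mv /etc/udev/rules.d/70-persistent-net.rules /root/70-persistent-net.rules.bak
# reboot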

Create NAS/SAN storage with openfiler, work with VMware ESXi as shared storage

– Download the Openfiler installation ISO from the Openfiler website.

I have downloaded the Installation ISO image (x86/64).

Basically we need to create Openfiler as a virtual machine with, say, 20 GB of thin-provisioned storage, and attach another disk to the virtual machine to configure it as the SAN storage.

– Install the Installation ISO image (x86/64) as a virtual machine; nothing fancy here, just do a normal installation.

– Once rebooted, you get a web interface to log in on port 446: https://<ip>:446/

– Login with username: openfiler, and password: password

Network access configuration:

Set up the network access configuration: enter the networks/hosts that are allowed to access the storage.

System –> Network Access Configuration

Network Access Config

Create a new physical volume

Volumes –> Block devices

Create Physical Volume

Click on Edit devices for the hard disk where we are going to create the new physical volume; this is the extra hard drive we added to the virtual machine at the beginning (/dev/sdb).

Create Physical Volume

Select Physical volume as your partition type (assuming that you are not using RAID), set the mode to Primary, and click Create.

Create Physical Volume

Create new Volume Group

Let us create a volume group for the physical volumes

Volumes –> Volume groups: enter a group name, select the physical volume, and click Add volume.

Create volume group

Create the Volume

Volume –> Add Volume

Enter the Volume Name, Description, and required space, and select the filesystem/volume type as block.

Create Volume

Now Enable and Add the iSCSI Target

Services –> iSCSI Target: Enable, Start

Start iSCSI target

Volumes –> iSCSI Targets –> Target Configuration –> Add new iSCSI Target

Add new iSCSI target

Set up the LUN mapping and allow access to the iSCSI target

Setup LUN Mapping

Allow access to iSCSI target

Now Openfiler is ready to use.

In VMware ESXi

Create a new VMkernel network adapter to use for the iSCSI connection

VMKernel Adapter

Click on storage adapter –> Add new

Add storage adapter

Click on Properties, add the VMkernel adapter, and then discover the iSCSI target.

Add VMKernel to iSCSI adapter

Dynamic discovery

Now your ESXi will show the openfiler as a datastore 🙂

Openfiler datastore

More information about Openfiler:

http://www.openfiler.com/products

Performance issues with KVM – Redhat

The general performance issue with KVM is due to disk I/O.

– By default, the Red Hat KVM guests are created with the x86_64 architecture; if you installed a 32-bit operating system, change this to i686:

<os>
<type arch='i686' machine='rhel5.6.0'>hvm</type>
<boot dev='hd'/>
</os>

– Make sure the hypervisor/domain type used in the configuration is correct, either qemu or kvm:

<domain type='kvm'>

or

<domain type='qemu'>

– Use virtio drivers if the guest is paravirtualized (see http://www.ibm.com/developerworks/linux/library/l-virtio/index.html?ca=dgr-lnxw97Viriodth-LX&S_TACT=105AGX59&S_CMP=grlnxw97 and http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=/liaat/liaatbpparavirt.htm), as sketched below.
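
For illustration, a minimal sketch of virtio disk and network definitions in the guest XML (the image path, target device, and bridge name are placeholders, adjust to your setup):

<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/guest.qcow2'/>
<target dev='vda' bus='virtio'/>
</disk>
<interface type='bridge'>
<source bridge='br0'/>
<model type='virtio'/>
</interface>

Note that the guest kernel/initrd must include the virtio drivers for the VM to boot from a virtio disk.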

./arun

Converting LVM virtual machine storage to image

To convert the LVM disk to qcow2 formatted disk image,

Use lvdisplay to get the Logical volume name

$ sudo lvdisplay

Use qemu-img to convert to the required image format

# qemu-img convert -O qcow2 /dev/mapper/lv_name <destination_file>.qcow2

eg:

# qemu-img convert -O qcow2 /dev/mapper/disk1 disk1.qcow2
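
To confirm the result, a quick check:

# qemu-img info disk1.qcow2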

This will be useful to replicate the virtual machines to other hardware.

./arun

IPv6 configuration for KVM guests

It is simple and straightforward to enable IPv6 on KVM guests.

Configure the host machine with an IPv6 address on the bridge interface:

cat ifcfg-br0

IPV6INIT=yes
IPV6ADDR=xxxx:xx::10
IPV6_DEFAULTGW=xxxx:xx::1
IPV6_AUTOCONF=no

Configure the interface on the virtual machines with an IPv6 address:

cat ifcfg-eth0

IPV6INIT=yes
IPV6ADDR=xxxx:xx::11
IPV6_DEFAULTGW=xxxx:xx::1
IPV6_AUTOCONF=no

Add the necessary firewall rules to ip6tables on the host machine:

-A FORWARD -m physdev --physdev-is-bridged -j ACCEPT
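
To verify from the guest, a quick sanity check (replace the address with your IPv6 gateway):

$ ip -6 addr show dev eth0
$ ping6 xxxx:xx::1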

./arun

NAT with port forwarding on Virtual Box

You can use the host-only-adapter networking, if you require the virtual machine to be accessible only from the host machine. In this case your virtual machine will not have access to anywhere outside the host. Read more about virtual box networking at http://www.virtualbox.org/manual/ch06.html

On the other hand, a NAT-enabled interface can communicate with clients outside the host, but the host cannot access the services on the virtual machine directly. We need to enable port forwarding on the NAT interface to achieve this.

On Linux:
If you need SSH access from the host machine to the virtual machine:

$ VBoxManage modifyvm "VM Name" --natpf1 "openssh,tcp,127.0.0.1,2222,,22"

Where --natpf1 is for adapter 1 and openssh is just a name; you can also specify the IP address of the virtual machine, like:

$ VBoxManage modifyvm "VM Name" --natpf1 "openssh,tcp,127.0.0.1,2222,10.0.2.20,22"

(assume the virtual machine ip is 10.0.2.20)

Now you can make an SSH connection from the host like: $ ssh localhost -p 2222

We can use the same port number on both sides for port numbers above 1024; say, for a service running on port 8080, we can forward it with:

VBoxManage modifyvm "VM Name" --natpf1 "proxy,tcp,127.0.0.1,8080,10.0.2.20,8080"

These rules will be added to the .VirtualBox/Machines/machine_name/machine_name.xml file like:
<Forwarding name="openssh" proto="1" hostip="127.0.0.1" hostport="2222" guestip="10.0.2.20" guestport="22"/>

You can forward connections to any port on the virtual machine like this.
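
To review or remove a forwarding rule later, a sketch using standard VBoxManage options (the rule name "openssh" matches the example above):

$ VBoxManage showvminfo "VM Name" | grep -i rule
$ VBoxManage modifyvm "VM Name" --natpf1 delete "openssh"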

Make sure that the VM is not running (and the VirtualBox GUI is closed) while you change this; otherwise the changes will not take effect.

On Windows:

VBoxManage setextradata "VM Name" "VBoxInternal/Devices/pcnet/0/LUN#0/Config/guestssh/Protocol" TCP
VBoxManage setextradata "VM Name" "VBoxInternal/Devices/pcnet/0/LUN#0/Config/guestssh/GuestPort" 22
VBoxManage setextradata "VM Name" "VBoxInternal/Devices/pcnet/0/LUN#0/Config/guestssh/HostPort" 2222

* Replace VM Name with your virtual instance name

./arun

Convert KVM images to Virtual Box (VDI)

It took a while to get the KVM image working with Sun VirtualBox.

The advantages of a VirtualBox image are that you can run it on any platform (Linux, Mac, or Windows), it works without a virtualization-enabled processor, and it will work on a 32-bit machine.
Here are the steps to create an image that works with VirtualBox:

From the KVM installed server

$ qemu-img convert kvm-os.img -O raw kvm-os-raw.img

Copy the image (kvm-os-raw.img) to the VirtualBox machine:

$ VBoxManage convertfromraw --format VDI kvm-os-raw.img vbox.vdi

Converting from raw image file=”kvm-os-raw.img” to file=”vbox.vdi”…
Creating dynamic image with size ….

This will create a VirtualBox-compatible image.
In case required, you can compact the image to its actual size:

$ VBoxManage modifyvdi /home/user/vbox.vdi compact

0%…10%…20%…30%…40%…50%…60%…70%
Here the path to the VDI image must be absolute.

Now you can create a new virtual machine from the VirtualBox console or command line, with the VDI image as storage (see the sketch below).
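
A command-line sketch for recent VirtualBox versions (the VM name "kvm-guest" and the controller name are placeholders):

$ VBoxManage createvm --name "kvm-guest" --register
$ VBoxManage storagectl "kvm-guest" --name "IDE" --add ide
$ VBoxManage storageattach "kvm-guest" --storagectl "IDE" --port 0 --device 0 --type hdd --medium /home/user/vbox.vdi
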
Boot the machine and hope for the best 🙂
But it wasn't easy for me even with this beautiful VDI image; the boot hung with a kernel panic, file system not found.

To fix this issue, we need to recreate the initrd image in the virtual machine.
Instructions for Red Hat:
– Boot the virtual machine in rescue mode with the Red Hat CD

> linux rescue

# chroot /mnt/sysimage

take a backup of existing initrd

# cp /boot/initrd-2.6.xxx.img initrd-2.6-old

create new initrd image

# mkinitrd -v /boot/initrd-new.img kernel-version

// eg: mkinitrd -v /boot/initrd-new.img 2.6.18-194.8.1.el5

Edit the grub configuration and replace the initrd image name with the new one:

# vi /boot/grub/menu.lst

Reboot the machine and see if it boots 🙂

Hope this will be helpful for someone; I spent hours getting it to work 🙂
./arun

Netboot KVM guest

To install the KVM guest operating system (eg: RHEL) from the network
– Create the bridge interface on the KVM host machine (http://arunnsblog.com/2010/04/09/virtualization-with-kvm-under-redhat-linux-migrate-vmware-virtual-images-to-kvm/)
– Make sure that the gateway is configured in the bridge interface (GATEWAY=).
– Make sure that you have the required rules added to the iptables:
-A FORWARD -m physdev --physdev-is-bridged -j ACCEPT
– Create the virtual machine with a supported network interface type (pcnet and rtl8139 used to work)
– Add the MAC address of the KVM guest to the DHCP server

Start the virtual machine and see if it can kickstart from the network.

You can troubleshoot with a tcpdump on the KVM host machine:
tcpdump -i br0 port bootps -vvv -s 1500

./arun

KVM image on LVM

Convert qcow2/raw images to LVM logical volume to use with KVM:

– Convert the qcow2 image to raw format (if it is in qcow2)
$ qemu-img convert image.qcow2 -O raw image.raw

– Create the physical volume for LVM
# pvcreate /dev/sdb
(replace the device with correspond to the system)

– Create the volume group
# vgcreate pool1 /dev/sdb
(replace pool1 with the name as required)

– Create Logical volume with same size as the image
# lvcreate -n justaname --size 50G pool1
(replace justaname and size as per the requirements)
Use lvresize in case you need to change the volume size.

– dd the raw image to lvm logical volume
# dd if=image.raw of=/dev/pool1/justaname bs=8M
(Change the block size according to the requirements.)

Edit the KVM XML configuration for the corresponding virtual machine to use the logical volume:
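
This can be done with virsh (replace <domain_name> with your VM's name):

# virsh edit <domain_name>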

<disk type='block' device='disk'>
<source dev='/dev/pool1/justaname'/>
<target dev='vda' bus='virtio'/>  <!-- target added for completeness; adjust the device/bus to your guest -->
</disk>

./arun

Virtualization with KVM under Redhat Linux, Migrate VMware virtual images to KVM

KVM (Kernel-based Virtual Machine, http://www.linux-kvm.org/) is one of the best choices for virtualization under Linux, especially with no extra licensing cost.

Install KVM
To install KVM on Red Hat Enterprise Linux:
– Install the machine with the 64-bit version of EL5
– Register the machine with Red Hat (rhn_register)
– Enable the virtualization entitlement for the system in RHN
– Install the KVM packages:
# yum install kvm
# yum install virt-manager libvirt libvirt-python python-virtinst

Migrating VMware virtual machines to KVM:
– Log in to the VMware server
– Make a single VMDK image with vmware-vdiskmanager
eg:
# vmware-vdiskmanager -r path_to_vmware_virtualmachine.vmdk -t 0 destination_file_vmware.vmdk
Creating disk ‘destination_file_vmware.vmdk’
Convert: 100% done.
Virtual disk conversion successful.

– Copy the image to the KVM server
– Convert the image to a KVM-supported format with qemu-img:
# qemu-img convert destination_file_vmware.vmdk -O qcow2 kvm_supported.img

Create a bridge interface to share the network card.
* This section assumes that you have two NICs in your server, that you need bonding along with bridging, and that you have static IPs for the virtual machines. In case you are using DHCP and a single network interface, create the bridge interface accordingly.

– Create bridge interface:
$ cat /etc/sysconfig/network-scripts/ifcfg-br0

DEVICE=br0
ONBOOT=yes
TYPE=Bridge
IPADDR=11.11.11.11
NETMASK=255.0.0.0
GATEWAY=1.1.1.1

– Configure the bond interface:
$ cat /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
BRIDGE=br0
ONBOOT=yes

– Configure eth0 and eth1
$ cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes

– Change bonding to active-backup; I have faced some issues with xor mode (might be something silly to fix)
# cat /etc/modprobe.conf

options bond0 miimon=100 mode=active-backup

– Restart the network interfaces and check the bridge status:
# brctl show (it will list bond0 as an enabled interface on the bridge)

Create KVM virtual machine:
– It can be done from the command line (see the sketch after these steps) or with virt-manager
– Open the virt-manager application
– Click Create New, and select the qemu hypervisor
– During disk selection, choose the converted VMware image path
– Done, just start it.
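
For the command-line route, a sketch using virt-install (the VM name, memory size, and image path are placeholders; exact options may vary with your virtinst version):

# virt-install --name vmware-guest --ram 1024 --import \
  --disk path=/var/lib/libvirt/images/kvm_supported.img \
  --network bridge=br0 --os-variant rhel5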

Register the virtual machine with Red Hat, save some license 😉

– Enable the network tools entitlement in RHN
– Install the package rhn-virtualization-host on the core (host) machine
# yum install rhn-virtualization-host
– Enable virtualization under the properties of the host in RHN
– Execute the following commands on the host machine
# rhn_check
# rhn-profile-sync
– Log in to the virtual machine and use rhn_register; now it will be registered as a virtual machine under the core license.

./arun

Enable Full virtualization in HP DL servers (Intel)

You need to enable hardware virtualization in the BIOS if you want to create fully virtualized instances.

Enter the BIOS (F9) –> Advanced Options –> Processor Options –> Enable Intel Virtualization Technology

Now you should be able to create fully virtualized virtual machines with Xen or similar virtualization packages, without OS modifications.

./arun