SAN and Tape backup with bacula

Install and configure bacula for SAN and Tape backup

There is already excellent documentation about Bacula installation and configuration on the Bacula website. This article describes one way of getting SAN and tape backup working together with a single Bacula Director installation. It assumes that you have already installed and mounted the SAN and configured the tape device.

This configuration aims at:

  • Incremental daily for 20 days
  • Differential weekly for 3 months
  • Monthly full for 6 months
  • Eject the tape to the mailslot after the backup and notify the admin

Customise it based on your requirements.

The configurations are tested with HP MSL 2024 Tape library and MSA SAN array.

Bacula server setup

The configuration is done on Red Hat Enterprise Linux; it should be similar on other Linux distros.

  • Create a user for backup
# useradd -d /home/backup backup
  • Install the Bacula server and create the database and database users (ref: for installation instructions).
  • Create the necessary directories:
# su - backup
$ mkdir -p /home/backup/bacula/var/lock/subsys
$ mkdir /home/backup/bacula/var/run/
  • Configure the director (bacula-dir.conf)

$ cat ~/bacula-dir.conf

# Define the director, common for SAN and Tape
Director { # define myself
 Name = {hostname}-dir # use your hostname
 DIRport = 9101 # where we listen for UA connections
 QueryFile = "/home/backup/bacula/script/query.sql"
 WorkingDirectory = "/home/backup/bacula/wdir"
 PidDirectory = "/home/backup/bacula/var/run"
 Maximum Concurrent Jobs = 3
 Password = "{console_password}" # Console password
 Messages = Daemon
}
# List of files to be backed up to SAN
FileSet {
 Name = "File Set"
 Include {
  Options {
   signature = MD5
  }
  File = /
 }
 Exclude {
  File = /proc
  File = /tmp
  File = /.journal
  File = /.fsck
 }
}
# List of files to be backed up to tape
FileSet {
 Name = "tape Set"
 Include {
  Options {
   signature = MD5
  }
  File = /
 }
 Exclude {
  File = /proc
  File = /tmp
  File = /.journal
  File = /.fsck
 }
}
# Schedule for SAN backup
Schedule {
 Name = "WeeklyCycle"
 Run = Full 1st sun at 01:00
 Run = Differential 2nd-5th sun at 01:00
 Run = Incremental mon-sat at 01:00
}
# Schedule for tape backup
Schedule {
 Name = "TapeWeeklyFull"
 Run = Level=Full 1st sun at 03:00
}
# Definition of file storage (SAN)
Storage {
 Name = File
 # Do not use "localhost" here
 Address = {FQDN} # N.B. use a fully qualified name here
 SDPort = 9103
 Password = "{sdpassword}"
 Device = FileStorage
 Media Type = File
}
# Define storage (Tape)
Storage {
 Name = msl2024
 Address = {director-address}
 SDPort = 9103
 Password = "{director-password}"
 Device = MSL2024
 Media Type = LTO-4
 Autochanger = yes
 Maximum Concurrent Jobs = 3
}
# Generic catalog service
Catalog {
 Name = MyCatalog
 dbname = "dbname"; dbuser = "dbuser"; dbpassword = "dbpass"
}
# Tape catalog
Job {
 Name = "TapeBackupCatalog"
 JobDefs = "{dir-host-name}-tape"
 Level = Full
 Schedule = "CatalogAfterTapeBackup"
 RunBeforeJob = "/home/backup/bacula/script/ MyCatalog"
 RunAfterJob = "/home/backup/bacula/script/delete_catalog_backup"
 Write Bootstrap = "/home/backup/bacula/wdir/%n.bsr"
 Priority = 20 # run after main backup
}
# Default pool definition
Pool {
 Name = Default
 Pool Type = Backup
 Recycle = yes # Bacula can automatically recycle Volumes
 AutoPrune = yes # Prune expired volumes
 Volume Retention = 365 days # one year
}
# General tape backup pool
Pool {
 Name = TapePool
 Pool Type = Backup
 Recycle = yes # Bacula can automatically recycle Volumes
 AutoPrune = yes # Prune expired volumes
 Volume Retention = 6 months
 Recycle Oldest Volume = yes
 Storage = msl2024
 Volume Use Duration = 4 days
}
## Do the following configurations for each client
# Job definition; define it for each Bacula client, replacing clientX_hostname and FileSet accordingly
JobDefs {
 Name = "{clientX_hostname}"
 Type = Backup
 Client = {clientX_hostname}-fd
 FileSet = "File Set"
 Schedule = "WeeklyCycle"
 Storage = File
 Messages = Standard
 Pool = File
 Full Backup Pool = Full-Pool-{clientX_hostname}
 Incremental Backup Pool = Inc-Pool-{clientX_hostname}
 Differential Backup Pool = Diff-Pool-{clientX_hostname}
 Priority = 10
 Write Bootstrap = "/home/backup/bacula/wdir/%c.bsr"
}
# Tape
JobDefs {
 Name = "{clientX_hostname}-tape"
 Type = Backup
 Client = {clientX_hostname}-tape-fd
 FileSet = "tape Set"
 Schedule = "TapeWeeklyFull"
 Storage = msl2024
 Messages = Standard
 Pool = TapePool
 Full Backup Pool = TapePool
 Priority = 10
 Write Bootstrap = "/home/backup/bacula/wdir/%c.bsr"
}
# Define Job, replace clientX_hostname
Job {
 Name = "{clientX_hostname}"
 JobDefs = "{clientX_hostname}"
}
# Tape
Job {
 Name = "{clientX_hostname}-tape"
 JobDefs = "{clientX_hostname}-tape"
}

# Define restore job
Job {
 Name = "RestoreFiles-{clientX_hostname}"
 Type = Restore
 FileSet = "File Set"
 Storage = File
 Pool = Default
 Messages = Standard
 Where = /home/backup/archive/bacula-restores
}
# Tape
Job {
 Name = "RestoreFiles-{clientX_hostname}-tape"
 Type = Restore
 FileSet = "tape Set"
 Storage = msl2024
 Pool = TapePool
 Messages = Standard
 Where = /home/backup/archive/bacula-restores
}

# Client (File Services) to backup
Client {
 Name = {clientX_hostname}-fd
 Address = {client_address}
 FDPort = 9102
 Catalog = MyCatalog
 Password = "{client_password}" # password for FileDaemon
 File Retention = 60 days
 Job Retention = 6 months
 AutoPrune = yes # Prune expired Jobs/Files
}
# Tape
Client {
 Name = {clientX_hostname}-tape-fd
 Address = {client_address}
 FDPort = 9202 # use a different port
 Catalog = MyCatalog
 Password = "{client_password}" # password for FileDaemon
 File Retention = 6 months
 Job Retention = 6 months
 AutoPrune = yes
}
# Pool for each client
Pool {
 Name = Full-Pool-{clientX_hostname}
 Pool Type = Backup
 Recycle = yes
 AutoPrune = yes
 Volume Retention = 6 months
 Maximum Volume Jobs = 1
 Label Format = Full-Pool-{clientX_hostname}-
 Maximum Volumes = 9
}
Pool {
 Name = Inc-Pool-{clientX_hostname}
 Pool Type = Backup
 Recycle = yes # automatically recycle Volumes
 AutoPrune = yes # Prune expired volumes
 Volume Retention = 20 days
 Maximum Volume Jobs = 6
 Label Format = Inc-Pool-{clientX_hostname}-
 Maximum Volumes = 7
}
Pool {
 Name = Diff-Pool-{clientX_hostname}
 Pool Type = Backup
 Recycle = yes
 AutoPrune = yes
 Volume Retention = 40 days
 Maximum Volume Jobs = 1
 Label Format = Diff-Pool-{clientX_hostname}-
 Maximum Volumes = 10
}
# Tape: no extra definition required.

  • Make sure you label the tapes and add them to the TapePool. If your tape library has a barcode reader, use:
$ bconsole
* label barcode
then select the TapePool
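Before labelling, it is worth confirming that the changer itself responds and seeing which slots hold tapes (device path as used elsewhere in this article):

```
# list drives, storage slots and barcodes known to the autochanger
mtx -f /dev/sg1 status
```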

If you have a mailslot enabled, you can configure Bacula to eject the tape to the mailslot after the backup finishes and then notify the admin.

$ cat /home/backup/bacula/script/delete_catalog_backup
#!/bin/sh
# Unload the tape to the mailslot for off-site storage
mtx -f /dev/sg1 unload 24 # replace 24 with your mailslot number
# Send mail
/home/backup/bacula/script/ | mail -s "Tape backup done"

Configure storage daemon

Storage { # definition of myself
 Name = {director_hostname}-sd
 SDPort = 9103 # Director's port
 WorkingDirectory = "/home/backup/bacula/wdir"
 Pid Directory = "/home/backup/bacula/var/run"
 Maximum Concurrent Jobs = 20
}
# List Directors who are permitted to contact Storage daemon
Director {
 Name = {director_hostname}-dir
 Password = "{director_password}"
Device {
 Name = FileStorage
 Media Type = File
 Archive Device = /media/san/bacula/ # SAN volume
 LabelMedia = yes # lets Bacula label unlabeled media
 Random Access = yes
 AutomaticMount = yes # when device opened, read it
 RemovableMedia = no
 AlwaysOpen = no
}
# Tape
Autochanger {
 Name = MSL2024
 Device = lto4drive
 Changer Command = "/home/backup/bacula/script/mtx-changer %c %o %S %a %d"
 Changer Device = /dev/sg1 # change it based on your setup
}
Device {
 Name = lto4drive
 Drive Index = 0
 Media Type = LTO-4
 Archive Device = /dev/nst0
 AutomaticMount = no
 AlwaysOpen = no
 RemovableMedia = yes
 RandomAccess = no
 AutoChanger = yes
}

Client configuration

  • Install the Bacula package on the client machines, but configure with --enable-client-only
  • Remove the director and storage daemon startup scripts
rm /etc/init.d/bacula-dir
rm /etc/init.d/bacula-sd
  • Create necessary directories
mkdir -p /home/backup/bacula/wdir /home/backup/bacula/var/run  /home/backup/bacula/var/lock/subsys/
  • Create the bacula file daemon configurations for tape and SAN separately

SAN (bacula-fd.conf)

FileDaemon { # this is me
 Name = {clientX_hostname}-fd
 FDport = 9102 # where we listen for the director
 WorkingDirectory = /home/backup/bacula/wdir
 Pid Directory = /home/backup/bacula/var/run
 Maximum Concurrent Jobs = 20
}

Tape  (bacula-fd-tape.conf)

FileDaemon { # this is me
 Name = {clientX_hostname}-tape-fd
 FDport = 9202 # different port than the SAN file daemon
 WorkingDirectory = /home/backup/bacula/wdir
 Pid Directory = /home/backup/bacula/var/run
 Maximum Concurrent Jobs = 20
}
  • Edit the bacula-fd startup script and add an extra line to start the tape file daemon:
daemon /home/backup/bacula/sbin/bacula-fd $2 ${FD_OPTIONS} -c /home/backup/bacula/etc/bacula-fd-tape.conf
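With the startup script edited, both daemons run side by side. Started by hand for a quick test it would look like this (binary and config paths as used throughout this article):

```
# start the SAN file daemon (listens on 9102) and the tape one (9202)
/home/backup/bacula/sbin/bacula-fd -c /home/backup/bacula/etc/bacula-fd.conf
/home/backup/bacula/sbin/bacula-fd -c /home/backup/bacula/etc/bacula-fd-tape.conf
```

Each daemon needs its own FDport; the pid files include the port number, so they can share the same Pid Directory.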


Host group based access restriction – Nagios

This is useful especially when different host groups belong to different entities and you need access separation.

The basic idea is to use the same login user name in the contact groups. I assume that you have Apache htaccess authentication or LDAP authentication in place.

You may create a new contact group or use an already existing one; just make sure your username and contact_name match.

- Create a contact group
define contactgroup {
 contactgroup_name customer1
 alias Customer1 Servers
 members customer1
}
- Create the contact
define contact {
 contact_name customer1 ; make sure this matches the username
 alias Customer1 Contact
 service_notification_period 24x7
 host_notifications_enabled 0
 host_notification_period 24x7
 service_notification_options w,u,c,r
 host_notification_options d,u,r
 service_notification_commands notify-by-email
 host_notification_commands host-notify-by-email
}
- Use this contact group in the host definition
define host {
 use generic-alerted-host
 host_name customer1-host
 contact_groups customer1 ; make sure this matches the contactgroup_name
 max_check_attempts 3
}

Just restart Nagios and try to log in with the new user account. You may grant this user more privileges via cgi.cfg if required.
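The extra privileges come from the authorized_for_* lists in cgi.cfg; for example (the user names are from this article, the directive names are stock Nagios):

```
# cgi.cfg -- comma-separated lists of usernames; only add customer1
# here if that customer should see global information as well
authorized_for_system_information=nagiosadmin
authorized_for_all_services=nagiosadmin
authorized_for_all_hosts=nagiosadmin
```

Users not listed here only see the hosts and services for which they are a contact, which is exactly what gives the per-customer separation.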


Detected bug in an extension! Hook FCKeditor_MediaWiki

Detected bug in an extension! Hook FCKeditor_MediaWiki::onCustomEditor failed to return a value; should return true to continue hook processing or false to abort.


#0 mediawiki/includes/Wiki.php(497): wfRunHooks('CustomEditor', Array)
 #1 mediawiki/includes/Wiki.php(63): MediaWiki->performAction(Object(OutputPage), Object(Article), 
Object(Title), Object(User), Object(WebRequest))
 #2 mediawiki/index.php(114): MediaWiki->initialize(Object(Title), Object(Article), Object(OutputPage), 
Object(User), Object(WebRequest))
 #3 {main}

Edit the following file to fix this issue:

 -- public function onCustomEditor(&$article, &$user) {
 ++ public function onCustomEditor($article, $user) {


svn: Can’t convert string from ‘UTF-8’ to native encoding:

"svn: Can't convert string from 'UTF-8' to native encoding:"

This usually happens when a file name contains special characters that the client's locale cannot represent.

Just set a proper locale on the client to fix this issue:

$ export LC_CTYPE=en_US.UTF-8
// make sure the locale is properly set.
$ locale


Fix categories and tags in wordpress custom post_type

By default WordPress does not look into custom post_types for categories and tags; even though the category names are visible, you get a NOT FOUND page when you click on a category.

A workaround found for this issue:

Edit : functions.php

add_filter('pre_get_posts', 'query_post_type');
function query_post_type($query) {
  if (is_category() || is_tag()) {
    $post_type = get_query_var('post_type');
    if (empty($post_type)) {
      // replace custom_post_type_name with your post_type, and keep
      // nav_menu_item to display the menu on category pages
      $post_type = array('post', 'custom_post_type_name', 'nav_menu_item');
    }
    $query->set('post_type', $post_type);
  }
  return $query;
}


Thanks to paranoid for pointing me to the fix. 😉



Replace broken hard drive in software RAID1

This scenario assumes that you have two hard disks in a RAID1 setup and one of them is broken (say sdb).

To check the status of RAID:

$ cat /proc/mdstat

Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda3[1]
730202368 blocks [2/1] [U_]
md1 : active raid1 sda2[1]
264960 blocks [2/1] [U_]
md0 : active (auto-read-only) raid1 sda1[1]
2102464 blocks [2/1] [U_]

You will see [_U] or [U_] when an array is degraded.
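That check is easy to script; a minimal sketch (the helper name and the file argument are mine, not part of the original article):

```shell
# report DEGRADED if any array line shows a missing member such as [U_] or [_U]
check_mdstat() {
    if grep -qE '\[U*_+U*\]' "$1"; then
        echo DEGRADED
    else
        echo OK
    fi
}
```

Run it against /proc/mdstat from cron and mail yourself the output.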

If required, mark the broken hard drive as failed and remove it from all md devices:

# mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# mdadm --manage /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2

# mdadm --manage /dev/md2 --fail /dev/sdb3 --remove /dev/sdb3

Shut down the machine and replace the hard drive.

Once the server has booted, you will see the new device (either sda or sdb, depending on which drive was broken):

# ls -l /dev/sd*

Now we need to replicate the partition scheme on the new drive:

# sfdisk -d /dev/sda | sfdisk /dev/sdb

// -d dumps the partition table of a device
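sfdisk handles classic MBR partition tables; if the disks use GPT, sgdisk from the gdisk package is the equivalent (a sketch, assuming sdb is the new disk):

```
# copy the partition table from sda to sdb -- note the destination comes FIRST
sgdisk -R /dev/sdb /dev/sda
# give the copy fresh GUIDs so they do not collide with the original disk
sgdisk -G /dev/sdb
```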

We can now add the partitions back to the RAID; you can verify the partitions with fdisk -l.

# mdadm --manage /dev/md0 --add /dev/sdb1

# mdadm --manage /dev/md1 --add /dev/sdb2

# mdadm --manage /dev/md2 --add /dev/sdb3

The arrays will start resyncing the data and will be ready once the rebuild completes.

You may verify with mdstat:

# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda3[0] sdb3[1]
7302023 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
2649 blocks [2/2] [UU]

md0 : active (auto-read-only) raid1 sda1[0] sdb1[1]
21024 blocks [2/2] [UU]


Enable IPv6 on Direct Admin

It is rather easy to get IPv6 working with DA if you have an IPv6 subnet allocated to your server.

Make sure that you have IPv6 enabled on your DA.

# grep ipv6 /usr/local/directadmin/conf/directadmin.conf

Add the IPv6 addresses to DirectAdmin through IP Management (this will add the IPv6 address to the interface).
Enter the IPv6 Address in IP field and keep the Netmask as

Add IPv6 addresses for your name servers:
go to DNS administration –> select your name server domain –> add AAAA records for your name servers. Make sure you have already added the IPv6 addresses to DA.

Check that your name server is resolving/reachable via IPv6.
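A quick way to verify from any IPv6-capable host (ns1.example.com is a placeholder for your name server):

```
# does the name server have an AAAA record, and does it answer over IPv6?
dig AAAA ns1.example.com +short
ping6 -c 3 ns1.example.com
```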

Now, to add an IPv6 address to hosted domains: select the domain –> Modify user –> select the IPv6 address in "Add Additional IP". If the IPv6 address is not visible, make sure it is added to DA and to the reseller account under which you are editing the domain.

Add the IPv6 address (AAAA record) to the corresponding domain's DNS configuration.

Here we go, ping6!



Extending LVM disk space

Add the new disk drive to the system; you may need to reboot the machine and configure the hardware RAID if required.

Add the new disk to Volume group

For example, if the disk is an HP with Smart Array:

# pvcreate /dev/cciss/c0d1
# vgextend <volume_group_name> /dev/cciss/c0d1

Extend the Logical volume:
# lvextend -L<+mention_the_size> /dev/<volume_group>/<logical_volume>
eg: # lvextend -L+25G /dev/localhost/var
       # lvextend -L+10G /dev/localhost/home

Resize the file system:
# resize2fs <file_system>
# resize2fs /dev/mapper/localhost-var
# resize2fs /dev/mapper/localhost-home

(Alternatively, lvextend -r extends the logical volume and resizes the file system in one step.)


Install and configure rsnapshot for central backup (without root privilege)

Download and install rsnapshot
Download the latest package from the rsnapshot website:

# wget
# rpm -Uvh rsnapshot-1.3.1-1.noarch.rpm

Configure public key authentication

– Enable public key authentication with the remote hosts using a normal (non-root) user

local# ssh-keygen -t rsa
local# scp

remote# useradd -c "Backup user" -d /data/home/backup/ backup
remote# su - backup

remote# vi .ssh/authorized_keys

remote# chmod 600 .ssh/authorized_keys

remote# cat >> authorized_keys ; rm

Restrict the key to the allowed command by adding a command="..." prefix to the key line in authorized_keys.


Create the /home/backup/ script with the following contents:

    echo "Rejected 1"
    echo "Rejected 2"
    echo "Rejected 3"

$ chmod 700
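The script name and most of its body are elided above; the usual pattern for such a script is to inspect SSH_ORIGINAL_COMMAND and only let the expected rsync server invocation through. A sketch, written as a function for illustration (in the real script the OK branch would exec the command rather than print):

```shell
# decide whether an incoming SSH command may run; mirrors the
# "Rejected 1/2/3" skeleton shown in the article
validate_cmd() {
    case "$1" in
        *[\;\&\|]*)        echo "Rejected 1" ;;  # shell metacharacters
        "rsync --server"*) echo "OK" ;;          # the only allowed command
        "")                echo "Rejected 2" ;;  # interactive login attempt
        *)                 echo "Rejected 3" ;;  # anything else
    esac
}
```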

Create the rsync wrapper script

$ cat > /usr/local/bin/

/usr/bin/sudo /usr/bin/rsync "$@";

# chmod 755 /usr/local/bin/

These steps basically force the ssh connection to execute rsync via sudo.

Grant the user permission to execute rsync as root (add via visudo):

backup    ALL=(root) NOPASSWD: /usr/bin/rsync

Configure Rsnapshot

master# cp /etc/rsnapshot.conf.default /etc/rsnapshot.conf

Configure the paths for cp, rsync, ssh, logger, du, etc.

set link_dest = 1

change rsync_long_args to:

rsync_long_args	--delete --numeric-ids --relative --delete-excluded

If you require a daily backup kept for a week:

interval daily 7

More details are in the HOWTO section of the rsnapshot website.

Configure the hosts and file systems to back up:

backup      backup@remotehost:/etc/     remotehost/
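Note that fields in rsnapshot.conf must be separated by tabs, not spaces; rsnapshot refuses to parse the file otherwise. A second backup line here is hypothetical, just to show the shape:

```
# /etc/rsnapshot.conf -- TAB-separated fields
backup	backup@remotehost:/etc/	remotehost/
backup	backup@remotehost:/var/www/	remotehost/
```

`rsnapshot configtest` will catch tab/space mistakes before the first run.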






Upgrading php to 5.2 or 5.3 in Redhat EL 5

Unfortunately, RHEL 5 does not ship a PHP 5.2 package, which is required by most modern applications, including the latest WordPress and Drupal.

My first thought was to compile PHP from source, but that is hard to keep up to date, so I decided to make life easier with the EPEL/IUS repositories.

Remove all existing php related packages:

# rpm -e php php-mysql php-cli php-pdo php-common

Download and install the EPEL/IUS RPMs

# wget

# wget

In case the links are not working, just browse the repositories and find the RPMs.

Install the RPMs

# rpm -Uvh *-release-*.rpm

Now you can install php 5.2 or 5.3 like:

# yum install php52 php52-mysql



svn over ssh tunnel

It is often necessary to commit/update to an svn repository that is only indirectly accessible through a gateway (the user can ssh to the gateway, and the gateway can ssh to the internal svn server).

Suppose you have a working copy on your local machine set up with the real svn URL (eg : svn+ssh://

– Make an ssh connection with local port forwarding to the gateway server:

# sudo ssh -L
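The forwarding command above is truncated in the original; with hypothetical hosts gateway.example.com (reachable) and svn.internal (behind it), it would look like the following. Forwarding local port 22 is what makes svn+ssh://localhost work, and binding a port below 1024 is why sudo is needed:

```
# forward local port 22 through the gateway to the internal svn server
# (stop a local sshd first, or it will already occupy the port)
sudo ssh -L 22:svn.internal:22 user@gateway.example.com
```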

Change the repository URL to localhost, since the localhost connection is forwarded to the remote svn server through the gateway:

# cd <local_svn_path>

# svn switch --relocate svn+ssh:// svn+ssh://localhost/trunk

Now you should be able to update, commit, etc to/from your repository.

You can switch it back to the original url when you have direct access to repository.



Racktable, Apache+LDAP authentication

Log in to RackTables as admin:

Add the following line under Configuration –> Permissions

allow {$tab_default}

* This is a read-only account; assign extra permissions if required

Configure Apache + LDAP

<Directory /var/www/racktables>
Options +Indexes +FollowSymLinks +MultiViews
DirectoryIndex index.php
AuthName "Rack Tables"
AuthType Basic
AuthBasicProvider ldap
AuthzLDAPAuthoritative on
AuthLDAPURL "ldaps://,dc=com?uid?sub?(objectClass=<depends_on_ldap>)"

# Bind if required
AuthLDAPBindDN "uid=userid,ou=people,dc=company,dc=com"
AuthLDAPBindPassword "xxxxxx"
AuthLDAPGroupAttribute uniqueMember
AuthLDAPGroupAttributeIsDN on
require ldap-group cn=group_name,dc=company,dc=com
require ldap-attribute cn=group-name-allowed
</Directory>

* Most of the LDAP config depends on your setup

Configure Rack Tables:

Edit inc/secret.php and set:

$user_auth_src = 'httpd';
$require_local_account = FALSE;

NOTE: to get logout working properly, make sure the Apache AuthName matches the one configured for RackTables authentication.

Reference:



Rsnapshot Lchown

# rsnapshot du localhost
require Lchown
Lchown module not found

Install the Lchown module:

# wget

# tar xvzf Lchown-1.00.tar.gz

# cd Lchown-1.00

# perl Makefile.PL
Checking if your kit is complete…
Looks good

# make install

# rsnapshot du localhost
require Lchown
Lchown module loaded successfully

You can also try installing the module from Perl CPAN

# perl -MCPAN -e 'install qw(Lchown)'


Install Cpanel on FreeBSD 8.2

– Install FreeBSD with proper network and file system configuration (Ref:

– Install dependency packages:

# pkg_add -r wget

# pkg_add -r perl

# pkg_add -r rsync (required later for ports sync)

# pkg_add -r gmake

To Fix:

creating glibconfig.h
config.status: executing default commands
gmake: not found
child exited with value 127
Died at /usr/local/cpanel/bin/rrdtoolinstall line 109.

# pkg_add -r png

To fix: configure: error: requested PNG backend could not be enabled

– Create the following symlinks

# ln -s /usr/local/bin/wget /usr/bin/wget

# ln -s /lib/ /lib/ // To fix: Shared object "" not found

# ln -s /lib/ /lib/ // To fix: Shared object "" not found

# ln -s /lib/ /lib/ // To fix: "" not found

– Install Cpanel

# cd /home

# wget -N

# sh latest

– Once the installation is successful, activate the license (make sure the IP is licensed):

#  /usr/local/cpanel/cpkeyclt

– Start Cpanel

– Touch the following file

# touch /etc/rc.d/init.d/functions // To fix: Could not find functions file, your system may be broken

# /etc/init.d/cpanel start

Now you should be able to access Cpanel at https://< yourip >:2087/

Try to upgrade:

Exim: # /scripts/eximup --force (this will fetch the FreeBSD ports as well)
Cpanel: # /scripts/upcp




Drupal 7 issue with SQL Mode TRADITIONAL

PDOException: SQLSTATE[42000]: Syntax error or access violation: 1231 Variable ‘sql_mode’ can’t be set to the value of ‘TRADITIONAL’ in lock_may_be_available() (line 165 of /includes/

This was the case when I installed Drupal 7 with Cpanel/Fantastico; the Drupal site was displaying the above error.

This issue is discussed in the Drupal issue queue; try the patch mentioned there.

For me, it worked with the following change: I simply removed the TRADITIONAL mode. I am not sure it is the correct way to fix it (you can verify the SQL modes in the MySQL documentation), but there are no more errors on the Drupal site and I am able to log in.

(includes/database/mysql/ Line: 65

New file
<  $this->exec("SET sql_mode='ANSI,ONLY_FULL_GROUP_BY'");

Old file
>  $this->exec("SET sql_mode='ANSI,TRADITIONAL'");

Alternatively, setting the SQL connection mode with SET SESSION sql_mode = "ANSI,TRADITIONAL"; is an option instead of the above change.




Install and configure RSA web agent with Redhat EL5 and Apache

Login to RSA interface:

– Create the apache server as agent host with type web agent
– Generate the config file (a zip containing sdconf.rec) from the RSA interface and download it to your local machine

Login to the web server

– Download the RSA web agent installation files from RSA website.

# mkdir -p /var/ace

– Copy the downloaded config file to /var/ace and extract it (sdconf.rec)
# chmod 755 sdconf.rec

– Create the sdopts.rec file with the IP address of the machine if you have multiple IP addresses assigned to the server or if the RSA web agent is a virtual machine. Otherwise authentication might break with errors like:
"100: Access denied. The RSA ACE/Server rejected the Passcode. Please try again." or "attempted to authenticate using authenticator "SecurID_Native". The user belongs to security domain "SystemDomain""

# echo "CLIENT_IP=" > sdopts.rec
# chown -R webuser:webuser /var/ace
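The CLIENT_IP value above is truncated in the original; with a hypothetical primary address of 192.0.2.10 the file would contain a single line:

```
# pin the agent to the address registered on the RSA server
echo "CLIENT_IP=192.0.2.10" > /var/ace/sdopts.rec
```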

– Now install the RSA web agent

# tar xvf WebAgent_70_Apache_RHEL5_32_379_08201024.tar
# cd CD/
# chmod u+x install
# ./install

– Enter the location of sdconf.rec

– Configure the apache virtual host

It was found that the web agent breaks if Apache starts multiple server processes, so it is better to limit them:

<IfModule prefork.c>
StartServers 1
MinSpareServers 1
MaxSpareServers 1
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 4000
</IfModule>

– Now start apache and you will be able to access the RSA web interface.

Once authenticated, the RSA server will create a node secret for the agent host, which is copied automatically to the web server.

This web interface is mainly useful for token users to reset or enable the token assigned to them.


Performance issues with KVM – Redhat

The most common performance issue with KVM is disk I/O.

– By default, Red Hat KVM guests are created with the x86_64 architecture; if you installed a 32-bit operating system, change this to i686:

<type arch='i686' machine='rhel5.6.0'>hvm</type>
<boot dev='hd'/>

– Make sure the hypervisor used in the configuration is correct, either qemu or kvm:

<domain type='qemu'>

or

<domain type='kvm'>

– Use virtio drivers if the guest is paravirtualized (
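For disk I/O specifically, switching the guest's disk bus to virtio in the libvirt XML usually helps the most; a fragment (the image path is a placeholder):

```
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The guest must have the virtio block driver available before you switch the bus, or it will fail to find its root disk.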


yum update, IndexError: tuple index out of range

If you happen to get this error while updating a server with yum update:

File "/usr/lib/python2.4/site-packages/M2Crypto/", line 82, in https_open
h.request(req.get_method(), req.get_selector(),, headers)
File "/usr/lib/python2.4/", line 813, in request
if v[0] != 32 or not self.auto_open:
IndexError: tuple index out of range

Disable the location-aware access from RHN.