Enable USB Debugging CyanogenMod 11.0

To enable USB debugging,
go to Settings –> About Phone and tap on Build number several times until you see a message that developer mode has been enabled.
Once the phone is in developer mode, go to Settings –> Developer options and enable USB debugging along with any other developer options you need.
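
With USB debugging enabled, you can verify the connection from a computer that has the Android platform tools installed; on the first run the phone shows an RSA key fingerprint prompt that you need to accept:

$ adb devices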

CM developer options

A cPanel bug (version 11.40) with ClamAV

Getting the following error message?

===========

-------- Original Message --------
Subject: Cron /usr/local/cpanel/3rdparty/bin/freshclam --quiet --no-warnings
From: (Cron Daemon)
To: root@hostname
Date: 12/12/2013 04:38
> ERROR: Can't create temporary directory

/usr/local/cpanel/3rdparty/share/clamav/clamav-xxxxx.tmp

===========

This is a known bug in cPanel 11.40.

Although the directory ‘/usr/local/cpanel/3rdparty/share/clamav’
appears to have sufficient permissions and ownership configured, freshclam is
not able to create the required files/folders inside it.

A temporary workaround for this issue is to change the ownership of
the directory as shown below:

==========

chown clamav:clamav /usr/local/cpanel/3rdparty/share/clamav

==========
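
After changing the ownership, you can confirm the fix by checking the directory and re-running the same command the cron job uses; it should now complete without the temporary-directory error:

==========

# ls -ld /usr/local/cpanel/3rdparty/share/clamav
# /usr/local/cpanel/3rdparty/bin/freshclam --quiet --no-warnings

==========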

Virtuozzo – Basics

Virtuozzo is a software application for enterprise server virtualization that allows an administrator to create virtual environments on a host computer at the operating system (OS) layer. Instead of having one physical machine run multiple operating systems simultaneously, as in the virtual machine model used by VMware, Virtuozzo approaches virtualization by running a single OS kernel as its core and exporting that core functionality to various partitions on the host.

Each of the partitions effectively becomes a stand-alone entity called a virtual private server (VPS).

Installation on a CentOS box:
Before proceeding with the installation of Virtuozzo, make sure you have the /vz partition, or
create it if you are installing on a fresh server.

/vz contains all container data and the Parallels Virtuozzo Containers templates.
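
If the server does not have a /vz partition yet, here is a minimal sketch of creating one, assuming a spare disk with a single partition that shows up as /dev/sdb1 (the device name and filesystem are placeholders; adjust them to your layout):

# mkfs.ext4 /dev/sdb1
# mkdir -p /vz
# mount /dev/sdb1 /vz
# echo "/dev/sdb1 /vz ext4 defaults 0 0" >> /etc/fstab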

INSTALLATION
Download the vzinstall-linux-x86_64.bin utility from the official site.
Make the script executable: # chmod a+x vzinstall-linux-x86_64.bin
Run the script: # ./vzinstall-linux-x86_64.bin

You will get the following wizard:
You can either download and install right away, or download only for a later installation on this or any other computer.
The Configure options allow you to configure the various parameters that the Virtuozzo
containers use during execution. If you select the Download only option, then after the download is over, go to the download directory (root/virtuzzo/Download) and copy the contents of this directory to the system where you are planning to install Virtuozzo, then execute the following script:

# ./virtuozzo-4.7.0-<build_version>-x86_64.sfx

If you select the Download and install option, you can do it in one of three ways:
Default: Select this radio button to download and install the Parallels Virtuozzo Containers program files and one OS template, CentOS 5 (you will need this OS template to create Containers on its basis).

Full: Select this radio button to download all available OS templates to the server and install them there.

Custom: Select this radio button to customize the set of OS templates to download to and install on the server. In this case, once you click the Next button, you will see the Select Templates window where you can choose the necessary OS templates for downloading.

In the next step of the wizard, click Download to start downloading Parallels Virtuozzo Containers and the selected templates to the server.

In the next step you will be asked for the license key.

Install a valid Parallels license by entering the license key number in the field provided and clicking Next. If you plan to activate Parallels Virtuozzo Containers with an activation code, make sure that your server is connected to the Internet.

Finally, the installation program displays the Congratulations window.

Leave the Install PVA Agent and Install PVA Management Node check boxes selected to set up the Parallels Virtual Automation application and its components on the server once you restart it. With Parallels Virtual Automation, you can connect to the server and manage Containers using your favorite browser. If you select both check boxes, the installer does the following after restarting the server:

1. Downloads the installation packages for Parallels Virtual Automation from the Parallels website.

2. Installs the PVA Agent component on the server. PVA Agent ensures the interaction between your server, the Management Node (see below), and Parallels Virtual Automation. Without this component installed, you will not be able to connect to your server using Parallels Virtual Automation.

3. Creates a special Container on the server and installs the PVA Management Node
component inside it. PVA Management Node (also called Master Server) ensures the
communication between the server running Parallels Virtuozzo Containers (known as Slave
Server) and the Parallels Virtual Automation application. The Master Server keeps a
database with the information about all registered Slave Servers.

If you have already set up a Master Server, you can skip this step (clear the Install PVA Management Node check box).

After this step you will be asked for the IP address, hostname, and DNS settings of the container which will act as the PVA Management Node.

To log in to Parallels Virtual Automation, launch a Web browser compatible with PVA.

The list of currently supported Web browsers is given below:

• Internet Explorer 6.0 and above
• Firefox 2.x and above
• Safari 3.x and above

On the Master Server or any other computer, open your favorite Web browser and log in to Parallels Virtual Automation by typing the Master Server IP address or hostname and TCP port 4648 in the address bar.

http://ipaddressofpvm:4648
Log in using the username and password of the container which acts as the PVA Management Node.

Manually setting up PVA and the Management Node
Create the container: # vzctl create CTID --ostemplate centos-6-x86_64 --hostname "hostname"
Set the IP address and nameserver for the created container, which will act as the MN, as in the example below.
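
For example, assuming CTID 101 and placeholder network values:

# vzctl create 101 --ostemplate centos-6-x86_64 --hostname pva-mn.example.com
# vzctl set 101 --ipadd 10.0.0.101 --nameserver 8.8.8.8 --save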

# vzctl start CTID
# vzpkg install CTID -p perl-DBI

Download PVA Management Node installer

# wget http://download.pa.parallels.com/pva/pv ... loy.x86_64
# chmod a+x pva-setup-deploy.x86_64
# ./pva-setup-deploy.x86_64 -d /vz/root/CTID/root/ --extract
# vzctl enter CTID
# cd /root
# ./pva-setup --install

Create NAS/SAN storage with Openfiler and use it with VMware ESXi as shared storage

– Download the Openfiler installation ISO from the official site.

I have downloaded the Installation ISO image (x86/64).

Basically we need to create Openfiler as a virtual machine with, say, a 20 GB thin-provisioned disk, and attach another disk/datastore to the virtual machine to configure it as a SAN.

– Install the downloaded ISO image (x86/64) as a virtual machine; nothing fancy here, just do a normal installation.

– Once rebooted, you get a web interface to log in on port 446: https://<ip>:446/

– Log in with username openfiler and password password.

Network access configuration:

Set up the network access configuration: enter the networks/hosts that are allowed to access the storage.

System –> Network Access Configuration

Network Access Config

Create a new physical volume

Volumes –> Block devices

Create Physical Volume

Click on Edit devices for the hard disk where we are going to create the new physical volume; this is the extra hard drive we added to the virtual machine at the beginning (/dev/sdb).

Create Physical Volume

Select Physical volume as your partition type (assuming that you are not using RAID), set the mode to Primary, and click Create.

Create Physical Volume

Create a new Volume Group

Let us create a volume group for the physical volume.

Volumes –> Volume groups; enter a group name, select the physical drive, and click Add volume.

Create volume group

Create the Volume

Volumes –> Add Volume

Enter the volume name, description, and required space, and select “block” as the file system type.

Create Volume

Now enable the iSCSI Target service and add an iSCSI target.

Services –> iSCSI Target: Enable, Start

Start iSCSI target
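
The service can also be controlled from the Openfiler shell; a sketch, assuming the init script is named iscsi-target as on stock Openfiler installs:

# service iscsi-target start
# chkconfig iscsi-target on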

Volumes –> iSCSI Targets –> Target Configuration –> Add new iSCSI Target

Add new iSCSI target

Set up the LUN mapping and allow access to the iSCSI target.

Setup LUN Mapping

Allow access to iSCSI target

Now Openfiler is ready to use.

In VMware ESXi

Create a new VMkernel network adapter to use for the iSCSI connection.

VMKernel Adapter
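
If you prefer the ESXi shell over the vSphere client, the VMkernel interface can also be created with esxcli; a sketch, assuming a port group named iSCSI already exists on a vSwitch and using placeholder addresses:

# esxcli network ip interface add --interface-name vmk1 --portgroup-name iSCSI
# esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 192.168.1.20 --netmask 255.255.255.0 --type static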

Click on Storage Adapters –> Add new.

Add storage adapter

Click on Properties, add the VMkernel adapter, and then discover the iSCSI target.

Add VMKernel to iSCSI adapter

Dynamic discovery
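
The same steps can be done from the ESXi shell; a sketch, assuming the software iSCSI adapter came up as vmhba33 and the Openfiler target is at 192.168.1.10 (both placeholders):

# esxcli iscsi software set --enabled=true
# esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 192.168.1.10:3260
# esxcli storage core adapter rescan --adapter vmhba33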

Now your ESXi host will show the Openfiler volume as a datastore 🙂

Openfiler datastore

More information about Openfiler:

http://www.openfiler.com/products