[[:home|Home]] | [[:projects|Projects]] | [[:hardwarerefresh_dec11|SLUUG Hardware Refresh]]
===== OpenNebula on Xen on openSUSE 12.1 =====
This is a working log and still an early draft. I'm still finalizing a lot of the config. If the formatting is confusing, please bear with me; I will be updating this as I make progress.
=== Create a VM on Xenpri1 ===
Create the VM on xenpri with virt-install:
virt-install -n nattylight -r 1024 --vcpus 1 --description "OpenNebula front-end, openSUSE 12.1 based" -l "http://download.opensuse.org/distribution/openSUSE-current/repo/oss/" --os-type linux --disk /etc/xen/images/nattylight.xvda,size=8 -w bridge=br0 --virt-type xen --arch x86_64 -x "brokenmodules=edd xencons=tty"
Alternatively, use YaST or vm-install and answer the prompts:
name:nattyice
min-mem: 768
max-mem: 1024
vcpu: 2
description: OpenNebula front-end, openSuSE 12.1 based
Graphics: none
Virtual Disks: y
Disk type: 2 (hard disk)
Resides: /var/lib/xen/images/nattyice/xvda
size: 8g
Network adapters: add two
Bootable media: 1 (network URL)
Install URL: http://download.opensuse.org/distribution/openSUSE-current/repo/oss/
Enter through the rest
=== YaST2 ===
Defaults unless otherwise stated
Timezone: Chicago
Desktop: Other - Minimal Server Selection (Text Mode)
Suggested Partitioning: uncheck Propose separate home, check Use btrfs
Users: open-nebula-admin
Password: ON
*Do not use for sys admin, no auto login
Root PW: NI
Enable Boot from MBR
Enable SSH
Software: install patterns-openSUSE-webyast-{ui,ws}
Upon reboot - change hostname to nattyice
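One way to make the hostname change stick from the shell (instead of going through YaST), since openSUSE 12.1 reads the hostname from /etc/HOSTNAME at boot:
# echo nattyice > /etc/HOSTNAME
# hostname nattyice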
==== OpenNebula 3.1.0 ====
*Get the openSUSE x86_64 tarball
wget http://dev.opennebula.org/attachments/download/499/opennebula-3.1.0.tar.gz
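If you go the tarball route, unpack it now (note the repo-based install below is the path I actually followed):
$ tar -xzf opennebula-3.1.0.tar.gz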
*Install the packman repo
$ zypper ar ftp://anorien.csc.warwick.ac.uk/packman/suse/openSUSE_12.1/ Packman
Feel free to use a different mirror - http://packman.links2linux.de/mirrors
**Install via openSUSE repos**
* Add OpenNebula:Testing repo
zypper ar http://download.opensuse.org/repositories/Virtualization:/Cloud:/OpenNebula:/Testing/openSUSE_12.1/ OpenNebula-Testing
* Add the main OpenNebula repo (may need to do this after install...)
zypper ar http://download.opensuse.org/repositories/Virtualization:/Cloud:/OpenNebula/openSUSE_12.1/ OpenNebula
* Install dependencies
zypper install gcc gcc-c++ make patch libopenssl-devel libcurl-devel scons pkg-config sqlite3-devel libxslt-devel libxmlrpc_server_abyss++3 libxmlrpc_client++3 libexpat-devel libxmlrpc_server++3 libxml2-devel ruby ruby-doc-ri ruby-doc-html ruby-devel rubygems xmlrpc-c libmysqlclient-devel rubygem-rake libxmlrpc++3 mysql-community-server
==== MySQL install on openSUSE 12.1 ====
*Install packages
zypper install mysql-community-server
*Start service
service mysql start
*Secure SQL
mysql_secure_installation
*Set the MySQL root password to match the system root password; select Y (the defaults) for the rest
*Create users with privileges
$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor. [...]
mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'oneadmin' IDENTIFIED BY 'oneadmin';
Query OK, 0 rows affected (0.00 sec)
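A quick sanity check that the grant took (the opennebula database itself doesn't need to exist yet; oned should create it on first start):
$ mysql -u oneadmin -poneadmin -e 'SHOW GRANTS;'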
==== Back to OpenNebula ====
* The openSUSE package apparently omits OpenNebula's install_gems script - you'll need to manually install the following gems:
gem install libxml-ruby json nokogiri rake xmlparser sqlite3 mysql curb thin uuid sinatra sequel amazon-ec2
* Install OpenNebula and Web UI (sunstone)
zypper install opennebula opennebula-sunstone
Install dependencies as needed - ignore the rubygem-* packages; we installed those gems manually above
* Take note of the oneadmin user ID and cloud group ID
id oneadmin
In this case it's UID=1000, GID=1000
* Log on to the Xen host and add a user with the same UID/GID and a home of /var/lib/one
useradd --uid 1000 -g cloud -d /var/lib/one oneadmin
If there is a conflict, use usermod --uid <newuid> to change the user that is in conflict
* Set the user's password to the same as on the front-end
Users' home dirs will eventually be moved to the NFS share; for now, copy the keys to oneadmin's home dir on the Xen box (a copy sketch follows the SSH client config below)
* Create ssh keypair for oneadmin, use defaults - no passphrase
ssh-keygen
* Add key to auth users
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
* Change permissions on the keys to prevent issues later
$ chmod 700 ~/.ssh/
$ chmod 600 ~/.ssh/id_rsa.pub
$ chmod 600 ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/authorized_keys
* Create user ssh client config to prevent prompting for known hosts
vim ~/.ssh/config
Host *
    StrictHostKeyChecking no
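As mentioned above, to get the same oneadmin SSH setup onto the Xen box (hostname xenhost assumed here), something like:
$ scp -r ~/.ssh oneadmin@xenhost:~/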
* Set OpenNebula to use the same credentials as oneadmin user
$ mkdir ~/.one
$ echo "oneadmin:password" > ~/.one/one_auth
$ chmod 600 ~/.one/one_auth
* Start OpenNebula - **be sure to start it as the oneadmin user**
one start
* Verify the oned is running and working
onevm list
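With nothing defined yet, a healthy oned just returns an empty listing, something along these lines (exact columns vary by release):
$ onevm list
    ID USER     NAME STAT CPU     MEM        HOSTNAME        TIME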
==== Basic config ====
== Enable MySQL ==
* Configure /etc/one/oned.conf (hash the default sqlite line and unhash the MySQL example)
DB = [ backend = "mysql",
server = "localhost",
port = 0,
user = "oneadmin",
passwd = "oneadmin",
db_name = "opennebula" ]
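oned only reads oned.conf at startup, so after switching backends restart it (again as the oneadmin user):
$ one stop
$ one start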
== Xen Host ==
* Edit /etc/sudoers to allow oneadmin to control vms
visudo
oneadmin ALL=(ALL) NOPASSWD: /usr/sbin/xm *
oneadmin ALL=(ALL) NOPASSWD: /usr/sbin/xentop *
oneadmin ALL=(ALL) NOPASSWD: /usr/bin/ovs-vsctl *
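A quick way to confirm the sudo rules work, run on the Xen host:
$ su - oneadmin
$ sudo /usr/sbin/xm list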
* Edit /etc/one/oned.conf to use Xen Driver (hash the kvm config example and unhash the xen example)
IM_MAD = [
name = "im_xen",
executable = "one_im_ssh",
arguments = "xen" ]
VM_MAD = [
name = "vmm_xen",
executable = "one_vmm_exec",
arguments = "xen",
default = "vmm_exec/vmm_exec_xen.conf",
type = "xen" ]
* Install Ruby on the Xen host (>= 1.8.7)
zypper install ruby
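Then confirm the installed version meets the floor:
$ ruby -v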
== Storage ==
* Map /etc/xen/images to btrfs raid 10 local array
* Map /var/lib/one/ to NFS server 600G S/W raid 10
* Map /home/* to NFS server 600G S/W raid 10
*On the NFS server, be sure to apply these fixes, since root access to the share will be needed
-Set permissive permissions on the exported directories and image files (directories: a+rx, files: a+rw)
-Disable root squashing by adding no_root_squash to the NFS export options\\
/etc/exports:\\
/srv/storage/@/home 192.168.118.0/27(crossmnt,rw,no_root_squash,sync,no_subtree_check) 192.168.115.0/27(crossmnt,rw,no_root_squash,sync,no_subtree_check)
/srv/storage/@/backups 192.168.115.0/27(crossmnt,rw,no_root_squash,sync,no_subtree_check) 192.168.118.0/27(crossmnt,rw,no_root_squash,sync,no_subtree_check)
/var/lib/one 192.168.118.0/27(crossmnt,rw,no_root_squash,sync,no_subtree_check) 192.168.115.0/27(crossmnt,rw,no_root_squash,sync,no_subtree_check)
/etc/fstab (client side)
192.168.115.5:/var/lib/one /var/lib/one nfs rw,hard,intr
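With that fstab entry in place the share can be tested right away:
# mount /var/lib/one
# df -h /var/lib/one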
* Recursively bind-mount the images datastore. For some reason I can't get the crossmnt option to work over NFS
# mount --rbind /srv/storage/\@/images/ /var/lib/one/datastores/
/etc/fstab *** This doesn't work either, still experimenting... Probably need to mount the device directly since it's a subvolume ***\\
/srv/storage/\@/images/ /var/lib/one/datastores/ none defaults,rbind
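For the record, mounting the subvolume directly would look something like this (the device name is hypothetical, and @/images is assumed to be a subvolume of the btrfs filesystem on it):
# mount -o subvol=@/images /dev/sdb1 /var/lib/one/datastores
or the /etc/fstab equivalent:
/dev/sdb1 /var/lib/one/datastores btrfs subvol=@/images 0 0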
* Edit /etc/one/oned.conf (default config)
TM_MAD = [
name = "tm_shared",
executable = "one_tm",
arguments = "tm_shared/tm_shared.conf" ]
== Networking - Open vSwitch ==
* Edit /etc/one/oned.conf (unhash the Open vSwitch VM_HOOK)
VM_HOOK = [
name = "openvswitch-vlan",
on = "RUNNING",
command = "vnm/openvswitch-vlan",
arguments = "$TEMPLATE",
remote = "yes" ]
=== Install on host(s) ===
* Install Open vSwitch from source, since the openSUSE build project for it is failing
* Install dependencies
zypper install python-qt4 python-Twisted python-zope.interface tcpdump kernel-xen-base kernel-xen-devel autoconf automake dotconf gcc glibc libopenssl-devel make
Packages for Debian: python-qt4 python-twisted python-zopeinterface tcpdump linux-headers autoconf automake libdotconf-dev libdotconf1.0 gcc glibc-source libc-dev libssl-dev
* Switch to the build user
* Get tarball
wget http://openvswitch.org/releases/openvswitch-1.2.2.tar.gz
* Untar, change to dir
tar -xzf openvswitch-1.2.2.tar.gz
cd openvswitch-1.2.2
* Configure with our options (at this time Open vSwitch will not compile its kernel module against kernel 3.1)
./configure --prefix=/usr --localstatedir=/var
* Make, then make install as root
$ make
# make install
* Initialize the configuration database
# ovsdb-tool create /usr/etc/openvswitch/conf.db vswitchd/vswitch.ovsschema
* Initialize configuration database server
ovsdb-server --remote=punix:/var/run/openvswitch/db.sock \
--remote=db:Open_vSwitch,manager_options \
--private-key=db:SSL,private_key \
--certificate=db:SSL,certificate \
--bootstrap-ca-cert=db:SSL,ca_cert \
--pidfile --detach
* Initialize the database
ovs-vsctl --no-wait init
* Start the Open vSwitch daemon
ovs-vswitchd --pidfile --detach
Note: there are a few errors that we should be able to safely disregard, since we did not compile the kernel module
***** Need kernel modules for the brctl layer
**Open vSwitch doesn't like kernel 3.1 - we'll likely need to use ebtables, which will conflict with xenpri0**
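For reference, once the kernel-module situation is sorted out, wiring up the VM bridge should look something like this (eth0 as the physical uplink is an assumption; br0 matches the bridge name used in the virt-install command at the top):
# ovs-vsctl add-br br0
# ovs-vsctl add-port br0 eth0
# ovs-vsctl show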
=== Add a new host ===
It is critical that your OpenNebula front-end can resolve the names of your hypervisors. In this scenario we will use the /etc/hosts file to do this for us (127.0.0.1 works here because the front-end runs on the hypervisor itself; point the name at the host's LAN address if they are separate machines).\\
# vim /etc/hosts
127.0.0.1 xenhost
:wq
Switch over to the oneadmin user and add your hosts.\\
To add a host you use the onehost create command. This command needs to know the information manager (im) driver, the virtual machine monitor (vmm) driver, and the network driver that the host is using. In our case we will be adding a Xen hypervisor that is set up to use Open vSwitch for its network driver.
# su - oneadmin
~$ onehost create xenhost -i im_xen -v vmm_xen -n ovswitch
Once complete, your host should appear in onehost list; you can also use onehost show to display its details:\\
oneadmin@xenhost:~$ onehost list
ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM FMEM AMEM STAT
1 xenhost - 0 0 0 100 0K 0K 0K on
oneadmin@xenhost:~$ onehost show xenhost
HOST 1 INFORMATION
ID : 1
NAME : xenhost
CLUSTER : -
STATE : MONITORED
IM_MAD : im_xen
VM_MAD : vmm_xen
VN_MAD : ovswitch
LAST MONITORING TIME : 1339032919
HOST SHARES
MAX MEM : 16678912
USED MEM (REAL) : 13844480
USED MEM (ALLOCATED) : 0
MAX CPU : 800
USED CPU (REAL) : 107
USED CPU (ALLOCATED) : 0
MAX DISK : 0
USED DISK (REAL) : 0
USED DISK (ALLOCATED) : 0
RUNNING VMS : 0
MONITORING INFORMATION
ARCH="x86_64"
CPUSPEED="3100"
FREECPU="693"
FREEMEMORY="2834432"
HOSTNAME="xenhost"
HYPERVISOR="xen"
MODELNAME="AMD FX(tm)-8120 Eight-Core Processor "
NETRX="0"
NETTX="0"
TOTALCPU="800"
TOTALMEMORY="16678912"
USEDCPU="107"
USEDMEMORY="13844480"
==== Work In Progress - 21DEC11 ====