SLUUG Hardware Refresh
This is a working log, still in early draft. I'm still finalizing a lot of the config; if the formatting is confusing, please disregard it. I will be updating this as I make progress.
Create the VM on xenpri:

virt-install -n nattylight -r 1024 --vcpus 1 \
    --description "OpenNebula front-end, openSUSE 12.1 based" \
    -l "http://download.opensuse.org/distribution/openSUSE-current/repo/oss/" \
    --os-type linux --disk /etc/xen/images/nattylight.xvda,size=8 \
    -w bridge=br0 --virt-type xen --arch x86_64 \
    -x "brokenmodules=edd xencons=tty"
Or, using YaST / vm-install:

name: nattyice
min-mem: 768
max-mem: 1024
vcpu: 2
description: OpenNebula front-end, openSUSE 12.1 based
Graphics: none
Virtual Disks: y
Disk type: 2 (hard disk)
Resides: /var/lib/xen/images/nattyice/xvda
size: 8g
Network adapters: add two
Bootable media: 1 (network URL)
Install URL: http://download.opensuse.org/distribution/openSUSE-current/repo/oss/

Press Enter through the remaining prompts.
Defaults unless otherwise stated:

Timezone: Chicago
Desktop: Other - Minimal Server Selection (Text Mode)
Suggested Partitioning: uncheck "Propose separate home", check "Use btrfs"
Users: open-nebula-admin, Password: ON
Root PW: NI
Enable Boot from MBR
Enable SSH
Software: install Patterns-openSUSE-webyast-{ui,ws}
Upon reboot, change the hostname to nattyice.
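A quick sketch of making that change persistent: on openSUSE 12.1 the hostname is stored in /etc/HOSTNAME. This sketch writes to a temp stand-in file so it is safe to run anywhere; on the real VM, set HOSTFILE=/etc/HOSTNAME as root and then run hostname nattyice to apply it immediately.

```shell
# openSUSE 12.1 reads the persistent hostname from /etc/HOSTNAME.
# HOSTFILE defaults to a temp file here; point it at /etc/HOSTNAME (as root)
# to make the change for real.
HOSTFILE=${HOSTFILE:-$(mktemp)}
echo "nattyice" > "$HOSTFILE"
cat "$HOSTFILE"
```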
wget http://dev.opennebula.org/attachments/download/499/opennebula-3.1.0.tar.gz
$ zypper ar ftp://anorien.csc.warwick.ac.uk/packman/suse/openSUSE_12.1/ Packman
Feel free to use a different mirror - http://packman.links2linux.de/mirrors
zypper ar http://download.opensuse.org/repositories/Virtualization:/Cloud:/OpenNebula:/Testing/openSUSE_12.1/ OpenNebula-Testing
For the stable repository instead:

zypper ar http://download.opensuse.org/repositories/Virtualization:/Cloud:/OpenNebula/openSUSE_12.1/ OpenNebula
Install dependencies
zypper install gcc gcc-c++ make patch libopenssl-devel libcurl-devel scons pkg-config sqlite3-devel libxslt-devel libxmlrpc_server_abyss++3 libxmlrpc_client++3 libexpat-devel libxmlrpc_server++3 libxml2-devel ruby ruby-doc-ri ruby-doc-html ruby-devel rubygems xmlrpc-c libmysqlclient-devel rubygem-rake libxmlrpc++3
Install via openSuSE repos
zypper ar http://download.opensuse.org/repositories/Virtualization:/Cloud:/OpenNebula:/Testing/openSUSE_12.1/ OpenNebula-Testing
For the stable repository instead:

zypper ar http://download.opensuse.org/repositories/Virtualization:/Cloud:/OpenNebula/openSUSE_12.1/ OpenNebula
zypper install mysql-community-server
service mysql start
mysql_secure_installation
$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor. [...]

mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'oneadmin' IDENTIFIED BY 'oneadmin';
Query OK, 0 rows affected (0.00 sec)
gem install libxml-ruby json nokogiri rake xmlparser sqlite3 mysql curb thin uuid sinatra sequel amazon-ec2
zypper install opennebula opennebula-sunstone
Install dependencies as needed, but ignore the rubygem-* packages; we will install these later.
id oneadmin
In this case it's UID=1000, GID=1000.
useradd --uid 1000 -g cloud -d /var/lib/one oneadmin
If there is a conflict, use usermod --uid UID to change the user that is in conflict.
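Before running useradd, it can help to confirm the target UID is actually free; a minimal sketch, assuming getent is available and using the UID 1000 seen in the id output above:

```shell
# Succeeds when the UID is unused: getent exits non-zero when no passwd entry matches.
uid_free() {
    ! getent passwd "$1" > /dev/null
}

if uid_free 1000; then
    echo "UID 1000 is free; safe to useradd"
else
    echo "UID 1000 is taken by $(getent passwd 1000 | cut -d: -f1); usermod that user first"
fi
```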
Users' home dirs will eventually be moved to the NFS share; for now, copy the keys to the home dir on the Xen box.
ssh-keygen
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 700 ~/.ssh/
$ chmod 600 ~/.ssh/id_rsa.pub
$ chmod 600 ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/authorized_keys
vim ~/.ssh/config

Host *
    StrictHostKeyChecking no
$ mkdir ~/.one
$ echo "oneadmin:password" > ~/.one/one_auth
$ chmod 600 ~/.one/one_auth
one start
onevm list
DB = [ backend = "mysql",
       server  = "localhost",
       port    = 0,
       user    = "oneadmin",
       passwd  = "oneadmin",
       db_name = "opennebula" ]
visudo:

oneadmin ALL=(ALL) NOPASSWD: /usr/sbin/xm *
oneadmin ALL=(ALL) NOPASSWD: /usr/sbin/xentop *
oneadmin ALL=(ALL) NOPASSWD: /usr/sbin/ovs-vsctl *
IM_MAD = [
    name       = "im_xen",
    executable = "one_im_ssh",
    arguments  = "xen" ]

VM_MAD = [
    name       = "vmm_xen",
    executable = "one_vmm_exec",
    arguments  = "xen",
    default    = "vmm_exec/vmm_exec_xen.conf",
    type       = "xen" ]
zypper install ruby
- Set the permissions on the directory and the exported images to be very permissive (directories: a+rx, files: a+rw).
- Disable root squashing by adding no_root_squash to the NFS export options.
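The permission requirement above can be sketched as a pair of find/chmod passes. This version runs against a throw-away stand-in tree so it is harmless to execute; on the real server, point DATASTORE at the exported image directory instead:

```shell
# Stand-in for the exported image tree (use the real exported path on the NFS server).
DATASTORE=$(mktemp -d)
mkdir -p "$DATASTORE/img"
touch "$DATASTORE/img/disk.0"
chmod 700 "$DATASTORE/img"          # simulate restrictive starting modes
chmod 600 "$DATASTORE/img/disk.0"

# Directories a+rx, files a+rw, per the requirement above.
find "$DATASTORE" -type d -exec chmod a+rx {} +
find "$DATASTORE" -type f -exec chmod a+rw {} +

stat -c '%a %n' "$DATASTORE/img" "$DATASTORE/img/disk.0"
```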
/etc/exports:
/srv/storage/@/home      192.168.118.0/27(crossmnt,rw,no_root_squash,sync,no_subtree_check) 192.168.115.0/27(crossmnt,rw,no_root_squash,sync,no_subtree_check)
/srv/storage/@/backups   192.168.115.0/27(crossmnt,rw,no_root_squash,sync,no_subtree_check) 192.168.118.0/27(crossmnt,rw,no_root_squash,sync,no_subtree_check)
/var/lib/one             192.168.118.0/27(crossmnt,rw,no_root_squash,sync,no_subtree_check) 192.168.115.0/27(crossmnt,rw,no_root_squash,sync,no_subtree_check)
/etc/fstab (client side)
192.168.115.5:/var/lib/one  /var/lib/one  nfs  rw,hard,intr  0 0
# mount --rbind /srv/storage/\@/images/ /var/lib/one/datastores/
/etc/fstab
* This doesn't work either; still experimenting… Probably need to mount the device directly, since it's a subvolume. *
/srv/storage/\@/images/ /var/lib/one/datastores/ none defaults,rbind
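One avenue still to try for the subvolume problem: mount the underlying block device directly with a subvol= option instead of rbinding the path. A hypothetical fstab line follows; the device name /dev/sdb1 is an assumption here, so substitute whatever device actually backs /srv/storage:

```
# /dev/sdb1 is a placeholder for the btrfs device backing /srv/storage
/dev/sdb1  /var/lib/one/datastores  btrfs  defaults,subvol=@/images  0 0
```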
TM_MAD = [
    name       = "tm_shared",
    executable = "one_tm",
    arguments  = "tm_shared/tm_shared.conf" ]
VM_HOOK = [
    name      = "openvswitch-vlan",
    on        = "RUNNING",
    command   = "vnm/openvswitch-vlan",
    arguments = "$TEMPLATE",
    remote    = "yes" ]
zypper install python-qt4 python-Twisted python-zope.interface tcpdump kernel-xen-base kernel-xen-devel autoconf automake dotconf gcc glibc libopenssl-devel make
Packages for Debian: python-qt4 python-twisted python-zopeinterface tcpdump linux-headers autoconf automake libdotconf-dev libdotconf1.0 gcc glibc-source libc-dev libssl-dev

Switch to the build user.
wget http://openvswitch.org/releases/openvswitch-1.2.2.tar.gz
tar -xzf openvswitch-1.2.2.tar.gz
cd openvswitch...
./configure --prefix=/usr --localstatedir=/var
$ make
# make install
# ovsdb-tool create /usr/etc/openvswitch/conf.db vswitchd/vswitch.ovsschema
ovsdb-server --remote=punix:/var/run/openvswitch/db.sock \
    --remote=db:Open_vSwitch,manager_options \
    --private-key=db:SSL,private_key \
    --certificate=db:SSL,certificate \
    --bootstrap-ca-cert=db:SSL,ca_cert \
    --pidfile --detach
ovs-vsctl --no-wait init
ovs-vswitchd --pidfile --detach
Note: there are a few errors here that we should be able to safely disregard, since we did not compile the kernel module.
* Need kernel modules for the brctl layer.
Open vSwitch doesn't like kernel 3.1; we'll likely need to use ebtables, which will conflict with xenpri0.
It is critical that your OpenNebula front end can resolve the names of your hypervisors. In this scenario we will use the /etc/hosts file to do this for us.
# vim /etc/hosts
127.0.0.1 xenhost
:wq
Switch over to the oneadmin user and add your hosts.
To add a host, use the onehost create command. This command needs to know the information manager (im) driver, the virtual machine monitor (vmm) driver, and the network driver that the host is using. In our case we will be adding a Xen hypervisor that is set up to use Open vSwitch as its network driver.
# su - oneadmin
~$ onehost create xenhost -i im_xen -v vmm_xen -n ovswitch
Once complete, you should see your host in the onehost list output; you can also use onehost show <hostid> to show the details of your host:
oneadmin@xenhost:~$ onehost list
  ID NAME       CLUSTER  RVM  TCPU  FCPU  ACPU  TMEM  FMEM  AMEM  STAT
   1 xenhost    -          0     0     0   100    0K    0K    0K  on

oneadmin@xenhost:~$ onehost show xenhost
HOST 1 INFORMATION
ID                    : 1
NAME                  : xenhost
CLUSTER               : -
STATE                 : MONITORED
IM_MAD                : im_xen
VM_MAD                : vmm_xen
VN_MAD                : ovswitch
LAST MONITORING TIME  : 1339032919

HOST SHARES
MAX MEM               : 16678912
USED MEM (REAL)       : 13844480
USED MEM (ALLOCATED)  : 0
MAX CPU               : 800
USED CPU (REAL)       : 107
USED CPU (ALLOCATED)  : 0
MAX DISK              : 0
USED DISK (REAL)      : 0
USED DISK (ALLOCATED) : 0
RUNNING VMS           : 0

MONITORING INFORMATION
ARCH="x86_64"
CPUSPEED="3100"
FREECPU="693"
FREEMEMORY="2834432"
HOSTNAME="xenhost"
HYPERVISOR="xen"
MODELNAME="AMD FX(tm)-8120 Eight-Core Processor"
NETRX="0"
NETTX="0"
TOTALCPU="800"
TOTALMEMORY="16678912"
USEDCPU="107"
USEDMEMORY="13844480"