openSUSE Build Service/Installation
Maemo OBS Cluster Installation Notes
Introduction
The Maemo OBS cluster is divided into three instances:
- Frontend – containing the web client, the OBS API and the MySQL server
- Backend/Repository Server – where the OBS server, dispatcher, scheduler and repository server are installed
- Workers – hosts with the obs-worker package installed
These installation notes cover OBS installation and setup for version 1.7. You can always refer to the latest documentation, also available online: http://gitorious.org/opensuse/build-service/blobs/raw/master/dist/README.SETUP
Networking Overview
The Front End (FE) machine provides two major services:
- the webui on build.obs, and
- the API on api.obs.
It is also a useful holding point for the download service on download.obs.
The Back End (BE) machine provides the backend services, including the schedulers, source server and repo server. The repo server (used by the workers to get required RPMs) also doubles as the download server via a reverse proxy on the FE.
10.1.1.1   host.obs.maemo.org
10.1.1.10  fe.obs.maemo.org  build.obs.maemo.org  api.obs.maemo.org  download.obs.maemo.org
10.1.1.11  be.obs.maemo.org  src.obs.maemo.org  repo.obs.maemo.org
10.1.1.51  w1.obs.maemo.org
10.1.1.52  w2.obs.maemo.org
10.1.1.53  w3.obs.maemo.org
10.1.1.54  w4.obs.maemo.org
10.1.1.55  w5.obs.maemo.org
10.1.1.56  w6.obs.maemo.org
10.1.1.57  w7.obs.maemo.org
10.1.1.58  w8.obs.maemo.org
10.1.1.59  w9.obs.maemo.org
10.1.1.60  w10.obs.maemo.org
10.1.1.61  w11.obs.maemo.org
10.1.1.62  w12.obs.maemo.org
Creating Xen VMs
Based on http://en.opensuse.org/Build_Service/KIWI/Cookbook
zypper ar http://download.opensuse.org/repositories/Virtualization:/Appliances/openSUSE_11.2/ Virtualization:Appliances
zypper refresh
zypper in kiwi kiwi-templates kiwi-desc-xenboot squashfs
Create some Xen volumes
lvcreate -L 10G VG_data -n fe_root
lvcreate -L 2G VG_data -n fe_swap
mkswap /dev/VG_data/fe_swap
lvcreate -L 10G VG_data -n be_root
lvcreate -L 2G VG_data -n be_swap
mkswap /dev/VG_data/be_swap
lvcreate -L 10G VG_data -n w1_root
lvcreate -L 2G VG_data -n w1_swap
mkswap /dev/VG_data/w1_swap
Prepare an openSUSE minimal image:
ROOTFS=/data/11.2min/image-root
mkdir /data/11.2min
kiwi --prepare suse-11.2-JeOS --root $ROOTFS --add-profile xenFlavour --add-package less --add-package iputils
Update the config & modules:
ROOTFS=/data/11.2min/image-root
cp -a /lib/modules/2.6.31.12-0.2-xen $ROOTFS/lib/modules/
echo default 10.1.1.1 > $ROOTFS/etc/sysconfig/network/routes
echo NETCONFIG_DNS_POLICY=\"\" >> $ROOTFS/etc/sysconfig/network/config
echo nameserver 8.8.8.8 > $ROOTFS/etc/resolv.conf
cat << EOF > $ROOTFS/etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='static'
BROADCAST=''
STARTMODE='onboot'
EOF
echo /dev/xvda1 swap swap defaults 0 0 >> $ROOTFS/etc/fstab
Copy to each of the VM root disks
mkdir -p /mnt/lvm   # mount point used below

mkfs -text3 /dev/VG_data/fe_root
mount /dev/VG_data/fe_root /mnt/lvm
rsync -HAXa /data/11.2min/image-root/ /mnt/lvm/
echo fe.obs.maemo.org > /mnt/lvm/etc/HOSTNAME
echo "IPADDR='10.1.1.10/24'" >> /mnt/lvm/etc/sysconfig/network/ifcfg-eth0
umount /mnt/lvm

mkfs -text3 /dev/VG_data/be_root
mount /dev/VG_data/be_root /mnt/lvm
rsync -HAXa /data/11.2min/image-root/ /mnt/lvm/
echo be.obs.maemo.org > /mnt/lvm/etc/HOSTNAME
echo "IPADDR='10.1.1.11/24'" >> /mnt/lvm/etc/sysconfig/network/ifcfg-eth0
umount /mnt/lvm

mkfs -text3 /dev/VG_data/w1_root
mount /dev/VG_data/w1_root /mnt/lvm
rsync -HAXa /data/11.2min/image-root/ /mnt/lvm/
echo w1.obs.maemo.org > /mnt/lvm/etc/HOSTNAME
echo "IPADDR='10.1.1.51/24'" >> /mnt/lvm/etc/sysconfig/network/ifcfg-eth0
umount /mnt/lvm
Create a Xen config for each VM, e.g. /etc/xen/fe.cfg:
name='fe'
disk=['phy:/dev/VG_data/fe_root,xvda2,w', 'phy:/dev/VG_data/fe_swap,xvda1,w']
vif=['mac=00:16:3E:40:B5:FE']
memory='1024'
root='/dev/xvda2 ro'
kernel='/boot/vmlinuz-2.6.31.12-0.2-xen'
ramdisk='/boot/initrd-2.6.31.12-0.2-xen'
extra='clocksource=jiffies console=hvc0 xencons=tty'
on_poweroff='destroy'
on_reboot='restart'
on_crash='restart'
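The be and worker configs follow the same pattern. A sketch for /etc/xen/w1.cfg, assuming the volumes created above; the MAC is a placeholder and just needs to be unique within the Xen 00:16:3E range:

name='w1'
disk=['phy:/dev/VG_data/w1_root,xvda2,w', 'phy:/dev/VG_data/w1_swap,xvda1,w']
vif=['mac=00:16:3E:40:B5:51']   # placeholder MAC, keep it unique per guest
memory='1024'
root='/dev/xvda2 ro'
kernel='/boot/vmlinuz-2.6.31.12-0.2-xen'
ramdisk='/boot/initrd-2.6.31.12-0.2-xen'
extra='clocksource=jiffies console=hvc0 xencons=tty'
on_poweroff='destroy'
on_reboot='restart'
on_crash='restart'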
And start them:
xm create /etc/xen/fe.cfg
xm create /etc/xen/be.cfg
xm create /etc/xen/w1.cfg
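You can confirm the domains are up with:

xm list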
Interim config
zypper in wget less iputils terminfo emacs
Installing the Backend
On this host we also need to set up the openSUSE Tools repository:
cd /etc/zypp/repos.d/
wget http://download.opensuse.org/repositories/openSUSE:/Tools/openSUSE_11.2/openSUSE:Tools.repo
zypper ref   # Accept the trust key
Install:
zypper in obs-server obs-signer obs-utils createrepo dpkg nfs-client
# obs-server pulls the other packages in as dependencies; they are listed here
# just so you can see which packages the Backend installation needs.
# createrepo & dpkg are only "recommends"
# The NFS client is needed as we use an NFS share for host/BE interchange
Configure Scheduler architectures
vi /etc/sysconfig/obs-server

OBS_SCHEDULER_ARCHITECTURES="i586 armv5el armv7el"
/usr/lib/obs/server/BSConfig.pm needs to point to the correct server names: the source server, from which workers download sources, and the repository server, from which RPM repositories are served to users.
vi /usr/lib/obs/server/BSConfig.pm

# add:
$hostname = "be.obs.maemo.org";
our $srcserver = "http://src.obs.maemo.org:5352";
our $reposerver = "http://repo.obs.maemo.org:5252";
our $repodownload = "http://$hostname/repositories";
Configure services as daemons
chkconfig --add obsrepserver obssrcserver obsscheduler obsdispatcher obspublisher obswarden obssigner

# Check them
chkconfig -l obsrepserver obssrcserver obsscheduler obsdispatcher obspublisher
Start Services
rcobsrepserver start
rcobssrcserver start
rcobsscheduler start
rcobsdispatcher start
rcobspublisher start
Version 1.7 introduces some new services. You can start them as well:
- obswarden
Checks whether build hosts are dying and cleans up hanging builds.
- obssigner
Used to sign packages via the obs-sign daemon. You need to configure it in BSConfig.pm before you can use it.
- obsservice
The source service daemon. OBS 1.7 ships with just a download service so far. This feature is still considered experimental, but it can already be extended with your own services.
Install Lighttpd
lighttpd also needs to be available on the backend server. It is required to provide directory listings of the repositories hosted on this server when an HTTP(S) request to the Maemo repo is made through the web UI.
zypper in lighttpd
Create a new file under /etc/lighttpd/vhosts.d/ (obs.conf works fine) and add:
vi /etc/lighttpd/vhosts.d/obs.conf

$HTTP["host"] =~ "repo.obs.maemo.org" {
  server.name = "repo.obs.maemo.org"
  server.document-root = "/srv/obs/repos/"
  dir-listing.activate = "enable"
}
To enable vhosts, remember to uncomment the following in the 'custom includes' section of /etc/lighttpd/lighttpd.conf:
vi /etc/lighttpd/lighttpd.conf

##
## custom includes like vhosts.
##
#include "conf.d/config.conf"
# following line uncommented as per
# /usr/share/doc/packages/obs-api/README.SETUP
include_shell "cat vhosts.d/*.conf"
Start lighttpd
# first add it as a daemon
chkconfig --add lighttpd
rclighttpd start
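A quick sanity check of the directory listing (this assumes repo.obs.maemo.org resolves and something has already been published under /srv/obs/repos/):

curl -s http://repo.obs.maemo.org/ | head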
Installing the FrontEnd (WebUI and API)
Start with a minimal SUSE install and then add the Tools repository where OBS 1.7 is available.
cd /etc/zypp/repos.d/
wget http://download.opensuse.org/repositories/openSUSE:/Tools/openSUSE_11.2/openSUSE:Tools.repo
zypper ref   # Accept the trust key
Install obs-api (it will pull in the lighttpd web server as a dependency).
zypper in obs-api memcached
Set up MySQL
The MySQL server needs to be installed and configured to start as a daemon:
chkconfig --add mysql
rcmysql start
Run the secure installation setup if this is the first time starting MySQL:
/usr/bin/mysql_secure_installation
The frontend instance holds two applications, the API and the webui. Each one needs its own database:
mysql -u root -p
mysql> create database api_production;
mysql> create database webui_production;
Add an obs user to handle these databases:
GRANT all privileges ON api_production.* TO 'obs'@'%', 'obs'@'localhost' IDENTIFIED BY '************';
GRANT all privileges ON webui_production.* TO 'obs'@'%', 'obs'@'localhost' IDENTIFIED BY '************';
FLUSH PRIVILEGES;
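You can verify the grants by connecting as the new user (use the password you substituted for the asterisks):

mysql -u obs -p -e "show databases;"
# api_production and webui_production should be listed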
Configure your MySQL user and password in the "production:" section of the API config:
vi /srv/www/obs/api/config/database.yml

# change the production section
production:
  adapter: mysql
  database: api_production
  username: obs
  password: ************
Do the same for the webui. It is configured by default to use SQLite, but since we're configuring the cluster for a production environment, bind it to MySQL as well:
vi /srv/www/obs/webui/config/database.yml

# change the production section
production:
  adapter: mysql
  database: webui_production
  username: obs
  password: ************
Populate the databases:
cd /srv/www/obs/api/
RAILS_ENV="production" rake db:migrate
cd /srv/www/obs/webui/
RAILS_ENV="production" rake db:migrate
You can check that the migration was successful by verifying the "migrated" message at the end of each statement.
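If you want to double-check from MySQL directly, Rails records applied migrations in the schema_migrations table; a quick query:

mysql -u obs -p api_production -e "SELECT version FROM schema_migrations ORDER BY version DESC LIMIT 1;"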
Set up and configure lighttpd for the API and webui
You need to set the correct hostnames that the webui, the API and the download proxy will respond to:
vi /etc/lighttpd/vhosts.d/obs.conf

$HTTP["host"] =~ "build" {
  rails_app   = "webui"
  rails_root  = "/srv/www/obs/webui"
  rails_procs = 3
  # production/development are typical values here
  rails_mode  = "production"
  log_root    = "/srv/www/obs/webui/log"
  include "vhosts.d/rails.inc"
}

$HTTP["host"] =~ "api" {
  rails_app   = "api"
  rails_root  = "/srv/www/obs/api"
  rails_procs = 3
  # production/development are typical values here
  rails_mode  = "production"
  log_root    = "/srv/www/obs/api/log"
  include "vhosts.d/rails.inc"
}

$HTTP["host"] =~ "download" {
  # This should point to an rsync populated download repo
  # server.name = "download.obs.maemo.org"
  # server.document-root = "/srv/obs/repos/"
  proxy.server = ( "" => ( ( "host" => "10.1.1.11", "port" => 80 ) ) )
}

To enable these vhosts, make sure to uncomment the following in the 'custom includes' section at the bottom of /etc/lighttpd/lighttpd.conf:

vi /etc/lighttpd/lighttpd.conf

##
## custom includes like vhosts.
##
#include "conf.d/config.conf"
# following line uncommented as per
# /usr/share/doc/packages/obs-api/README.SETUP
include_shell "cat vhosts.d/*.conf"
Also, the modules "mod_magnet", "mod_rewrite" and FastCGI need to be enabled by uncommenting the corresponding lines in /etc/lighttpd/modules.conf:
vi /etc/lighttpd/modules.conf

server.modules = (
  "mod_access",
#  "mod_alias",
#  "mod_auth",
#  "mod_evasive",
#  "mod_redirect",
  "mod_rewrite",
#  "mod_setenv",
#  "mod_usertrack",
)

##
## mod_magnet
##
include "conf.d/magnet.conf"

##
## FastCGI (mod_fastcgi)
##
include "conf.d/fastcgi.conf"
You also need to configure /srv/www/obs/webui/config/environments/production.rb to point to the correct server names:
vi /srv/www/obs/webui/config/environments/production.rb

FRONTEND_HOST = "api.obs.maemo.org"
FRONTEND_PORT = 80
EXTERNAL_FRONTEND_HOST = "api.obs.maemo.org"
BUGZILLA_HOST = "http://bugs.maemo.org/"
DOWNLOAD_URL = "http://download.obs.maemo.org"
Do the same for /srv/www/obs/api/config/environments/production.rb. Since your backend is not on the same machine as the API (frontend), change the following:
vi /srv/www/obs/api/config/environments/production.rb

SOURCE_HOST = "src.obs.maemo.org"
SOURCE_PORT = 5352
Make sure TCP port 5352 is open on the firewall. Ensure lighttpd and the OBS UI helper services start:
chkconfig --add memcached
chkconfig --add lighttpd
chkconfig --add obsapidelayed
chkconfig --add obswebuidelayed

rcmemcached start
rclighttpd start
rcobsapidelayed start
rcobswebuidelayed start
The lighttpd user and group need to own the api and webui directories (including their log and tmp subdirectories):
chown -R lighttpd.lighttpd /srv/www/obs/{api,webui}
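With everything started, a minimal smoke test from a host that resolves the names above; the exact status lines depend on your setup, but both vhosts should answer:

curl -sI http://build.obs.maemo.org/ | head -n1
curl -sI http://api.obs.maemo.org/ | head -n1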
Installing the Workers
The other 14 hosts in the cluster are reserved for use as workers, where package builds take place.
The same openSUSE Tools repository addition must be done for each worker.
cd /etc/zypp/repos.d/
wget http://download.opensuse.org/repositories/openSUSE:/Tools/openSUSE_11.2/openSUSE:Tools.repo
zypper ref   # Accept the trust key
Install worker:
zypper in obs-worker qemu-svn mount-static bash-static
(mount-static and bash-static are needed on the worker for rpm cross-compile to work)
Edit the file /etc/sysconfig/obs-worker to point to the correct source and repository servers.
vi /etc/sysconfig/obs-worker

OBS_SRC_SERVER="src.obs.maemo.org:5352"
OBS_REPO_SERVERS="repo.obs.maemo.org:5252"
OBS_VM_TYPE="none"
Each worker host has 16 CPUs, so 16 worker instances need to be started; see the setting below.
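A sketch of how to do that in the same sysconfig file, assuming the OBS_WORKER_INSTANCES variable shipped with the obs-worker package:

vi /etc/sysconfig/obs-worker

# assumption: OBS_WORKER_INSTANCES controls how many worker processes
# the init script spawns on this host
OBS_WORKER_INSTANCES="16"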
Start the worker service:
chkconfig --add obsworker
rcobsworker start
Tuning
The lighttpd config was reduced from 5 to 3 api/webui child processes (rails_procs = 3 in the vhost config above).
Revise VG setup
pvcreate /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part1
pvcreate /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part2
pvcreate /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part3
pvcreate /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part4
Now create the OBS VG for the host Xen worker:
vgcreate OBS /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part1
vgextend OBS /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part2
lvcreate -L 20G -n cache OBS
mkfs -text4 /dev/OBS/cache
for i in 1 2 3 4 5 6 7 8 9 10
do
  lvcreate -L 4G -n worker_root$i OBS
  mkfs -text4 /dev/OBS/worker_root$i
  lvcreate -L 1G -n worker_swap$i OBS
  mkswap -f /dev/OBS/worker_swap$i
done
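Verify the volumes were created:

lvs OBS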
Note that in 1.7.3 obsstoragesetup creates /OBS.worker/root1/root, but obsworker looks for /OBS.worker/root_1/root (root1 vs root_1).
Also ensure the host has obs-worker (1.7.3) and qemu-svn installed.
In order to run ARM binaries in the Xen chroot we need to ensure the initrd loads binfmt_misc. In /etc/sysconfig/kernel, set:
DOMU_INITRD_MODULES="xennet xenblk binfmt_misc"
then rebuild the initrd:
mkinitrd -k vmlinuz-2.6.31.12-0.2-xen -i initrd-2.6.31.12-0.2-xen-binfmt -B
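To confirm binfmt_misc made it into the new initrd (a sketch, assuming the usual gzip-compressed cpio initrd that mkinitrd produces on 11.2):

zcat /boot/initrd-2.6.31.12-0.2-xen-binfmt | cpio -it | grep binfmt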
On the backend machine, edit /usr/lib/build/xen.conf to use the correct initrd.
Remove Power/PPC from monitor page
We don't target PPC, so best remove it from the build monitor page:
On the FE, in webui/app/views/monitor/_plots.rhtml:
Use HTML comments to disable the Power line:
<!--<%= render :partial => 'graphs_arch', :locals => { :title => "Power", :prefix => 'ppc_' } %>-->
On the FE, in webui/config/environment.rb:
Change the MONITOR_IMAGEMAP section to:
MONITOR_IMAGEMAP = {
  'pc_waiting'  => [ ["i586", 'waiting_i586'], ["x86_64", 'waiting_x86_64'] ],
  'pc_blocked'  => [ ["i586", 'blocked_i586'], ["x86_64", 'blocked_x86_64'] ],
  'pc_workers'  => [ ["idle", 'idle_x86_64'], ['building', 'building_x86_64'] ],
#  'ppc_waiting' => [ ["ppc", 'waiting_ppc'], ["ppc64", 'waiting_ppc64'] ],
#  'ppc_blocked' => [ ["ppc", 'blocked_ppc'], ["ppc64", 'blocked_ppc64'] ],
#  'ppc_workers' => [ ["idle", 'idle_ppc64'], ['building', 'building_ppc64'] ],
  'arm_waiting' => [ ["armv5", 'waiting_armv5el'], ["armv7", 'waiting_armv7el'] ],
  'arm_blocked' => [ ["armv5", 'blocked_armv5el'], ["armv7", 'blocked_armv7el'] ],
  'arm_workers' => [ ["idle", 'idle_armv7el'], ['building', 'building_armv7el'] ],
}