openSUSE Build Service/Installation

Maemo OBS Cluster Installation Notes

Introduction

Maemo OBS Clusters are divided into 3 instances:

  • Frontend – containing the web client, OBS API and MySQL server
  • Backend/Repository Server – where the OBS server, dispatcher, scheduler and repository server are installed
  • Workers – installation of the obs-worker package

These installation notes cover OBS installation and setup for version 1.7. You can always refer to the latest documentation, also available online: http://gitorious.org/opensuse/build-service/blobs/raw/master/dist/README.SETUP

Networking Overview

The Front End (FE) machine provides two major services:

  • the webui on build.obs and
  • the API on api.obs;

It is also a useful holding point for the download service on download.obs

The Back End (BE) machine provides the backend services including the schedulers, source server and repo server. The repo server (used by the workers to get required rpms) also doubles as the download server via a reverse-proxy on the FE.

10.1.1.1        host.obs.maemo.org
10.1.1.10       fe.obs.maemo.org build.obs.maemo.org api.obs.maemo.org download.obs.maemo.org
10.1.1.11       be.obs.maemo.org src.obs.maemo.org repo.obs.maemo.org
10.1.1.51       w1.obs.maemo.org
10.1.1.52       w2.obs.maemo.org
10.1.1.53       w3.obs.maemo.org
10.1.1.54       w4.obs.maemo.org
10.1.1.55       w5.obs.maemo.org
10.1.1.56       w6.obs.maemo.org
10.1.1.57       w7.obs.maemo.org
10.1.1.58       w8.obs.maemo.org
10.1.1.59       w9.obs.maemo.org
10.1.1.60       w10.obs.maemo.org
10.1.1.61       w11.obs.maemo.org
10.1.1.62       w12.obs.maemo.org

Creating Xen VMs

Based on http://en.opensuse.org/Build_Service/KIWI/Cookbook

zypper ar http://download.opensuse.org/repositories/Virtualization:/Appliances/openSUSE_11.2/ Virtualization:Appliances
zypper refresh
zypper in kiwi kiwi-templates kiwi-desc-xenboot squashfs

Create some Xen volumes

lvcreate -L 10G VG_data -n fe_root
lvcreate -L 2G  VG_data -n fe_swap
mkswap /dev/VG_data/fe_swap
 
lvcreate -L 10G VG_data -n be_root
lvcreate -L 2G  VG_data -n be_swap
mkswap /dev/VG_data/be_swap
 
lvcreate -L 10G VG_data -n w1_root
lvcreate -L 2G  VG_data -n w1_swap
mkswap /dev/VG_data/w1_swap

Prepare an openSUSE minimal image:

ROOTFS=/data/11.2min/image-root
mkdir /data/11.2min
kiwi --prepare suse-11.2-JeOS --root $ROOTFS --add-profile xenFlavour --add-package less --add-package iputils

Update the config & modules:

ROOTFS=/data/11.2min/image-root
cp -a /lib/modules/2.6.31.12-0.2-xen $ROOTFS/lib/modules/
 
echo default 10.1.1.1 > $ROOTFS/etc/sysconfig/network/routes
echo NETCONFIG_DNS_POLICY=\"\" >> $ROOTFS/etc/sysconfig/network/config
echo nameserver 8.8.8.8 > $ROOTFS/etc/resolv.conf
cat << EOF >$ROOTFS/etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='static'
BROADCAST=''
STARTMODE='onboot'
EOF
echo /dev/xvda1 swap swap defaults 0 0 >> $ROOTFS/etc/fstab


Copy to each of the VM root disks

mkfs -text3 /dev/VG_data/fe_root
mount /dev/VG_data/fe_root /mnt/lvm
rsync -HAXa /data/11.2min/image-root/ /mnt/lvm/
echo fe.obs.maemo.org > /mnt/lvm/etc/HOSTNAME
echo "IPADDR='10.1.1.10/24'" >> /mnt/lvm/etc/sysconfig/network/ifcfg-eth0
umount /mnt/lvm
 
mkfs -text3 /dev/VG_data/be_root
mount /dev/VG_data/be_root /mnt/lvm
rsync -HAXa /data/11.2min/image-root/ /mnt/lvm/
echo be.obs.maemo.org > /mnt/lvm/etc/HOSTNAME
echo "IPADDR='10.1.1.11/24'" >> /mnt/lvm/etc/sysconfig/network/ifcfg-eth0
umount /mnt/lvm
 
mkfs -text3 /dev/VG_data/w1_root
mount /dev/VG_data/w1_root /mnt/lvm
rsync -HAXa /data/11.2min/image-root/ /mnt/lvm/
echo w1.obs.maemo.org > /mnt/lvm/etc/HOSTNAME
echo "IPADDR='10.1.1.51/24'" >> /mnt/lvm/etc/sysconfig/network/ifcfg-eth0
umount /mnt/lvm

Configure some Xen VM configs

name='fe'
disk=['phy:/dev/VG_data/fe_root,xvda2,w', 'phy:/dev/VG_data/fe_swap,xvda1,w']
vif=['mac=00:16:3E:40:B5:FE']
memory='1024'
 
root='/dev/xvda2 ro'
kernel='/boot/vmlinuz-2.6.31.12-0.2-xen'
ramdisk='/boot/initrd-2.6.31.12-0.2-xen'
extra='clocksource=jiffies console=hvc0 xencons=tty'
 
on_poweroff='destroy'
on_reboot='restart'
on_crash='restart'
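
The be and w1 VMs get analogous config files (e.g. /etc/xen/be.cfg and /etc/xen/w1.cfg), reusing the volumes created earlier. A minimal sketch for be — the MAC address and memory size here are assumptions, adjust them to your environment:

name='be'
disk=['phy:/dev/VG_data/be_root,xvda2,w', 'phy:/dev/VG_data/be_swap,xvda1,w']
vif=['mac=00:16:3E:40:B5:BE']   # example locally chosen MAC in the Xen OUI range
memory='1024'

root='/dev/xvda2 ro'
kernel='/boot/vmlinuz-2.6.31.12-0.2-xen'
ramdisk='/boot/initrd-2.6.31.12-0.2-xen'
extra='clocksource=jiffies console=hvc0 xencons=tty'

on_poweroff='destroy'
on_reboot='restart'
on_crash='restart'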

And start them:

xm create /etc/xen/fe.cfg
xm create /etc/xen/be.cfg
xm create /etc/xen/w1.cfg

Interim config

zypper in wget less iputils terminfo emacs

Installing the Backend

On this host we need also to setup openSUSE Tools repository:

cd /etc/zypp/repos.d/;
wget http://download.opensuse.org/repositories/openSUSE:/Tools/openSUSE_11.2/openSUSE:Tools.repo
zypper ref
# Accept the trust key

Install:

zypper in obs-server obs-signer obs-utils createrepo dpkg nfs-client 
# obs-server brings these other packages as dependency. This is just for you to notice which packages are needed for Backend installation
# createrepo & dpkg are only recommends
# NFS client is needed as we use an NFS share for host/BE interchange

Configure Scheduler architectures

vi /etc/sysconfig/obs-server
OBS_SCHEDULER_ARCHITECTURES="i586 armv5el armv7el"

/usr/lib/obs/server/BSConfig.pm needs to point to correct server names corresponding to source server, where workers are going to download the source, and the repository server, where RPM repos are going to be shared to users.

vi /usr/lib/obs/server/BSConfig.pm
#add
$hostname="be.obs.maemo.org";
 
our $srcserver = "http://src.obs.maemo.org:5352";
our $reposerver = "http://repo.obs.maemo.org:5252";
our $repodownload = "http://$hostname/repositories";

Configure services as daemons

chkconfig --add obsrepserver obssrcserver obsscheduler obsdispatcher obspublisher obswarden obssigner
 
#Check them
chkconfig -l obsrepserver obssrcserver obsscheduler obsdispatcher obspublisher

Start Services

   rcobsrepserver start
   rcobssrcserver start
   rcobsscheduler start
   rcobsdispatcher start
   rcobspublisher  start

For version 1.7 there are new services. You can start them as well (a sketch for enabling them follows the list):

  • obswarden
  It checks if build hosts are dying and cleans up hanging builds
  • obssigner
  It is used to sign packages via the obs-sign daemon. You need to configure
  it in BSConfig.pm before you can use it.
  • obsservice
  This is the source service daemon. OBS 1.7 just comes with a download
  service so far. This feature is considered to be experimental so far,
  but can be already extended with own services.
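
A minimal sketch for enabling and starting them (obswarden and obssigner were already registered with chkconfig above; the obsservice init script name is taken from the list and assumed to follow the same rcobs* pattern):

# register the source service daemon as well
chkconfig --add obsservice

rcobswarden start
rcobssigner start    # only useful once signing is configured in BSConfig.pm
rcobsservice start   # experimental source service in 1.7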

Install Lighttpd

lighttpd also needs to be available on the backend server. It is required to provide directory listings for the repositories on this server when an HTTP(S) request to maemo-repo is made through the web UI.

 
zypper in lighttpd

Create a new file under /etc/lighttpd/vhosts.d/ (it can be called obs.conf here as well) and add:

vi /etc/lighttpd/vhosts.d/obs.conf
 
$HTTP["host"] =~ "repo.obs.maemo.org" {
  server.name = "repo.obs.maemo.org"
 
  server.document-root = "/srv/obs/repos/"
  dir-listing.activate = "enable"
}

To enable vhosts, remember to uncomment the following in the 'custom includes':

vi /etc/lighttpd/lighttpd.conf
##
  ## custom includes like vhosts.
  ##
  #include "conf.d/config.conf"
  # following line uncommented as per
  # /usr/share/doc/packages/obs-api/README.SETUP
  include_shell "cat vhosts.d/*.conf"

Start lighttpd

#first add it as a daemon
chkconfig --add lighttpd
rclighttpd start

Installing the FrontEnd (WebUI and API)

Start with a minimal SUSE install and then add the Tools repository where OBS 1.7 is available.

cd /etc/zypp/repos.d/;
wget http://download.opensuse.org/repositories/openSUSE:/Tools/openSUSE_11.2/openSUSE:Tools.repo
zypper ref
# Accept the trust key

Install obs-api (it will pull in the lighttpd web server as a dependency):

zypper in obs-api memcached

Setup MySQL

The MySQL server needs to be installed and configured to start as a daemon:

chkconfig --add mysql
rcmysql start

Set up a secure installation if this is the first time starting MySQL:

/usr/bin/mysql_secure_installation

The frontend instance holds two applications, the API and the webui. Each one needs its own database:

mysql -u root -p
mysql> create database api_production;
mysql> create database webui_production;

Add an obs user to handle these databases:

GRANT all privileges
      ON api_production.* 
      TO 'obs'@'%', 'obs'@'localhost' IDENTIFIED BY '************';
GRANT all privileges
      ON webui_production.* 
      TO 'obs'@'%', 'obs'@'localhost' IDENTIFIED BY '************';
FLUSH PRIVILEGES;

Configure your MySQL user and password in the "production:" section of the API config:

vi /srv/www/obs/api/config/database.yml
#change the production section
production:
  adapter: mysql
  database: api_production
  username: obs
  password: ************

Do the same for the webui. It is configured by default to use SQLite, but since we're configuring the cluster for a production environment, let's bind it to MySQL:

vi /srv/www/obs/webui/config/database.yml
#change the production section
production:
  adapter: mysql
  database: webui_production
  username: obs
  password: ************

Populate the databases

cd /srv/www/obs/api/
RAILS_ENV="production" rake db:migrate
 
cd /srv/www/obs/webui/
RAILS_ENV="production" rake db:migrate

You can check that the migration was successful by verifying the “migrated” message at the end of each statement.
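
For an explicit check, a quick sketch — assuming the standard Rails db:version rake task is available in this Rails version:

cd /srv/www/obs/api/
RAILS_ENV="production" rake db:version    # prints the current schema version

cd /srv/www/obs/webui/
RAILS_ENV="production" rake db:version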

Setup and configure lighttpd for the API and webui

You need to set up the correct hostnames that the webui, API and repo server are going to point to:

vi /etc/lighttpd/vhosts.d/obs.conf
$HTTP["host"] =~ "build" {
  rails_app   = "webui"
  rails_root  = "/srv/www/obs/webui"
  rails_procs = 3
  # production/development are typical values here
  rails_mode  = "production"
 
  log_root = "/srv/www/obs/webui/log"
 
  include "vhosts.d/rails.inc"
}
$HTTP["host"] =~ "api" {
  rails_app   = "api"
  rails_root  = "/srv/www/obs/api"
  rails_procs = 3
  # production/development are typical values here
  rails_mode  = "production"
 
  log_root = "/srv/www/obs/api/log"
 
  include "vhosts.d/rails.inc"
}
$HTTP["host"] =~ "download" {
# This should point to an rsync populated download repo
#  server.name = "download.obs.maemo.org"
#  server.document-root = "/srv/obs/repos/"
 
  proxy.server = ( "" => ( (
        "host" => "10.1.1.11",
        "port" => 80
      ))
  )
}
 
To enable these vhosts, make sure to uncomment the following in the 'custom includes' section at the bottom of /etc/lighttpd/lighttpd.conf:
vi /etc/lighttpd/lighttpd.conf
##
  ## custom includes like vhosts.
  ##
  #include "conf.d/config.conf"
  # following line uncommented as per
  # /usr/share/doc/packages/obs-api/README.SETUP
  include_shell "cat vhosts.d/*.conf"

Also, the modules "mod_magnet", "mod_rewrite" and FastCGI need to be enabled by uncommenting the corresponding lines in /etc/lighttpd/modules.conf:

vi /etc/lighttpd/modules.conf
    server.modules = (
      "mod_access",
    #  "mod_alias",
    #  "mod_auth",
    #  "mod_evasive",
    #  "mod_redirect",
      "mod_rewrite",
    #  "mod_setenv",
    #  "mod_usertrack",
    )
 
    ##
    ## mod_magnet
    ##
    include "conf.d/magnet.conf"
 
    ##
    ## FastCGI (mod_fastcgi)
    ##
    include "conf.d/fastcgi.conf"

You also need to configure /srv/www/obs/webui/config/environments/production.rb to point to the correct server names:

vi /srv/www/obs/webui/config/environments/production.rb
FRONTEND_HOST = "api.obs.maemo.org"
FRONTEND_PORT = 80
EXTERNAL_FRONTEND_HOST = "api.obs.maemo.org"
BUGZILLA_HOST = "http://bugs.maemo.org/"
DOWNLOAD_URL = "http://downloads.obs.maemo.org"

Do the same for /srv/www/obs/api/config/environments/production.rb. If your backend is not on the same machine as the API (frontend), change the following:

vi /srv/www/obs/api/config/environments/production.rb
SOURCE_HOST = "src.obs.maemo.org"
SOURCE_PORT = 5352

Make sure TCP port 5352 is open on the firewall (a sketch follows below). Ensure lighttpd and the OBS UI helper services start at boot:

chkconfig --add memcached
chkconfig --add lighttpd
chkconfig --add obsapidelayed
chkconfig --add obswebuidelayed
 
rcmemcached start
rclighttpd start
rcobsapidelayed start
rcobswebuidelayed start
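
For the firewall, a minimal sketch using SuSEfirewall2 (an assumption that the default openSUSE firewall is in use; adjust if you manage iptables directly). Port 5352 must be reachable on the backend, where the source server runs:

# on be: allow the frontend/API to reach the source server port
vi /etc/sysconfig/SuSEfirewall2
FW_SERVICES_EXT_TCP="5352"

rcSuSEfirewall2 restart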

The lighttpd user and group need to own the api and webui directories (as well as log and tmp):

chown -R lighttpd.lighttpd /srv/www/obs/{api,webui}

Installing the Workers

The other 14 hosts on the cluster are reserved to be used as workers, where package builds are going to take place.

The same openSUSE Tools repository addition must be done for each worker.

cd /etc/zypp/repos.d/;
wget http://download.opensuse.org/repositories/openSUSE:/Tools/openSUSE_11.2/openSUSE:Tools.repo
zypper ref
# Accept the trust key

Install worker:

zypper in obs-worker qemu-svn mount-static bash-static

(mount-static and bash-static are needed on the worker for rpm cross-compile to work)

Edit the file /etc/sysconfig/obs-worker so that it points to the correct source and repository servers.

vi /etc/sysconfig/obs-worker
OBS_SRC_SERVER="src.obs.maemo.org:5352"
OBS_REPO_SERVERS="repo.obs.maemo.org:5252"
OBS_VM_TYPE="none"

Each worker host has 16 CPUs, so 16 worker instances need to be started (see the sketch below).
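
A sketch of how that can be set in the same sysconfig file — the OBS_WORKER_INSTANCES variable name is an assumption for this OBS version, check /etc/sysconfig/obs-worker for the exact name:

vi /etc/sysconfig/obs-worker
# assumed variable; one build instance per CPU
OBS_WORKER_INSTANCES="16"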

Start the worker service:

chkconfig --add obsworker
 
rcobsworker start

Tuning

The lighttpd config was reduced from 5 to 3 api/webui child processes (rails_procs in the vhost configuration above).

Revise VG setup

pvcreate /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part1
pvcreate /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part2
pvcreate /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part3
pvcreate /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part4

Now create the OBS VG for the host Xen worker:

vgcreate OBS /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part1
vgextend OBS /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part2


lvcreate -L 20G -n cache OBS
mkfs -text4 /dev/OBS/cache
for i in 1 2 3 4 5 6 7 8 9 10
do
  lvcreate -L 4G -n worker_root$i OBS
  mkfs -text4 /dev/OBS/worker_root$i
  lvcreate -L 1G -n worker_swap$i OBS
  mkswap -f /dev/OBS/worker_swap$i
done

Note that in 1.7.3 obsstoragesetup creates /OBS.worker/root1/root but obsworker looks for /OBS.worker/root_1/root (root1 vs root_1).
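
One possible workaround until that is fixed — purely a sketch, verify the actual directory names on your install before creating the links:

# map the directories obsstoragesetup created to the names obsworker expects
for i in $(seq 1 10); do
  ln -s /OBS.worker/root$i /OBS.worker/root_$i
done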


Also ensure the host has:

obs-worker (1.7.3)
qemu-svn

In order to run ARM binaries in the Xen chroot we need to ensure the initrd loads binfmt_misc. In /etc/sysconfig/kernel, set:

 DOMU_INITRD_MODULES="xennet xenblk binfmt_misc"

then

mkinitrd -k vmlinuz-2.6.31.12-0.2-xen -i initrd-2.6.31.12-0.2-xen-binfmt -B

On the backend machine, edit /usr/lib/build/xen.conf to use the correct initrd.
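
A hypothetical sketch of the relevant lines — the exact option names vary, so check the existing xen.conf for the real syntax:

# point the worker VMs at the binfmt-enabled initrd built above
kernel = "/boot/vmlinuz-2.6.31.12-0.2-xen"
ramdisk = "/boot/initrd-2.6.31.12-0.2-xen-binfmt"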