Migrating to Community-driven Infrastructure
== Introduction ==

[up to date as of 2013-02-08]
All VMs were migrated to the IPHH server; DNS is still owned and managed by Nokia. [2013-05-29]
=== Setup with IPHH ===
==== Networks ====

We have two /28 subnets (213.128.137.0/28 and 213.128.137.16/28), each providing 14 usable host addresses.

The networks are configured as follows:

{|
! IPv4 !! IPv6 !! VLAN !! Xen Bridge !! Default GW
|-
| 213.128.137.0/28 || not yet || 1 || xenbr0 || 213.128.137.14
|-
| 213.128.137.16/28 || not yet || 2 || xenbr1 || 213.128.137.17
|-
| 10.0.1.0/24 || not yet || 3 || xenbr2 || 10.0.1.1
|}
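As a concrete illustration, a Debian-style /etc/network/interfaces fragment for the dom0 bridges could look like the sketch below. The physical interface name (eth0) and the use of 802.1q sub-interfaces for the VLAN tags are assumptions, not taken from this page:

```
# xenbr0: VLAN 1, public /28 (dom0 address for blade-a assumed)
auto xenbr0
iface xenbr0 inet static
    address 213.128.137.4
    netmask 255.255.255.240
    gateway 213.128.137.14
    bridge_ports eth0.1

# xenbr2: VLAN 3, management network; dom0 needs no public address here
auto xenbr2
iface xenbr2 inet manual
    bridge_ports eth0.3
```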

IP plan for VLAN 1:

{|
! IPv4 !! IPv6 !! Hostname
|-
| 213.128.137.1 || n/a || firewall-carp
|-
| 213.128.137.2 || n/a || firewall-a
|-
| 213.128.137.3 || n/a || firewall-b
|-
| 213.128.137.4 || n/a || blade-a
|-
| 213.128.137.5 || n/a || blade-b
|-
| 213.128.137.6 || n/a || port forwarding for monitor
|-
| 213.128.137.7 || n/a ||
|-
| 213.128.137.8 || n/a ||
|-
| 213.128.137.9 || n/a ||
|-
| 213.128.137.10 || n/a ||
|-
| 213.128.137.11 || n/a ||
|-
| 213.128.137.12 || n/a || IPHH Router 1
|-
| 213.128.137.13 || n/a || IPHH Router 2
|-
| 213.128.137.14 || n/a || IPHH-VRRP
|}

IP plan for VLAN 2:

{|
! IPv4 !! IPv6 !! Hostname !! Aliases
|-
| 213.128.137.17 || n/a || firewall-carp || -
|-
| 213.128.137.18 || n/a || firewall-a || -
|-
| 213.128.137.19 || n/a || firewall-b || -
|-
| 213.128.137.20 || n/a || www || static, maemo.org, planet, downloads
|-
| 213.128.137.21 || n/a || wiki || bugs
|-
| 213.128.137.22 || n/a || repository || stage
|-
| 213.128.137.23 || n/a || mail || lists
|-
| 213.128.137.24 || n/a || scratchbox || -
|-
| 213.128.137.25 || n/a || vcs || drop
|-
| 213.128.137.26 || n/a || garage || -
|-
| 213.128.137.27 || n/a || builder || -
|-
| 213.128.137.28 || n/a || talk || -
|-
| 213.128.137.29 || n/a || DNS || -
|-
| 213.128.137.30 || n/a || - || -
|}

IP plan for VLAN 3:

{|
! IPv4 !! IPv6 !! Hostname
|-
| 10.0.1.1 || n/a || firewall-carp
|-
| 10.0.1.2 || n/a || firewall-a
|-
| 10.0.1.3 || n/a || firewall-b
|-
| 10.0.1.10 || n/a || db
|-
| 10.0.1.11 || n/a || monitor
|-
| 10.0.1.200 || n/a || blade-a/IPMI
|-
| 10.0.1.201 || n/a || blade-b/IPMI
|-
| 10.0.1.202 || n/a || maemo-switch
|}

==== Disk Layout of blade-[ab] ====

Both disks are partitioned identically:

* /dev/md0: RAID1 volume for /boot, consisting of /dev/sda1 and /dev/sdb1 (200M)
* /dev/md1: RAID1 volume consisting of /dev/sda2 and /dev/sdb2 (around 970G)

/dev/md1 contains an LVM physical volume. There is only one volume group (vg_blade[ab]), which holds LogVol00 (20G, root volume), LogVol01 (2G, swap) and vmstore (the rest, used as VM storage mounted on /vmstore).
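For reference, a provisioning sequence that would produce this layout might look like the sketch below. The device and volume names come from the text above, but the exact flags are illustrative; this is not the recorded command history from the actual install:

```shell
# RAID1 mirrors across both disks (sketch, not the recorded install commands)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # 200M, /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # ~970G

# LVM on top of the big mirror: one VG, three LVs
pvcreate /dev/md1
vgcreate vg_bladea /dev/md1
lvcreate -L 20G -n LogVol00 vg_bladea        # root
lvcreate -L 2G  -n LogVol01 vg_bladea        # swap
lvcreate -l 100%FREE -n vmstore vg_bladea    # VM storage, mounted on /vmstore
```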
==== Tips & Tricks for migration ====
|}
=== OS and virtualization on community iron (planning, discussion) ===
Please don't forget to tag your contributions with your nick!
=== More Detailed Information ===
This subsection holds more detailed information about the entries in the table above. The intent is to keep the table concise while still having all relevant information at hand.
* OBS @ Tizen or SuSE: https://bugs.tizen.org/jira/browse/TINF-48?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
== Autobuilder and friends ==

This describes the maemo autobuilder setup.

The autobuilder consists of multiple VMs.

=== drop VM ===
This VM has /etc/passwd synchronised with garage, and the home directories are mounted via NFS from garage.

Account synchronisation is handled by scripts running on the garage VM; the sync is then triggered over ssh by scripts in /usr/local/bin.

Packages are uploaded to /mnt/incoming-builder via SCP.

=== garage VM ===
This is the VM where most of the work happens.

Password/account sync to gforge/postgresql is done via cron:
 */10 * * * * root /usr/local/bin/add_groups_users_git_ssh.sh > /tmp/add_groups_users_git_ssh.log 2>&1
This also updates ~/.ssh/authorized_keys.

garage also handles the web extras-uploader (/var/lib/extras-assistant/): a package is uploaded, then moved to the same folder as packages uploaded to drop, and chowned using

 /var/lib/extras-assistant/bin/copy_package_files_to_autobuilder.sh

Many of the jobs on the garage VM are run from the local root crontab (/var/spool/cron/crontabs/root).

After a package is uploaded it is processed by buildme.

buildme runs as the builder user and is started from cron every minute:
 * * * * * builder /home/builder/buildme

buildme is configured via /etc/buildme.conf.

buildme takes care of a couple of things:
* verifying that the .tar.gz and other files are correct (checked against the checksums from the .dsc file)
* selecting a free destination (buildme can handle parallel builds on multiple hosts/users)
* scp'ing all required files to the selected destination
* starting sbdmock on the destination
* copying the results back, and the resulting .deb to the repository incoming folder (result_dir = /mnt/builder/%(product)s and repo_queue = /mnt/incoming/extras-devel/%(product)s/)
* sending emails to the list and to the user who uploaded the package
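The first step above, verifying an upload against the checksums in its .dsc, can be sketched as a small shell function. The function name is hypothetical; buildme's actual implementation is not reproduced on this page:

```shell
# Hypothetical sketch: verify each file listed in the "Files:" section of a
# .dsc against its recorded md5sum and size, as buildme does before building.
verify_dsc() {
    dsc=$1
    dir=$(dirname "$dsc")
    # Lines in the Files: section look like " <md5sum> <size> <filename>".
    sed -n '/^Files:/,/^[^ ]/p' "$dsc" | grep '^ ' |
    while read -r sum size name; do
        actual_sum=$(md5sum "$dir/$name" | cut -d' ' -f1)
        actual_size=$(wc -c < "$dir/$name" | tr -d ' ')
        if [ "$sum" = "$actual_sum" ] && [ "$size" = "$actual_size" ]; then
            echo "OK $name"
        else
            echo "BAD $name"
            exit 1            # aborts the subshell; the function returns 1
        fi
    done
}
# Example: verify_dsc /mnt/incoming-builder/hello_1.0-1.dsc
```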

=== builder VM ===

This VM has a standard installation of scratchbox with no targets configured (none are required for sbdmock).

When sbdmock is started it cleans up the old build folder, creates a new target, prepares the build environment and then runs dpkg-buildpackage.

sbdmock also generates the logfiles that are parsed by buildme.

=== repository/stage VM ===

This is where repository management happens:
 */2 * * * * repository /home/repository/queue-manage-extras-devel.sh
 */5 * * * * repository /home/repository/queue-manage-extras.sh
 */5 * * * * repository /home/repository/queue-manage-community-testing.sh
 */5 * * * * repository /home/repository/queue-manage-community.sh

These scripts (and the scripts inside /home/repository/queue-manager-extras) check for new packages in the repository incoming folder, move them to /var/repository/staging, regenerate Packages (using previously cached sums) and sign it if required. If any changes happened:
 #touch .changed file, so we know that we need to sync to live
 touch /var/repository/staging/community/.$dist.changed
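A minimal sketch of one queue-manage pass follows. The function and directory names are assumed, and the real scripts additionally handle Sources indexes, checksum caching and signing:

```shell
# Hypothetical sketch: move new .debs from incoming to staging, regenerate
# the Packages index, and leave a .changed marker for the sync daemon.
queue_manage() {
    incoming=$1 staging=$2 dist=$3
    moved=0
    for f in "$incoming"/*.deb; do
        [ -e "$f" ] || continue          # glob matched nothing
        mv "$f" "$staging/$dist/"
        moved=1
    done
    if [ "$moved" -eq 1 ]; then
        # The real scripts rebuild the apt index here, reusing cached sums,
        # and sign the result if required; a placeholder stands in for that:
        ls "$staging/$dist" > "$staging/$dist/Packages"
        touch "$staging/.$dist.changed"   # tells rqp.sh a live sync is needed
    fi
}
```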

This file is then checked by
 1003 10634 1 0 Mar18 ? 00:00:00 /bin/sh /usr/local/bin/packages/rqp.sh
which is started by /etc/init.d/repository-qp.

This script starts rsync when required, to sync to the live repository.

It also starts repository-queue-proc.php, which processes repository updates coming from midgard (old package cleanup and promotions).