[OPEN-ILS-DEV] Continuous integration - build slaves needed!

Sharp, Chris csharp at georgialibraries.org
Fri Jan 28 08:01:16 EST 2011


GPLS will be able to help too, once our VM server is moved to a new IP block.  (Still in the works).

Chris Sharp
PINES Program Manager
Georgia Public Library Service
1800 Century Place, Suite 150
Atlanta, Georgia 30345
(404) 235-7147
csharp at georgialibraries.org
http://pines.georgialibraries.org/

----- Original Message -----
> From: "Grant Johnson" <fgjohnson at upei.ca>
> To: "Evergreen Development Discussion List" <open-ils-dev at list.georgialibraries.org>
> Sent: Friday, January 28, 2011 7:59:26 AM
> Subject: Re: [OPEN-ILS-DEV] Continuous integration - build slaves needed!
> Dan,
> 
> UPEI can offer a VM or 2. Let me know what you need.
> 
> 
> On 1/28/11, Dan Scott <dan at coffeecode.net> wrote:
> > Hi folks:
> >
> > To make the best use of the buildbot continuous integration server, we
> > need a few different build slaves (servers) that can run the
> > checkout/configure/compile/test steps and report back to the build master.
> >
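For a concrete picture of the master side, here is a minimal buildbot 0.8-era
master.cfg sketch of one slave and one builder running that cycle; the slave
name, password, repository URL, and build commands are placeholders rather than
the project's actual configuration, and schedulers/status targets are omitted:

    # master.cfg -- minimal sketch (buildbot 0.8.x); names, passwords, and the
    # repo URL below are placeholders, not the real setup.
    from buildbot.buildslave import BuildSlave
    from buildbot.config import BuilderConfig
    from buildbot.process.factory import BuildFactory
    from buildbot.steps.source import Git
    from buildbot.steps.shell import Configure, Compile, Test

    c = BuildmasterConfig = {}

    # Each donated VM registers with a slave name and password of its own.
    c['slaves'] = [BuildSlave('debian-squeeze-eg', 'slavepass')]
    c['slavePortnum'] = 9989

    # The checkout/configure/compile/test cycle the slave runs on each commit.
    f = BuildFactory()
    f.addStep(Git(repourl='git://git.evergreen-ils.org/Evergreen.git'))
    f.addStep(Configure(command=['./autogen.sh']))
    f.addStep(Compile())
    f.addStep(Test(command=['make', 'check']))

    c['builders'] = [BuilderConfig(name='evergreen-debian-squeeze',
                                   slavenames=['debian-squeeze-eg'],
                                   factory=f)]
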
> > We need different servers because we want to test:
> >
> > 1) different operating systems (e.g. Debian Lenny, Debian Squeeze,
> > Ubuntu Lucid, RHEL 5 / CentOS 5, RHEL 6 / eventually CentOS 6, etc)
> >
> > 2) different combinations of OpenSRF versions + Evergreen versions
> >
> > For #2, it's easy to build many different Evergreen branches against a
> > single version of OpenSRF that has been installed on a given server, but
> > it gets a lot more complex to properly test different versions of OpenSRF
> > + different versions of Evergreen on a single server; you get into having
> > to give the build slave the appropriate permissions to run 'make install'
> > and teaching it how to uninstall OpenSRF absolutely cleanly to avoid
> > leaving any old Perl modules or shared libraries or headers around that
> > you don't want polluting your clean environment. Given the elevated
> > permissions that would give the build slave, and the complexity of
> > getting the OpenSRF uninstall right, in the short term I think we would
> > be better off just installing a known version of OpenSRF (a mix of 1.6.2
> > and perhaps 2.0 prereleases) on the intended Evergreen build slaves and
> > focusing on the Evergreen build results on those.
> >
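Continuing the master.cfg sketch above, that "one known OpenSRF per slave, many
Evergreen branches per slave" approach could be expressed by generating a
builder per (slave, branch) pair; the slave and branch names here are
illustrative only:

    # Sketch: each slave carries one pre-installed OpenSRF for its distro; a
    # builder is generated per (slave, Evergreen branch) pair. Names are
    # illustrative, not the actual roster.
    from buildbot.config import BuilderConfig
    from buildbot.process.factory import BuildFactory
    from buildbot.steps.source import Git
    from buildbot.steps.shell import Configure, Compile, Test

    slaves = ['debian-squeeze-eg', 'ubuntu-lucid-eg', 'centos5-eg']
    branches = ['master', 'rel_2_0', 'rel_1_6_1']

    c['builders'] = []
    for slave in slaves:
        for branch in branches:
            f = BuildFactory()
            f.addStep(Git(repourl='git://git.evergreen-ils.org/Evergreen.git',
                          branch=branch))
            f.addStep(Configure(command=['./autogen.sh']))
            f.addStep(Compile())
            f.addStep(Test(command=['make', 'check']))
            c['builders'].append(BuilderConfig(name='eg-%s-%s' % (branch, slave),
                                               slavenames=[slave],
                                               factory=f))
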
> > The build master server that Equinox contributed (thank you!) is running
> > Ubuntu Lucid x86_64, and is also being used as a build slave to test
> > OpenSRF branches (currently just trunk, but I will extend that to test
> > rel_2_0 and rel_1_6 in the near future).
> >
> > So - are there community members able to contribute a server or two to
> > the continuous integration cause? The requirements would be pretty low;
> > all that these build slaves need to do once they're set up with a distro,
> > the OpenSRF and/or Evergreen dependencies, and a buildbot slave instance
> > is connect to the build master, check to see if any commits have been
> > made to our repo, and if so then update the local source, configure,
> > compile, and run our unit tests, then report the results back to the
> > build master. I believe a VM with 16 GB of disk and 512 MB of RAM would
> > be plenty. The VM should be generally firewalled off from the rest of the
> > host's network, and incoming access could be limited to SSH so that the
> > build slave's owner could update dependencies from time to time and
> > restart the build slave process.
> >
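On the donated VM itself, the buildbot slave is just a small Twisted service.
Its buildbot.tac (normally generated by the 0.8.x buildslave create-slave
command) would look roughly like the sketch below; the master hostname, slave
name, and password stand in for whatever the build master's admin hands out:

    # buildbot.tac on a donated slave VM (buildslave 0.8.x); the master host,
    # slave name, and password are placeholders.
    from twisted.application import service
    from buildslave.bot import BuildSlave

    basedir = '/home/buildbot/evergreen-slave'
    buildmaster_host = 'buildmaster.example.org'  # placeholder hostname
    port = 9989
    slavename = 'debian-squeeze-eg'
    passwd = 'slavepass'
    keepalive = 600
    usepty = 0

    application = service.Application('buildslave')
    s = BuildSlave(buildmaster_host, port, slavename, passwd, basedir,
                   keepalive, usepty)
    s.setServiceParent(application)
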
> > I could see a strong incentive for sites that run on a particular distro
> > wanting to ensure that Evergreen continues to get tested regularly on
> > that distro, even if the devs go crazy and all switch to Fedora (ahem)
> > for their day-to-day development purposes.
> >
> > If we don't want to absorb the overhead of coordinating machines at
> > different institutions with different owners, etc, then another option
> > would be to purchase base Linode VMs (http://www.linode.com) at
> > $20/month/VM and give the CI team members (hah, hi) access to set up and
> > maintain those servers; possibly financed via charitable donations to the
> > Software Freedom Conservancy earmarked for this purpose once we have our
> > signed agreement with the Conservancy? Or similar, I guess, for EC2
> > instances or whatever (although it's out of my realm of experience,
> > buildbot does provide an EC2 build slave that can provision AMIs on
> > demand if we wanted to test that route - but then you're getting into AWS
> > keys and credit cards and complexity of a less technical but possibly
> > murky financial nature).
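
The EC2 option mentioned there is buildbot's latent build slave (0.8.x API,
which relies on the boto library); a sketch of how it would be declared on the
master, with the AMI id, instance type, and credential names as placeholders:

    # On-demand EC2 build slave (buildbot 0.8.x); values are placeholders.
    from buildbot.ec2buildslave import EC2LatentBuildSlave

    c['slaves'] = [
        EC2LatentBuildSlave('ec2-lucid-eg', 'slavepass', 'm1.small',
                            ami='ami-00000000',
                            identifier='aws-access-key-id',
                            secret_identifier='aws-secret-access-key'),
    ]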
> >
> > Dan
> >
> 
> --
> Sent from my mobile device
> 
> -----------------------
> F. Grant Johnson
> Systems Manager - Robertson Library
> 
> email : fgjohnson at upei.ca, phone: 566-0630
> cell: 393-4920
> skype: jetsongeorge | twitter: fgjohnson | facebook: fgjohnson | blog:
> granitize.blogspot.com

