[OPEN-ILS-DEV] writing tests for OpenSRF methods?
Dan Scott
dan at coffeecode.net
Wed Dec 28 13:39:33 EST 2011
On Wed, Dec 28, 2011 at 12:53:17PM -0500, Mike Rylander wrote:
> On Dec 27, 2011 6:35 PM, "Dan Scott" <dan at coffeecode.net> wrote:
> >
> > On Tue, Dec 27, 2011 at 05:12:26PM -0600, Scott Prater wrote:
> > > What I'd really like to do is write tests for the OpenSRF methods I
> > > created that simulate as closely as possible the requests made by the
> > > javascript to the OpenSRF backend, so that I can make sure I cover all
> > > the possible use cases, get expected responses, and be able to rerun
> > > the tests whenever any changes are made.
> > >
> > > My tests would do all the things normal tests do: seed the
> > > database with test data, execute the methods with some mock objects,
> > > and compare the responses to other mock objects, then delete the test
> > > data from the database.
> > >
> > > Where would be the best place to put such tests in the source tree?
> >
> > For functional verification tests like this that would require a
> > complete running system, I think a subdirectory under Open-ILS/tests
> > would be perfectly appropriate. If you need seed data for bib records,
> > copies, call numbers, located URIs, monograph parts, and conjoined items,
> > you might find Open-ILS/tests/datasets/concerto.sql useful. Sounds like
> > that's not the focus of your current efforts, but perhaps a similar
> > approach would be useful for seeding the data you need - particularly if
> > you need to create "historical" data such as past circulation history,
> > etc, that might not be as easy to create using strict OpenSRF API calls.
> >
>
> Outside the (unfortunately, yes) minimal in-EG tests, there's also the
> Constrictor project. Bill Erickson built this specifically for API testing
> and benchmarking. It's driven by relatively simple configuration files,
> provides full-stack testing with expected result comparison, and measures
> various timing components of each test. It has the added benefit of being
> able to control a cluster of test-running clients to simulate load for
> those parts of the code that are load-sensitive, such as optional database
> replication, process-local caching and transaction control.
>
> I'll have to defer to Bill on the current whereabouts of a Constrictor
> repo, though, as even the readonly svn repo from before the age of git
> seems to be missing.
http://svn.open-ils.org/trac/ILS-Contrib/wiki/Constrictor - the
Subversion repo checkout instructions there still work for me.
A few drawbacks come to mind, though. Constrictor is written in Python,
and nothing else in the current Evergreen / OpenSRF stack uses Python.
The framework also isn't integrated in any way with the core repo, so
as APIs and data structures change, Constrictor can drift out of sync.
And IIRC from the times I've run it, the data set Constrictor uses is
up to the user, who is also responsible for setup and teardown. Don't
get me wrong, Constrictor does some great things, but I think those
fall more on the stress-testing side than on the regression-testing
side.
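For what it's worth, the setup/teardown piece doesn't have to be fancy;
one approach is to load the seed data inside a transaction and roll it
back when you're done. Here's a rough sketch using DBI - the actor.usr
columns are from stock Evergreen, but the specific profile / ident_type
/ home_ou values are illustrative, so adjust for your own schema:

    #!/usr/bin/perl
    # Sketch: seed a throwaway patron inside a transaction, hand off
    # to the tests, then roll back so the database is left untouched.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect(
        'dbi:Pg:dbname=evergreen;host=localhost',
        'evergreen', 'evergreen',
        { AutoCommit => 0, RaiseError => 1 }
    );

    # setup: seed data that only this connection can see
    $dbh->do(q{
        INSERT INTO actor.usr (usrname, passwd, family_name,
                               profile, ident_type, home_ou)
        VALUES ('test_patron', 'demo123', 'Test', 1, 1, 1)
    });

    # ... run assertions against the seeded data here ...

    # teardown: nothing persists past the rollback
    $dbh->rollback;
    $dbh->disconnect;

The caveat is that a rollback-only approach works when the tests share
that database handle; anything exercising the full OpenSRF stack gets
its own connections, so there you'd have to commit the seed data and
issue explicit DELETEs in the teardown instead.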
Being able to run standardized tests via a simple "prove" command -
after ./autogen.sh && configure && make && make install && setting up a
clean database schema && osrf_ctl.sh && autogen.sh (assuming a
preexisting opensrf_core.xml / opensrf.xml / ejabberd setup - geez, we
really do that to people?) - would be awesome. I'm not sure Constrictor
is set up to fit into a TAP harness, but maybe it could work there too.
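To make that concrete, here's roughly the sort of thing I'm picturing
living under Open-ILS/tests/ - a plain Test::More script that prove can
run against a live system. This sketch assumes the stack is up and the
config lives at the usual /openils/conf/opensrf_core.xml, and it leans
on opensrf.system.echo, which IIRC the Perl services all pick up from
OpenSRF::Application:

    #!/usr/bin/perl
    # 01-echo.t - minimal round trip through a running OpenSRF service.
    use strict;
    use warnings;
    use Test::More tests => 2;
    use OpenSRF::System;
    use OpenSRF::AppSession;

    # connect this script to the OpenSRF network as a client
    OpenSRF::System->bootstrap_client(
        config_file => '/openils/conf/opensrf_core.xml'
    );

    my $ses = OpenSRF::AppSession->create('opensrf.settings');
    my $result = $ses->request('opensrf.system.echo', 'hello')->gather(1);

    ok(defined $result, 'got a response from opensrf.settings');
    is($result, 'hello', 'echo returned what we sent');

    $ses->disconnect;

Swap in open-ils.* calls and seed data along the lines of concerto.sql
and you'd have the beginnings of a real regression suite.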