[OPEN-ILS-DEV] Testing Evergreen

Nathanael Schilling nathanaelschilling at gmx.net
Sun Oct 18 13:43:19 EDT 2009


Hello.
This sounds really interesting, but I'm somewhat confused as to what you mean 
by testing. Do you mean testing that all the OpenSRF functions behave as 
expected, testing that everything compiles fine, or some other form of 
testing? (I missed large parts of the IRC meeting.)
Nathanael Schilling


On Saturday 17 October 2009 09:47:33 pm Shawn Boyette wrote:
> Hello, all. This message, per yesterday's dev meeting in IRC, is about
> my past efforts in adding a testing suite to OpenSRF, and the direction
> I was trying to go with it.
>
> One of the first things I did when I was hired at ESI was to change the
> way OpenSRF's Perl modules were handled during install. At the time,
> they were simply copied from the osrf tree into the /openils tree, and
> that was that (OpenILS's modules are still done this way). I laid down a
> CPAN-style build on them with Module::Starter, so that they would be
> moved into @INC instead.
>
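> As a sketch of what that looks like (hypothetical file, abbreviated, and
> not necessarily the real build), a CPAN-style distribution carries a
> Makefile.PL along these lines, and its install step puts the modules
> into @INC:
>
>     # Makefile.PL -- hypothetical sketch of the CPAN-style build
>     use strict;
>     use warnings;
>     use ExtUtils::MakeMaker;
>
>     WriteMakefile(
>         NAME         => 'OpenSRF',
>         VERSION_FROM => 'lib/OpenSRF.pm',      # pull $VERSION from the module
>         PREREQ_PM    => {},                    # runtime deps would go here
>         test         => { TESTS => 't/*.t' },  # wired to the test suite
>     );
>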
> I got a skeletal testing suite for free with this, and I extended it
> ever so slightly to have a use_ok() test (see Test::More in your perldoc
> or on search.cpan.org if you're unfamiliar) for each module so that, at
> the least, we could be assured that everything *compiled*.
>
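> A compile-check test along those lines looks roughly like this (sketch;
> the real module list is much longer):
>
>     # t/00-load.t -- does each module at least compile?
>     use strict;
>     use warnings;
>     use Test::More tests => 3;
>
>     use_ok('OpenSRF');
>     use_ok('OpenSRF::Utils::JSON');
>     use_ok('OpenSRF::Utils::Logger');
>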
> The next thing I did (after a bit of a hiatus) was to tweak things such
> that the Perl module tests would run from a top-level "make check",
> instead of during the "make" phase. You get this make target for free
> with the automake system, and it's where all tests should be hooked in
> the future.
>
> I'm personally more used to "make test", but to do that we'd have to
> define a null target named "test" in every Makefile.am in the system
> that doesn't have actual tests, or the build will die. So I'm very much
> in favor of using the GNU-standard "check", which happens when you have
> a target with that name in Makefile.am, and just doesn't happen when
> you don't.
>
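> For illustration, the hookup looks something like this (hypothetical
> path, not the real tree): automake runs a check-local target as part of
> "make check" wherever one is defined, and a Makefile.am without one is
> simply skipped:
>
>     # src/perl/Makefile.am -- hypothetical sketch
>     # "make check" invokes check-local wherever it is defined;
>     # directories with no tests define nothing, and nothing happens.
>     # (The recipe line must be indented with a tab, as usual.)
>     check-local:
>             cd $(srcdir) && prove -Ilib t/
>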
> I then set out to build up the Perl modules' test suite. OpenSRF
> currently exists as 2 parallel implementations: Perl and C. The Perl
> side is the complete "reference" implementation, and the C side is a
> partial, for-speed reimplementation. I speak both languages, but am more
> of a Perl programmer than a C programmer. Given that, given the Perl
> implementation's status as the declared reference, and given the richness
> of Perl's testing environment, I decided to work there first.
>
> My strategy was simple: start with the "base" modules; those which all
> the higher-level modules depended on, and which had no internal
> dependencies themselves. As the bottom layer was exhaustively tested,
> work would move "up the stack", with the next round of testing resting
> on a base which had been proven, so far as was possible, to behave
> correctly.
>
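> Mechanically, that ordering is just a topological sort of the internal
> dependency graph. A sketch of the idea (the dependency map here is
> invented for illustration, not the real graph):
>
>     # test_order.pl -- emit modules bottom-up, leaves first
>     use strict;
>     use warnings;
>
>     # module => [ internal modules it uses ]; invented example data
>     my %deps = (
>         'OpenSRF::Utils::JSON'   => [],
>         'OpenSRF::Utils::Logger' => [],
>         'OpenSRF::Transport'     => ['OpenSRF::Utils::Logger'],
>         'OpenSRF::AppSession'    => ['OpenSRF::Transport',
>                                      'OpenSRF::Utils::JSON'],
>     );
>
>     my (%seen, @order);
>     sub visit {
>         my $mod = shift;
>         return if $seen{$mod}++;
>         visit($_) for @{ $deps{$mod} || [] };   # dependencies first
>         push @order, $mod;
>     }
>     visit($_) for sort keys %deps;
>     print "$_\n" for @order;    # test in this order
>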
> This turned out to be impossible because my assumption about the
> architecture of OpenSRF was badly flawed. The namespace structure of the
> modules does not reflect their architectural structure and dependencies.
> The osrf internal dependency graph actually looks like this:
>
> http://open-ils.org/~sboyette/osrfdeps.png
>
> Initially, I couldn't figure out what to do with it. I picked
> O::U::JSON and pulled it up to 100% test coverage because it was one of
> the dangling modules with no internal dependencies. To my mind, it is
> senseless to start testing in the middle or at the top of a dependency
> graph, because you haven't yet proven that underlying code is behaving
> as expected -- you can't simply trust that everything is OK and write
> tests which enshrine current behavior, unless you are perfect in all
> respects. At best you'll have to back up and rewrite tests as you expose
> and fix bugs. At worst you'll build a test suite which is guaranteed to
> deliver broken software.
>
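> A behavior test for a module like that is straightforward. A sketch in
> the round-trip style (the perl2JSON/JSON2perl class methods here are
> illustrative):
>
>     # t/json.t -- sketch of a round-trip behavior test
>     use strict;
>     use warnings;
>     use Test::More tests => 2;
>     use OpenSRF::Utils::JSON;
>
>     my $data = { name => 'evergreen', ids => [1, 2, 3] };
>     my $json = OpenSRF::Utils::JSON->perl2JSON($data);
>     ok(defined $json, 'perl2JSON produced output');
>     is_deeply(OpenSRF::Utils::JSON->JSON2perl($json), $data,
>               'round trip preserves structure');
>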
> That said, the only plan I have been able to come up with involves doing
> exactly that -- and then turning around and tearing it down.
>
> Assuming that we want OpenSRF to be testable and provable, it must be
> refactored -- but to be safely refactored, there must be a test suite,
> so we can know that the refactored code behaves as the old code did.
> Making OpenSRF correct will therefore be a two-phase process.
>
> The first phase is writing a test suite which, basically, only does
> tests at the subroutine/module level. That is, it simply tests for an
> expected output for a given input. "Internals" tests, the ones which
> use implementation-level knowledge of the code to prove that we get the
> right answer for the right reasons, will not be written at this point.
>
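> To make the distinction concrete (module and method names here are
> hypothetical):
>
>     # t/behavior.t -- phase one: expected output for a given input
>     use strict;
>     use warnings;
>     use Test::More tests => 1;
>     use OpenSRF::Example;    # hypothetical module
>
>     is( OpenSRF::Example->normalize('  FOO  '), 'foo',
>         'normalize trims and lowercases' );
>
>     # Phase two would add internals checks -- e.g. proving the result
>     # came from the module's cache rather than being recomputed --
>     # which need implementation-level knowledge we are deferring.
>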
> Once we have a scaffold of tests which covers the behavior of the current
> osrf stack, refactoring begins. A refactoring of this magnitude will
> basically be a rewrite. Module names and structure will change, so the
> scaffolding tests will get dragged around, filesystem-wise, but the
> important thing is that the tests remain attached to the code which is
> performing the task whose behavior they check.
>
> The second phase is rewriting the test suite from "behavior" to "proof
> and correctness" testing. This might be a true, discrete phase or it
> might happen in parallel, as refactoring of individual components
> settles into stability.
>
> This is my plan, and this is where I had planned to devote my time. The
> C side of things is no less important, but I had assumed I would be
> working largely alone, so it has received much less consideration at
> this time. I am also largely ignorant of testing infrastructure in C. I
> know how to write unit tests, but I don't know what exists in the way of
> higher-level things like test harnesses and coverage tools along the
> lines of Devel::Cover. If anyone knows these things, or even better, would like
> to go ahead and work on C testing, I would welcome it :)
>
> That's all I have for now. I'll be back later with info on the Buildbot
> and other stuff.

