[OPEN-ILS-GENERAL] RELEASE TEAM: The beginning!

Dan Scott dan at coffeecode.net
Thu Jan 27 00:02:56 EST 2011


On Thu, Jan 27, 2011 at 01:20:20PM +1000, Joel Harbottle wrote:
> Hi All,
> 
> I have read the webpage linked in the email that was sent to the
> list (below).
> 
> I'm interested in being one of the 'Testers'. Who would I need to talk
> to in order to become a tester? Also, note that I'm in Australia.

Hi Joel,

Another member of the Commonwealth is always welcome!

The 'Testers' group is still in a very fledgling state as far as
organization goes, so it's a great time to jump in. One of our project's
current weaknesses is a lack of formal testing on both the automated and
manual fronts. We're not the first project in existence to have faced
that hurdle, so perhaps we could adopt the approach of the Firefox
community and their very practical, results-oriented "Litmus" testing
project.

In a nutshell, the Mozilla QA team defines a set of test cases to cover
as much of the expected behaviour of the browser as possible at three
levels:

  * Full functional tests - every function the browser could perform,
    run before major releases
  * Basic functional tests - subset of "full", run before minor
    releases
  * Smoketests - subset of "basic", run on a nightly basis to catch
    regressions
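
To make the nesting concrete, here's a quick sketch in Python - purely
illustrative, with made-up test cases, and not how Mozilla actually
implements it:

  # Illustrative only: the three suites nest, so a case tagged with a
  # smaller suite also belongs to every larger one (smoke < basic < full).
  LEVELS = ["smoke", "basic", "full"]

  test_cases = [
      {"title": "Browser starts up", "level": "smoke"},
      {"title": "Open a page in a new tab", "level": "basic"},
      {"title": "Print a page", "level": "full"},
  ]

  def suite(level):
      """Return every test case that belongs to the named suite."""
      cutoff = LEVELS.index(level)
      return [tc for tc in test_cases
              if LEVELS.index(tc["level"]) <= cutoff]

  # suite("basic") yields the smoketests plus the basic-only cases.
  print([tc["title"] for tc in suite("basic")])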

Each test case has a title, a set of steps to follow, and an expected
result
(http://quality.mozilla.org/docs/litmus/test-case-writing-primer/).
The "Litmus" tool the Mozilla QA team developed is a Web UI over a
database of these test cases with the ability to record the results
reported for each test case as the tester walks through them
(Pass/Fail/Unclear). The idea is that "Fail" reports are red flags, and
"Unclear" reports are requests for an edit of the test case to clarify
steps or expected results. You can see Litmus at
https://litmus.mozilla.org/ - check out "reporting - test runs" on the
left column and drill down through a smoketest report to get a feel.
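
The underlying model is simple enough to sketch in a few lines of
Python - just the concepts described above, and emphatically not
Litmus's actual schema:

  # Concept sketch only -- not Litmus's actual schema.
  RESULTS = ("Pass", "Fail", "Unclear")

  class TestCase(object):
      def __init__(self, title, steps, expected):
          self.title = title        # short name for the test case
          self.steps = steps        # ordered steps for the tester
          self.expected = expected  # the expected result
          self.reports = []         # accumulated (tester, result) pairs

      def record(self, tester, result):
          if result not in RESULTS:
              raise ValueError("result must be one of: "
                               + ", ".join(RESULTS))
          # "Fail" is a red flag; "Unclear" asks for the case to be
          # edited for clarity.
          self.reports.append((tester, result))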

Over time, the Mozilla QA team has been automating these tests (as you
would expect) using tools like MozMill. But human testers have been
critical both for creating the test cases and for running the manual
ones until automation can relieve them of that duty and enable them to
move on to other valuable efforts.
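
For a taste of what the automated end might look like, here's a minimal
sketch of a nightly-style smoketest - using Python's unittest as a
stand-in for MozMill, and pointed at a hypothetical server URL:

  # Sketch only: unittest standing in for MozMill, and a made-up URL.
  import unittest
  from urllib.request import urlopen

  class FrontPageSmokeTest(unittest.TestCase):
      def test_front_page_loads(self):
          # Expected result: the front page answers with HTTP 200.
          response = urlopen("http://demo.example.org/")
          self.assertEqual(response.getcode(), 200)

  if __name__ == "__main__":
      unittest.main()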

So, time to wrap up an overly long email. I would suggest that the
testers first focus on defining test cases for broad coverage of
Evergreen's functionality (cataloguing, circulation, searching, serials,
acquisitions, reports, etc.) - perhaps using
http://evergreen-ils.org/dokuwiki/doku.php?id=qa:eg_test_cases as a
jumping-off point - and then send the results from running through those
test cases to the "RELEASE TEAM" mailing list (and/or record test runs
in the qa: namespace of the wiki) as a significant contribution to the
Release QA activities.
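
For instance, a first circulation test case in that style might look
something like this (the details are invented for illustration):

  Title:    Check out an item to a patron
  Steps:    1. Log in to the staff client as a circulation user.
            2. Open the Check Out interface and retrieve a patron record.
            3. Scan the barcode of an available item.
  Expected: The item appears in the patron's list of current checkouts
            with a due date that matches the circulation policy.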

Further down the road, if there's interest, we could look at adopting
Litmus (or a similar tool) to help corral test cases and results for
different Evergreen versions on different server and client platforms,
and at adopting MozMill (or something like it) to eventually automate
some of those test cases.

What do you think? I hope to heck that I haven't scared you or other
potential testers off by expanding on what's just a single line in the
Release Checklist!

Dan

