[OPEN-ILS-DEV] Integrating automated code analysis tools into regular practice?

Dan Scott denials at gmail.com
Wed Nov 21 22:03:22 EST 2007


Today I was pondering a few possibilities for tightening up our code
in a more automated fashion. I thought I would throw the ideas out
there to see if there's interest or support for them...

1) Static analysis of C and Java code. Coverity
(http://scan.coverity.com/) offers free static analysis scans of open
source C and Java projects to help detect possible defects. See
http://scan.coverity.com/devfaq.html for the requirements for being
added to the program.

2) Perl::Critic for warning about violations of Damian Conway's "best
practices". I ran it tonight, asking it to warn only about the most
severe (level 5) violations, and although there does seem to be some
noise (is explicitly returning undef really one of the most severe
violations in Perl programming? -- see the sketch after this list),
on the whole I can buy into its arguments. We could make it a goal to
gradually work our way down to, say, a clean level 3 run of
Perl::Critic...

3) pychecker or pylint for Python coding practices -- although these
seem somewhat less robust and harder to automate; pychecker actually
imports (and thus partially executes) the modules it checks, and
pylint apparently wants to run as an Eclipse plugin.
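
On the return-undef question from item 2: the severity-5 policy
behind it (Subroutines::ProhibitExplicitReturnUndef, if I'm reading
the output right) does guard against a real trap. A contrived sketch:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Flagged at severity 5: "return undef" becomes a one-element
    # list -- (undef) -- when the sub is called in list context.
    sub bad_lookup {
        my ($id) = @_;
        return undef unless defined $id;
        return "patron-$id";
    }

    # Preferred: a bare "return" yields undef in scalar context and
    # an empty (therefore false) list in list context.
    sub good_lookup {
        my ($id) = @_;
        return unless defined $id;
        return "patron-$id";
    }

    my @bad  = bad_lookup();   # (undef): one element, so @bad is true
    my @good = good_lookup();  # ():      empty, so @good is false
    print scalar(@bad), " vs ", scalar(@good), " elements\n";

So the complaint is less about style than about list-context
surprises; I can live with fixing those.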

For explicit security vulnerability checks, we might consider rats
(http://www.fortifysoftware.com/security-resources/rats.jsp).
Although the signal-to-noise ratio seems relatively low, the output
is worth a look; I believe some of its suggestions would be worth
implementing.
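
To make the kind of finding concrete -- a contrived sketch, not code
from our tree (rats, as I understand it, mostly pattern-matches calls
like system, exec, and backticks):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $barcode = shift @ARGV;           # untrusted input
    die "usage: $0 barcode\n" unless defined $barcode;

    # The kind of call rats flags: untrusted data interpolated into
    # a shell command, where metacharacters like ';' get interpreted.
    system("echo looking up $barcode");

    # Safer: the list form of system() bypasses the shell entirely,
    # passing $barcode through as a single literal argument.
    system('echo', 'looking up', $barcode);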

It would be pretty straightforward to have all of these tools run
against OpenSRF, Evergreen, and Woodchip trunk and throw the output
onto a Web page that gives us a snapshot of our progress, highlighting
any day-to-day differences to point out possible regressions.
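
Something along these lines for the snapshot piece -- a rough sketch
with placeholder paths, showing only the perlcritic run (the other
tools would slot in the same way):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX qw(strftime);

    # Placeholder locations -- adjust to taste.
    my $checkout = '/srv/analysis/Evergreen-trunk';
    my $outdir   = '/srv/analysis/reports';
    my $today    = strftime('%Y-%m-%d', localtime);
    my $report   = "$outdir/perlcritic-$today.txt";

    system("svn update $checkout") == 0
        or die "svn update failed: $?";

    # perlcritic exits non-zero when it finds violations, so we
    # deliberately don't check its exit status here.
    system("perlcritic --severity 5 $checkout > $report 2>&1");

    # Diff against the most recent earlier report so the Web page
    # can highlight day-to-day changes.
    my @earlier = grep { $_ ne $report }
                  sort glob("$outdir/perlcritic-*.txt");
    if (@earlier) {
        system("diff -u $earlier[-1] $report > $outdir/diff-$today.txt");
    }

Cron plus a static HTML index over the reports directory would
probably cover the Web page part.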

-- 
Dan Scott
Laurentian University

