[jdom-interest] Performance regressions in JDOM

Vojtech Horky horky at d3s.mff.cuni.cz
Tue Jun 9 23:32:14 PDT 2015

Hello all.

TL;DR version of this e-mail is: at our research group we ran some tests 
scanning for performance regressions in JDOM. We mostly focused on 
finding out whether assumptions stated in commit messages (such as "it 
should be faster and consume fewer resources") were actually met and 
whether it is possible to capture them in an automatically testable way.

Now, we would like to know whether someone would be interested in the 
results; whether we should polish the tests so they can be moved to 
contrib/ or (better) whether someone would like to push this further.

If you are interested in this, here is the full version.

Our aim is to allow developers to create performance unit tests, i.e. 
something to test the performance of individual methods. For example, if 
you (think that you have) improved the performance of a certain method, 
it would be nice if you could test that. Of course, you can run tests 
manually, but that is time-consuming and rather inconvenient in general. 
So our tool (approach) allows you to capture such an assumption in a 
Java annotation, and our program - still a prototype, but working pretty 
well - scans these annotations and runs the tests automatically, 
reporting violations. In this sense, it is the performance equivalent of 
a unit-test assertion.

As a simple example, if we want to capture an assumption stating that 
the SAXBuilder.build() method is faster in version (commit) 6a49ef6 than 
in 4e27535, we would put the following string in the annotation:

SAXBuilder#build @ 6a49ef6 < SAXBuilder#build @ 4e27535

and the tool would handle the rest. Well, more or less.
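To make the idea concrete, here is a minimal sketch of what such an annotation could look like in plain Java. The annotation type, its name, and the method name are hypothetical illustrations (the real SPL annotation API may differ); only the assumption string itself comes from the example above.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class PerformanceAssumptionSketch {

    // Hypothetical annotation capturing a performance assumption.
    // A scanning tool would read the formula string at runtime and
    // run the cross-version comparison itself.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface PerformanceAssumption {
        String value();
    }

    // The assumption from the example above, attached to a test method.
    @PerformanceAssumption("SAXBuilder#build @ 6a49ef6 < SAXBuilder#build @ 4e27535")
    public void buildShouldBeFasterAfter6a49ef6() {
        // The body would exercise SAXBuilder.build() on a sample document.
    }

    public static void main(String[] args) throws Exception {
        // Show that the assumption is discoverable via reflection,
        // which is all a scanner needs to collect and evaluate it.
        PerformanceAssumption a = PerformanceAssumptionSketch.class
                .getMethod("buildShouldBeFasterAfter6a49ef6")
                .getAnnotation(PerformanceAssumption.class);
        System.out.println(a.value());
    }
}
```

The point of the runtime-retained annotation is that the tool can harvest assumptions from compiled test classes without any extra registration step.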

Regarding JDOM, we went through its commits and identified almost 50 of 
them that we found interesting. Interesting commits were those whose 
messages claimed a performance improvement or described a refactoring. 
We measured mostly SAXBuilder.build() and several methods from the 
Verifier class.
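For context, measuring such a method by hand boils down to something like the naive timing harness below. This is a deliberately simplified sketch of my own (no warm-up separation, no statistics, no checkout of the two commits), which is exactly the tedium the annotation-driven tool is meant to replace; in the JDOM case the Runnable would wrap a SAXBuilder.build() or Verifier call.

```java
public class NaiveBenchmark {

    // Time an operation over several iterations and return the average
    // wall-clock nanoseconds per call. Naive on purpose: no JIT warm-up
    // handling, no outlier rejection -- doing this properly for every
    // "should be faster" commit message is what makes manual
    // performance testing so inconvenient.
    static long averageNanos(Runnable op, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            op.run();
        }
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        // Stand-in workload; a real comparison would run the same
        // measurement against two different JDOM versions.
        StringBuilder sink = new StringBuilder();
        long avg = averageNanos(() -> sink.append('x'), 100_000);
        System.out.println("avg ns/op: " + avg);
    }
}
```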

In the end, we found that we were able to confirm a lot of the 
assumptions about performance, but there were also cases where the 
assumptions were not met, i.e. the developer thought that the commit 
improved performance while the opposite was true.

We published our results in a paper [1] (PDF available as well [2]); 
detailed results are available on-line [3].

Right now, the tests themselves are in a separate repository [4] and the 
setup is rather complicated. However, if someone finds this interesting 
and potentially useful, we would gladly refactor the tests to fit the 
contrib/ structure and prepare a fork to be merged.

- Vojtech Horky

[1] http://dx.doi.org/10.1007/978-3-642-40725-3_12
[2] http://d3s.mff.cuni.cz/~horky/papers/epew2013.pdf
[3] http://d3s.mff.cuni.cz/software/spl/#jdom-case-study
[4] https://sf.net/p/spl-tools/casestudy/ci/master/tree/
