Each night, an automated Python tool checks out the Lucene/Solr trunk source code and runs multiple benchmarks: indexing the entire Wikipedia English export three times (with different settings and document sizes); running a near-real-time latency test; and running a set of "hardish" auto-generated queries and tasks. The full suite takes around 2.5 hours; the results are sanity-checked against the previous night's run and then added to the graphs linked below.
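To make the nightly flow concrete, here is a minimal Python sketch of such a loop. This is not the actual benchmark tool: the repository URL, directory layout, metric names, stub benchmark functions, and the 10% tolerance are all illustrative assumptions.

```python
"""Minimal sketch of a nightly benchmark loop; every name and value
here is a hypothetical stand-in for the real tool."""

import json
import subprocess
from datetime import date
from pathlib import Path

RESULTS_DIR = Path("results")        # hypothetical results layout
CHECKOUT_DIR = Path("lucene-trunk")  # hypothetical checkout location


def checkout_trunk() -> None:
    """Fetch the latest trunk sources (repo URL is illustrative)."""
    if CHECKOUT_DIR.exists():
        subprocess.run(["git", "-C", str(CHECKOUT_DIR), "pull"], check=True)
    else:
        subprocess.run(
            ["git", "clone", "https://github.com/apache/lucene.git",
             str(CHECKOUT_DIR)],
            check=True,
        )


def run_benchmarks() -> dict:
    """Stand-ins for the real indexing / NRT / search tasks;
    the real tool would run them against the fresh checkout."""
    return {
        "index_docs_per_sec": 1000.0,  # dummy value: Wikipedia indexing
        "nrt_reopen_msec": 50.0,       # dummy value: NRT latency test
        "search_qps": 200.0,           # dummy value: query tasks
    }


def verify(current: dict, previous: dict, tolerance: float = 0.10) -> None:
    """Flag any metric that moved more than `tolerance` overnight."""
    for name, value in current.items():
        old = previous.get(name)
        if old and abs(value - old) / old > tolerance:
            print(f"WARNING: {name} changed {value / old - 1:+.1%} overnight")


def main() -> None:
    checkout_trunk()
    results = run_benchmarks()
    RESULTS_DIR.mkdir(exist_ok=True)
    # ISO dates sort lexicographically, so the last file is last night's run.
    runs = sorted(RESULTS_DIR.glob("*.json"))
    if runs:
        verify(results, json.loads(runs[-1].read_text()))
    out = RESULTS_DIR / f"{date.today().isoformat()}.json"
    out.write_text(json.dumps(results, indent=2))


if __name__ == "__main__":
    main()
```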
The goal is to spot any long-term regressions (or gains!) in Lucene's performance that might otherwise accidentally slip past the committers, hopefully avoiding the fate of the boiling frog.
See more details at http://people.apache.org/~mikemccand/lucenebench/