We have been making great progress on extending our native evaluation of SPARQL. Native evaluation can deliver tremendous performance benefits compared to Sesame evaluation. While there are still some corners of the SPARQL Recommendation where we are working toward full native evaluation, enough is in place to run several benchmarks, including BSBM. Until now, BSBM Query 3 was delegated to Sesame because it contains an optional join group, which dragged down overall performance tremendously. All queries with optional join groups now evaluate natively, which boosts performance significantly and means that we can start quoting official numbers for the BSBM benchmark.
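
For readers unfamiliar with the construct, here is a minimal sketch of a query containing an optional join group, in the general style of BSBM Query 3. It is schematic rather than the verbatim benchmark query, and the bsbm: property names are illustrative:

    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX bsbm: <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/vocabulary/>

    SELECT ?product ?label ?featureLabel
    WHERE {
        ?product rdfs:label ?label .
        ?product bsbm:productPropertyNumeric1 ?value .
        FILTER ( ?value > 100 )
        # The OPTIONAL block is a join group in its own right: it must be
        # evaluated as a unit and then left-joined against the required
        # patterns above. Queries containing this construct were previously
        # delegated to Sesame; they now run natively.
        OPTIONAL {
            ?product bsbm:productFeature ?feature .
            ?feature rdfs:label ?featureLabel .
        }
    }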

Without further ado, here are some results for BSBM:

BSBM 100M, cold disk, cold JVM:  7724.14 query mixes per hour
BSBM 100M, hot disk, hot JVM:   12235.66 query mixes per hour
BSBM 200M, cold disk, cold JVM:  7707.27 query mixes per hour
BSBM 200M, hot disk, hot JVM:   10803.86 query mixes per hour

The metric reported by BSBM is query mixes per hour (QMpH); higher is better. Examining the results above, you can see that the hot disk runs have higher throughput, but that the scale of the database (100M versus 200M triples) has very little effect on query performance in these trials. The BSBM reduced query mix excludes Queries 5 and 6. The static optimizer gets a bad join ordering for Query 5, but the new runtime optimizer (not yet in production) produces a query plan which is nearly twice as fast. Query 6 includes a regular expression, and bigdata currently evaluates the REGEX operator as a filter on the solutions, which makes it relatively expensive.
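
To make the Query 6 point concrete, here is a sketch with roughly the shape of that query. It is not the verbatim benchmark text, and "%word1%" stands for a keyword substituted by the test driver:

    PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX bsbm: <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/vocabulary/>

    SELECT ?product ?label
    WHERE {
        ?product rdf:type bsbm:Product .
        ?product rdfs:label ?label .
        # The regular expression cannot be answered from an index: each
        # candidate ?label binding must be materialized from the dictionary
        # and tested against the pattern, one solution at a time.
        FILTER regex(?label, "%word1%")
    }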

The database size for BSBM with 100M triples was 23G (247 bytes per triple) with a load rate of 24,830 triples per second (tps). With 200M triples it was 45G (242 bytes per triple) with a load rate of 13,340 tps. These “bytes per triple” numbers include all bytes in the backing store, including the dictionary (URIs and Literals). BSBM uses some very large literals, which gives it somewhat “fatter” triples than most real-world data sets. This also accounts for the slowdown in the data load between BSBM 100M and BSBM 200M: since bigdata stores keys and values in the B+Tree leaves, these large literals drive IO during writes, and the effect grows with the size of the data set. In the future we plan to address this by using a hash index to store large keys and large values outside of the B+Tree page.
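
For reference, the quoted “bytes per triple” figures follow directly from the store sizes, taking 1G as 2^30 bytes:

\[
\frac{23 \times 2^{30}\ \text{bytes}}{100{,}000{,}000\ \text{triples}} \approx 247\ \text{bytes/triple},
\qquad
\frac{45 \times 2^{30}\ \text{bytes}}{200{,}000{,}000\ \text{triples}} \approx 242\ \text{bytes/triple}.
\]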

The quoted trials used a random seed and 8 clients, with 50 warmup query mixes followed by 500 presentations of the query mix. The database was the bigdata RWStore running on a single machine. These results were obtained against the QUADS_QUERY_BRANCH from SVN r4102. The “cold disk, cold JVM” condition was created by dropping the file system cache and starting a new server process. The “hot disk, hot JVM” condition was created by a second run with a different random seed. The machine is a quad-core AMD Phenom II X4 (8MB cache) at 3GHz running CentOS with 16G of RAM and a striped RAID array of six 15k RPM SAS disks (Seagate Cheetah, 16MB cache, 3.5″). IO utilization was nearly 98% during the cold disk runs and nearly 97% during the hot runs. CPU utilization was 20% during the cold disk runs and 25% during the hot disk runs. The JVM was Oracle Java 1.6.0_17 using “-server -Xmx4g -XX:+UseParallelOldGC”. The Java process size was approximately 1.6G during benchmark runs.

When the BSBM “rampup” policy is followed, the performance of the database is significantly higher:

BSBM 100M reduced query mix after rampup: 28726 QMpH
BSBM 200M reduced query mix after rampup: 28301 QMpH

These numbers represent the performance of the database once it has reached a fully hot state.