Google’s bigtable architecture gives you scale-out, schema-flexible, ordered data with high write concurrency. This is great, but is it what you need?
RDF provides not only schema flexibility but also the ability to dynamically federate and align data from a variety of sources (ad hoc reuse of data), together with high-level query (SPARQL). In addition, the bigdata RDF database supports datum-level provenance (woefully missing in the RDF stack). That sounds good, but it comes with a price.
When choosing a data model, there are tradeoffs between write concurrency and expressivity. In order to get the benefits of RDF, you need to maintain multiple indices over the data (efficient access paths for JOINs), and you need to accept lower write concurrency in order to pre-compute aspects of the entailments (a tradeoff between the costs borne by writers and by readers).
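To make the multiple-index point concrete, here is a minimal sketch in plain Python (not bigdata's actual index layout, which may differ) of the common approach of keeping the same triples in several key orders, e.g. SPO, POS, and OSP, so that any triple pattern becomes a prefix (key-range) scan, and of why a single logical assertion then fans out into several index writes:

```python
from bisect import bisect_left, insort

class TripleIndices:
    """Illustrative only: three sorted orderings of the same triples."""
    def __init__(self):
        self.spo, self.pos, self.osp = [], [], []

    def add(self, s, p, o):
        # One logical assertion costs three index writes, which is part of
        # the write-concurrency price discussed above.
        insort(self.spo, (s, p, o))
        insort(self.pos, (p, o, s))
        insort(self.osp, (o, s, p))

    def scan(self, index, prefix):
        # Key-range scan: every tuple that starts with `prefix`.
        i = bisect_left(index, prefix)
        while i < len(index) and index[i][:len(prefix)] == prefix:
            yield index[i]
            i += 1

db = TripleIndices()
db.add("ex:alice", "foaf:knows", "ex:bob")
db.add("ex:alice", "foaf:name", '"Alice"')

# (s ? ?): all properties of a subject -> prefix scan on SPO
print(list(db.scan(db.spo, ("ex:alice",))))
# (? p o): who knows ex:bob -> prefix scan on POS
print(list(db.scan(db.pos, ("foaf:knows", "ex:bob"))))
```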
Reads are performed against historical commit points, so you have full read concurrency. Reads can be extremely efficient as well: an access-path scan for the properties of some subject is every bit as efficient as a read on a sparse row store like bigtable or HBase. However, since SPARQL supports conjunctive query, reads can also be more interesting, involving multiple JOINs over the data.
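As an illustration of what a conjunctive query looks like, here is a small example using rdflib (a Python RDF library, not bigdata itself): each triple pattern is an access-path scan, and the shared variables are resolved by JOINs.

```python
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex:   <http://example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

ex:alice foaf:name "Alice" ; foaf:knows ex:bob .
ex:bob   foaf:name "Bob"   ; ex:worksFor ex:acme .
""", format="turtle")

# Three triple patterns, joined on ?person and ?friend.
q = """
PREFIX ex:   <http://example.org/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?employer WHERE {
    ?person foaf:name   ?name .
    ?person foaf:knows  ?friend .
    ?friend ex:worksFor ?employer .
}
"""
for name, employer in g.query(q):
    print(name, employer)   # Alice http://example.org/acme
```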
RDF has write concurrency only slightly worse than bigtable if you choose your read-behind commit points to correspond to coherent database states. However, maintaining entailments using eager closure imposes strong write-concurrency limits on an RDF database. The basic problem is that closure must be computed against a consistent database state, so concurrent closure computations are not allowed. There are several ways to handle this. One is to use query-time inference, e.g., magic sets, in which case all costs are paid when queries are answered. Another is to collect and batch writes, which works well with a map/reduce style concurrent data loader and for sites with high-volume updates that can accept some delay between the time when the data are submitted and the time when the closure of the database is updated.
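For a feel of what eager closure means in practice, here is a small sketch using rdflib with the owlrl package (again, not bigdata's own closure machinery): the entailed triples are materialized up front, so readers pay nothing at query time, but the closure has to be recomputed or incrementally maintained whenever the data change.

```python
from rdflib import Graph
import owlrl  # assumption: the owlrl package is installed

g = Graph()
g.parse(data="""
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Employee rdfs:subClassOf ex:Person .
ex:alice    a ex:Employee .
""", format="turtle")

before = len(g)
# Eager (forward-chaining) RDFS closure, computed against a consistent state.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)
print(len(g) - before, "entailed triples materialized")
# e.g. (ex:alice rdf:type ex:Person) is now a stored, directly queryable triple.
```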
The most interesting aspect of an RDF database is using owl:equivalentClass, owl:equivalentProperty, and owl:sameAs to align classes and properties from your ontology (aka schema) and instances from your data (aka rows). These predicates provide a declarative mechanism to semantically align federated datasets, making RDF perfect for ad hoc reuse and repurposing of data, what I think of as RDF mashups.
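Here is a hedged sketch of that kind of alignment, again using rdflib and owlrl rather than bigdata: two independently modelled datasets are joined up with owl:sameAs and owl:equivalentProperty, after which either vocabulary reaches both sets of data.

```python
from rdflib import Graph
import owlrl  # assumption: the owlrl package is installed

g = Graph()
g.parse(data="""
@prefix ex1: <http://example.org/hr/> .
@prefix ex2: <http://example.org/crm/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# Dataset 1 (HR)
ex1:emp42 ex1:fullName "Alice Smith" .

# Dataset 2 (CRM), with different identifiers and property names
ex2:cust7 ex2:name "Alice Smith" ; ex2:phone "555-0100" .

# Declarative alignment of the two datasets
ex1:emp42    owl:sameAs             ex2:cust7 .
ex1:fullName owl:equivalentProperty ex2:name .
""", format="turtle")

owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

# After closure, HR's vocabulary reaches CRM's data (and vice versa).
q = """
PREFIX ex1: <http://example.org/hr/>
PREFIX ex2: <http://example.org/crm/>
SELECT ?who ?phone WHERE { ?who ex1:fullName ?name ; ex2:phone ?phone . }
"""
for who, phone in g.query(q):
    print(who, phone)
```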
Many open source projects dealing with cloud computing are knockoffs of systems that Google has published on (this also appears to be true of commercial businesses trying to compete in the cloud-as-a-service space). To my mind, this lack of innovation is problematic. The systems that Google developed (GFS, map/reduce, and bigtable) are fairly specialized and are designed for specific problem classes.
However, I did not see much awareness among the potential users of these systems of the tradeoffs that Google considered and accepted in their development. For example, map/reduce is suited to non-transactional, high-latency processing where there is good locality in the inputs, while Google’s bigtable (a schema-flexible row store) is suited to high-concurrency writers (hence the limited ACID guarantee covering only a single “logical” row of data) and ordered key-range scans, with no secondary indices, no JOINs, and no high-level query. Nevertheless, people seem to feel that these are general-purpose tools that will let them scale out without further reflection on their data processing problem.
While these systems are gaining popularity right now, I foresee some failed expectations in the next 2-3 years precisely because they are being applied without much consideration of their “fit” for a given problem. This reminds me of the time I recoded a machine learning algorithm written in C and running on a PC to execute in Fortran on an IBM 3090. We got better performance on the PC.
It is a fair question to ask how bigdata differs. After all, bigdata is at its core a scale-out B+Tree architecture, which is the same architecture underlying Google’s bigtable. One way in which bigdata is different is that features such as the distributed file system are built on top of the database architecture rather than the other way around. This extends the same ACID contract for batch updates on an index partition to the distributed file system, the flexible row store, the RDF database, etc. Another benefit of this approach is that we get ACID operations, including file append, on file blocks (chunks of up to 64MB each) for “free” from the underlying database semantics.
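As a purely hypothetical sketch of the “file system on top of the index” idea (plain Python, not bigdata's API), think of file blocks as index entries keyed by (fileId, blockOffset): an append is then just one more key-range-local index write, and it inherits whatever ACID contract the index's batch updates already provide.

```python
from bisect import bisect_left, insort

BLOCK_SIZE = 64 * 1024 * 1024  # blocks of up to 64MB, as in the text

class FileStoreOnIndex:
    """Hypothetical illustration only; not bigdata's actual API."""
    def __init__(self):
        self.keys = []     # sorted list of (file_id, block_offset)
        self.blocks = {}   # key -> block bytes

    def append_block(self, file_id, data):
        assert len(data) <= BLOCK_SIZE
        # Scan the file's key range to find its current last block.
        i = bisect_left(self.keys, (file_id, 0))
        last = None
        while i < len(self.keys) and self.keys[i][0] == file_id:
            last = self.keys[i]
            i += 1
        offset = 0 if last is None else last[1] + BLOCK_SIZE
        key = (file_id, offset)
        # The append itself is a single, key-range-local index write; in a
        # real system this is the operation covered by the ACID batch-update
        # contract on the index partition.
        insort(self.keys, key)
        self.blocks[key] = data
        return offset

    def read(self, file_id):
        # Reading a file back is an ordered key-range scan over its blocks.
        i = bisect_left(self.keys, (file_id, 0))
        out = []
        while i < len(self.keys) and self.keys[i][0] == file_id:
            out.append(self.blocks[self.keys[i]])
            i += 1
        return b"".join(out)

fs = FileStoreOnIndex()
fs.append_block("path/to/file", b"hello ")
fs.append_block("path/to/file", b"world")
print(fs.read("path/to/file"))  # b'hello world'
```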
However, the broader difference is that bigdata is designed to allow for the construction of a variety of data models, each suitable to a different problem space. For example, there is a sparse row store (akin to Google’s bigtable, HBase, or Hypertable), but there are also other data models, including an RDF database with high-level query and inference, an object database, and the distributed file system as an application of the basic scale-out B+Tree system rather than a substrate for it. While we have not pursued a relational data model, one could. Scale-out can be applied in all of these contexts.
There are tradeoffs in the design space. Typically, you find that you have to trade off (potential) writer concurrency against model complexity. This is because, in the more complex models, a writer can have non-local effects (the effects are not restricted to a single key-range index partition). The tradeoffs you accept depend on your requirements. Google’s bigtable is an example of one such tradeoff: it accepts limited isolation (a single logical row at a time) in exchange for very high read/write concurrency.
Bryan Thompson presented on “Cloud Computing with bigdata” at OSCON 2008. The presentation gave an overview of cloud computing, the bigdata architecture and how it relates to similar systems, why RDF is an interesting technology (a fluid schema plus declarative schema and instance alignment make for great mashups and facilitate data reuse), some of the additional challenges that need to be addressed for scale-out on RDF (secondary indices and distributed JOINs to support high-level query), and some of the work we have done on scale-out for RDF processing, including statement-level provenance and combining map/reduce processing with indexed data. See http://bigdata.sourceforge.net/pubs/bigdata-oscon-7-23-08.pdf for a copy of the slides.
bigdata(R) is a scale-out storage and computing fabric supporting optional transactions, very high concurrency, and very high aggregate IO rates. For more information, see https://sourceforge.net/projects/bigdata/, http://bigdata.sourceforge.net/docs/, and http://www.bigdata.com.