Friday, May 17, 2013

Cassandra as a Deep Storage Mechanism for Druid Real-Time Analytics Engine!


As I mentioned in previous posts, we've been evaluating real-time analytics engines. Our short list included Vertica, Infobright, and Acunu. You can read about the initial evaluation here.

Fortunately, during that evaluation, I bumped into Eric Tschetter at the phenomenally awesome Philly Emerging Technologies Event (ETE). Eric is the lead architect at MetaMarkets and heads up the Druid project.

From their white paper:
"Druid is an open source, real-time analytical data store that supports fast ad-hoc queries on large-scale data sets. The system combines a column-oriented data layout, a shared-nothing architecture, and an advanced indexing structure to allow for the arbitrary exploration of billion-row tables with sub-second latencies. Druid scales horizontally and is the core engine of the Metamarkets data analytics platform. "
http://static.druid.io/docs/druid.pdf

At a high-level, Druid collects event data into segments via real-time nodes.  The real-time nodes push those segments into deep storage.  Then a master node distributes those segments to compute nodes, which are capable of servicing queries.  A broker node sits in front of everything and distributes queries to the right compute nodes.  (See the diagram)

Out of the box, Druid had support for S3 and HDFS.   That's great, but we are a Cassandra shop. =)

Fortunately, Eric keeps a clean code-base (much like C*). With a little elbow grease, I was able to implement a few interfaces and plug in Cassandra as a deep storage mechanism! From a technical perspective, the integration was fairly straightforward. One interesting challenge was the size of the segments: segments can be gigabytes in size, and storing a blob that large in a single cell in Cassandra would limit the throughput of a write/fetch.
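To give a feel for what "implement a few interfaces" means here, the sketch below shows the general shape of a pluggable deep-storage layer: a push side and a pull side, with the backing store swappable behind them. The interface and class names (SegmentPusher, SegmentPuller, InMemoryDeepStorage) are illustrative stand-ins, not Druid's actual interfaces; a Cassandra-backed implementation would slot in where the in-memory map is.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a deep-storage plug-in's shape.
// The real Druid interfaces differ; this just shows the push/pull split.
interface SegmentPusher {
    void push(String segmentId, byte[] segmentData);
}

interface SegmentPuller {
    byte[] pull(String segmentId); // returns null if the segment is absent
}

// In-memory stand-in for the store; a Cassandra-backed class would
// implement the same two interfaces and talk to the cluster instead.
class InMemoryDeepStorage implements SegmentPusher, SegmentPuller {
    private final Map<String, byte[]> store = new HashMap<>();

    public void push(String segmentId, byte[] segmentData) {
        store.put(segmentId, segmentData);
    }

    public byte[] pull(String segmentId) {
        return store.get(segmentId);
    }
}
```

Because the real-time nodes only ever push and the compute nodes only ever pull, the two halves can be kept as separate interfaces and wired up independently.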

With a bit of googling, I stumbled on Astyanax's Chunked Object storage. Even though we use Astyanax extensively at HMS, we had never needed Chunked Object storage (at HMS, we don't store binary blobs), but it fits the bill perfectly! Using Chunked Object storage, Astyanax multithreads the reads/writes. It also spreads the blob across multiple rows, which means the reads/writes get balanced across the cluster. Astyanax FTW!
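Conceptually, chunked object storage boils down to two operations: split a large blob into fixed-size pieces that each land in their own row, and reassemble them in order on read. The sketch below illustrates that idea in plain Java; it is not Astyanax's actual API (which also handles the Cassandra writes, row keys, and multithreading for you), just the chunk/rejoin logic.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative only: the split/join at the heart of chunked object storage.
// Each chunk would be written under its own row key, so a multi-gigabyte
// segment spreads across the cluster instead of hammering a single cell.
class Chunker {
    static List<byte[]> split(byte[] blob, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < blob.length; off += chunkSize) {
            int end = Math.min(off + chunkSize, blob.length);
            chunks.add(Arrays.copyOfRange(blob, off, end));
        }
        return chunks;
    }

    static byte[] join(List<byte[]> chunks) {
        int total = 0;
        for (byte[] c : chunks) total += c.length;
        byte[] out = new byte[total];
        int off = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, out, off, c.length);
            off += c.length;
        }
        return out;
    }
}
```

Since each chunk is an independent row, reads and writes of different chunks can proceed in parallel, which is where the throughput win over a single giant cell comes from.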

I submitted the integration to the main Druid code-base and it's been merged into master. (tnx fjy!)

Find getting started instructions here:
https://github.com/metamx/druid/tree/master/examples/cassandra

I'm eager to hear feedback. So, please let me know if you run into any issues.
@boneill42


Comments:

sharrissf said...

Great stuff! I know that for you guys, since you already use Cassandra, it was an easy decision to stick with it, but I would be curious to know how its performance compares to other storage systems (HDFS, S3, Riak, etc.). Might be interesting to do a compare.

Rob said...

Thanks for all the great posts. I just had a couple of questions.

What throughput are you getting?
Are you seeing an improvement on the tests you ran?

