Wednesday, May 22, 2013

Big Data Overview and Cassandra Plunge at Philly JUG


Thanks everyone for coming out last night.  We plowed through a lot of material.

I posted the slides here:
http://www.slideshare.net/boneill42/big-data-phillyjug

Please feel free to ping me directly if you have questions. (@boneill42)

Monday, May 20, 2013

C* and the (Big Data Quadfecta)++ @ Philly JUG tomorrow (5/21)


I"m looking forward to presenting on our Big Data platform at the Philly JUG tomorrow.   I hope to give a high-level overview of our use case, with a deep dive into Cassandra, and an architectural overview of our Big Data Quadfecta.

I may even touch on the more recent Storm + C* + Druid integration I have in a proof-of-concept.

What comes after quadfecta anyway? =)

Friday, May 17, 2013

Cassandra as a Deep Storage Mechanism for Druid Real-Time Analytics Engine!


As I mentioned in previous posts, we've been evaluating real-time analytics engines.  Our short list included Vertica, Infobright, and Acunu.  You can read about the initial evaluation here.

Fortunately, during that evaluation, I bumped into Eric Tschetter at the phenomenally awesome Philly Emerging Technologies Event (ETE).  Eric is the lead architect at Metamarkets and heads up the Druid project.

From their white paper:
"Druid is an open source, real-time analytical data store that supports fast ad-hoc queries on large-scale data sets. The system combines a column-oriented data layout, a shared-nothing architecture, and an advanced indexing structure to allow for the arbitrary exploration of billion-row tables with sub-second latencies. Druid scales horizontally and is the core engine of the Metamarkets data analytics platform. "
http://static.druid.io/docs/druid.pdf

At a high level, Druid collects event data into segments via real-time nodes.  The real-time nodes push those segments into deep storage.  A master node then distributes those segments to compute nodes, which service the queries.  A broker node sits in front of everything and routes queries to the right compute nodes.  (See the diagram)

Out of the box, Druid had support for S3 and HDFS.   That's great, but we are a Cassandra shop. =)

Fortunately, Eric keeps a clean code-base (much like C*).  With a little elbow grease, I was able to implement a few interfaces and plug in Cassandra as a deep storage mechanism!  From a technical perspective, the integration was fairly straightforward.  One interesting challenge was the size of the segments: segments can be gigabytes in size, and storing a blob that large in a single cell in Cassandra would limit the throughput of writes and fetches.
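
For the curious, here's the rough shape of the plugin.  This is a sketch only: the interface and method names approximate Druid's segment-loading SPI as it stood at the time, and zip/unzip/chunkedWrite/chunkedRead are hypothetical helpers standing in for the Astyanax calls shown below.

    // Sketch: interface/method names approximate Druid's segment-loading SPI
    // of the time; zip/unzip/chunkedWrite/chunkedRead are hypothetical helpers.
    public class CassandraDataSegmentPusher implements DataSegmentPusher {
      @Override
      public DataSegment push(File segmentDir, DataSegment segment) throws IOException {
        File zipped = zip(segmentDir);                  // a segment is pushed as a zipped directory
        chunkedWrite(segment.getIdentifier(), zipped);  // stream the blob into Cassandra in chunks
        return segment;                                 // real code also records size/load-spec metadata
      }
    }

    public class CassandraDataSegmentPuller implements DataSegmentPuller {
      @Override
      public void getSegmentFiles(DataSegment segment, File outDir) throws IOException {
        File zipped = chunkedRead(segment.getIdentifier()); // reassemble the chunks into one file
        unzip(zipped, outDir);                               // unpack it where the compute node expects
      }
    }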

With a bit of googling, I stumbled on Astyanax's Chunked Object storage.  Even though we use Astyanax extensively at HMS, we had never needed Chunked Object storage.  (At HMS, we don't store binary blobs.)  But it fits the bill perfectly!  Astyanax multithreads the reads and writes, and it spreads the blob across multiple rows, which means each read/write gets balanced across the cluster.  Astyanax FTW!
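
Here's roughly what those reads and writes look like with the Astyanax recipe.  A minimal sketch, assuming an existing Keyspace and a column family for the chunk rows; the key names, chunk size, and concurrency values below are arbitrary stand-ins:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;

    import com.netflix.astyanax.Keyspace;
    import com.netflix.astyanax.recipes.storage.CassandraChunkedStorageProvider;
    import com.netflix.astyanax.recipes.storage.ChunkedStorage;
    import com.netflix.astyanax.recipes.storage.ChunkedStorageProvider;
    import com.netflix.astyanax.recipes.storage.ObjectMetadata;

    public class SegmentBlobStore {
      private final ChunkedStorageProvider provider;

      public SegmentBlobStore(Keyspace keyspace) {
        // "segment_chunks" is a hypothetical column family holding the chunk rows
        this.provider = new CassandraChunkedStorageProvider(keyspace, "segment_chunks");
      }

      public void write(String key, File blob) throws Exception {
        // The blob is split into chunks, each stored under its own row key,
        // and the chunk writes run on multiple threads.
        ObjectMetadata meta = ChunkedStorage.newWriter(provider, key, new FileInputStream(blob))
            .withChunkSize(0x40000)     // 256KB chunks (arbitrary)
            .withConcurrencyLevel(4)    // parallel chunk writes
            .call();                    // meta records the object size and chunk count
      }

      public void read(String key, File dest) throws Exception {
        // Chunks are fetched in parallel and reassembled into the output stream.
        ChunkedStorage.newReader(provider, key, new FileOutputStream(dest))
            .withBatchSize(10)          // chunks per fetch batch
            .withConcurrencyLevel(4)    // parallel chunk reads
            .call();
      }
    }

Because each chunk lands under a different row key, the partitioner scatters the blob across the ring, which is exactly the balancing behavior described above.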

I submitted the integration to the main Druid code-base and it's been merged into master. (tnx fjy!)

Find getting started instructions here:
https://github.com/metamx/druid/tree/master/examples/cassandra

I'm eager to hear feedback.  So, please let me know if you run into any issues.
@boneill42