Thursday, May 14, 2015

Spark SQL against Cassandra Example


Spark SQL is awesome.  It allows you to query any Resilient Distributed Dataset (RDD) using SQL -- including data stored in Cassandra!

The first thing to do is create a SQLContext from your SparkContext.  I'm using Java, so...
(sorry -- I'm still not hip enough for Scala)
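The conf in the next snippet is an ordinary SparkConf.  A minimal sketch of building one -- with a placeholder app name, master, and Cassandra host -- might look like this:

        // Minimal sketch of the SparkConf; the app name, master, and Cassandra host are placeholders.
        SparkConf conf = new SparkConf()
                .setAppName("spark-sql-cassandra-example")
                .setMaster("local[*]")
                .set("spark.cassandra.connection.host", "127.0.0.1");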

        JavaSparkContext context = new JavaSparkContext(conf);
        JavaSQLContext sqlContext = new JavaSQLContext(context);


Now you have a SQLContext, but you have no data.  Go ahead and create an RDD, just like you would in regular Spark:


        JavaPairRDD<Integer, Product> productsRDD = 
            javaFunctions(context).cassandraTable("test_keyspace", "products",
                productReader).keyBy(new Function<Product, Integer>() {
            @Override
            public Integer call(Product product) throws Exception {
                return product.getId();
            }
        });

(The example above comes from the spark-on-cassandra-quickstart project, as described in my previous post.)
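For reference, Product is just a plain Java bean.  A stripped-down, illustrative sketch -- the real class lives in the quickstart project -- might look something like this:

        // Illustrative sketch of the Product bean; the real class lives in the quickstart project.
        public class Product implements java.io.Serializable {
            private Integer id;
            private Double price;

            public Integer getId() { return id; }
            public void setId(Integer id) { this.id = id; }
            public Double getPrice() { return price; }
            public void setPrice(Double price) { this.price = price; }
        }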

Now that we have a plain vanilla RDD,  we need to spice it up with a schema, and let the sqlContext know about it.  We can do that with the following lines:

        JavaSchemaRDD schemaRDD =   sqlContext.applySchema(productsRDD.values(), Product.class);        
        sqlContext.registerRDDAsTable(schemaRDD, "products");   

Shazam.  Now your sqlContext is ready for querying.  Notice that it inferred the schema from the Java bean (Product.class).  (In the next blog post, I'll show how to do this dynamically.)

You can prime the pump with a:

        System.out.println("Total Records = [" + productsRDD.count() + "]");

The count operation forces Spark to load the data into memory, which makes queries like the following lightning fast:

        JavaSchemaRDD result = sqlContext.sql("SELECT id from products WHERE price < 0.50");
        for (Row row : result.collect()){
            System.out.println(row);
        }
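If you want the column values rather than the Row's toString output, the Java Row API exposes positional getters.  A small sketch (the 0 index assumes id is the first -- and only -- column selected):

        // Sketch: pull the id out of each Row by position (0 = first selected column).
        for (Row row : result.collect()) {
            int id = row.getInt(0);
            System.out.println("id = [" + id + "]");
        }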

That's it.  You're off to the SQL races.


P.S.  If you try querying the sqlContext without applying a schema and/or without registering the RDD as a table, you may see something similar to this:

Exception in thread "main" org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved attributes: 'id, tree:
'Project ['id]
 'Filter ('price < 0.5)
  NoRelation$

Monday, May 11, 2015

Streaming data into HPCC using Java

High Performance Computing Cluster (HPCC) is a distributed processing framework akin to Hadoop, except that it runs programs written in its own Domain Specific Language (DSL) called Enterprise Control Language (ECL).   ECL is great, but occasionally you will want to call out to other languages for heavy lifting.  For example, you may want to leverage an NLP library written in Java.

Additionally, HPCC typically operates against data residing on filesystems akin to HDFS.  And just like with HDFS, once you move beyond log file processing and static data snapshots, you quickly develop a desire for a database backend.

In fact, I'd say this is a general industry trend: HDFS->HBase, S3->Redshift, etc.    Eventually, you want to decrease the latency of analytics (to near zero).  To do this, you set up some sort of distributed database, capable of supporting both batch processing as well as data streaming/micro-batching.  And you adopt an immutable/incremental approach to data storage, which allows you to collapse your infrastructure and stream data into the system as it is being analyzed (simplifying everything in the process).

But I digress.  As a step in that direction...

We can leverage the Java Integration capabilities within HPCC to support User Defined Functions in Java.  Likewise, we can leverage the same facilities to add additional backend storage mechanisms (e.g. Cassandra).  More specifically, let's have a look at the streaming capabilities of HPCC/Java integration to get data out of an external source.

Let's first look at vanilla Java integration.

If you have an HPCC environment set up, the Java integration starts with the /opt/HPCCSystems/classes path.  You can drop classes and jar files into that location, and the functions will be available from ECL.  Follow this page for instructions.

If you run into issues, go through the troubleshooting guide on that page.  The hardest part is getting HPCC to find your classes.  I ran into a nasty JDK version issue: by default, HPCC was picking up an old JDK version on my Ubuntu machine.  Since it was using an old version, HPCC could not find the classes compiled with the "new" JDK (1.7), which resulted in the cryptic message, "Failed to resolve class name".  If you run into this, pull the patch I submitted to fix this for Ubuntu.

Once you have that working, you will be able to call the Java from ECL using the following syntax:

IMPORT java;
integer add1(integer val) := IMPORT(java, 'JavaCat.add1:(I)I');
output(add1(10));
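The Java side of that import is about as small as it gets.  Here is a sketch of JavaCat matching the (I)I signature above (the method body is my guess at what add1 does):

public class JavaCat {
    // Matches the 'JavaCat.add1:(I)I' signature referenced from ECL: takes an int, returns an int.
    public static int add1(int val) {
        return val + 1;
    }
}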

This is pretty neat, and as the documentation suggests, you can return XML from the Java method if the data is complex. But what do you do if you have a TON of data, more than can reside in memory? Well, then you need Java streaming to HPCC. ;)

Instead of returning the actual data from the imported method, we return a Java Iterator.  HPCC then uses the Iterator to construct a dataset.  The following is an example Iterator.


import java.util.Iterator;

public class DataStream implements Iterator<Row> {
    private int position = 0;
    private int size = 5;

    // ECL calls this static method; it simply hands back an Iterator over Row objects.
    public static Iterator<Row> stream(String foo, String bar){
        return new DataStream();
    }

    @Override
    public boolean hasNext() {
        return (position < size);
    }

    @Override
    public Row next() {
        position++;
        return new Row("row");
    }

    @Override
    public void remove() {
    }

}

This is a standard Iterator, but notice that it returns a Row object, which is defined as follows:


public class Row {
    private String value;

    public Row(String value){
       this.value = value;
    }
}

The object is a Java bean.  HPCC reads the member variables off each object as it maps them into the DATASET.  To see exactly how this happens, let's look at the ECL code:

IMPORT java;
rowrec := record
  string value;
end;
DATASET(rowrec) stream() := IMPORT(java, 'DataStream.stream:(Ljava/lang/String;Ljava/lang/String;)Ljava/util/Iterator;');
output(stream());

After the import statement, we define a type of record called rowrec.  On the following line, we import the UDF and type the result as a DATASET that contains rowrecs.  The names of the fields in rowrec must match the names of the member variables on the Java bean.  HPCC will use the iterator and populate the dataset with the objects returned from the next() method.  The final line of the ECL outputs the results.
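To make the field matching concrete, here is a hypothetical two-field version of the bean; the matching ECL record would simply declare a string name; field alongside string value;:

// Hypothetical two-field bean; the matching ECL record would be:
//   rowrec := record string value; string name; end;
public class Row {
    private String value;
    private String name;

    public Row(String value, String name) {
        this.value = value;
        this.name = name;
    }
}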

I've committed all of the above code to a GitHub repository with some instructions on getting it running.  Have fun.

Stay tuned for more...
Imagine combining the Java streaming capabilities outlined here with the ability to stream data out of Cassandra as detailed in my previous post.  The result is a powerful means of running batch analytics using Thor against data stored in Cassandra (with data locality!)...  (possibly enabling ECL jobs against data ingested via live real-time event streams! =)



Data Locality w/ Cassandra : How to scan the local token range of a table...


I'm working on a mechanism that will allow HPCC to access data stored in Cassandra with data locality, leveraging the Java streaming capabilities from HPCC (more on this in a followup post). More specifically, we want to allow people to write functions in ECL that will execute on all nodes in an HPCC cluster, using collocated Cassandra instances as the source of their data.

To do this, however, we need a couple of things.   If you remember, Cassandra data is spread across multiple nodes using a token ring.  Each host is assigned one or more slices of that token ring.  Each slice has a token range, with a start and an end.  Each partition/row key hashes into a token, which determines which node gets that data.  (Typically, Murmur3 is used as the hashing algorithm.)

Thus, to scan over the data that is local to a node, we need to determine the token ranges for our local node, then we need to page over the data in that range.

First, let's look at how we determine the ranges for the local node.  You could query the system tables directly (SELECT * FROM system.local), but if you are using a connection pool (via the Java driver), it is unclear which node will receive that query.  You could also query the information from the system.peers table using your IP address in the WHERE clause, but you may not want to configure that IP address on each node.  Instead, I was able to lean on the CQL java-driver to determine the local host:

    public Host getLocalHost(Metadata metadata, LoadBalancingPolicy policy) {
        Set<Host> allHosts = metadata.getAllHosts();
        Host localHost = null;
        for (Host host : allHosts) {
            // The load balancing policy reports the collocated node as HostDistance.LOCAL.
            if (policy.distance(host) == HostDistance.LOCAL) {
                localHost = host;
                break;
            }
        }
        return localHost;
    }

With a Host in hand, the java-driver makes it easy to get the Token Ranges:

        Cluster cluster = Cluster.builder().addContactPoints(host).withPort(port)
                .withLoadBalancingPolicy(policy).build();
        Metadata metadata = cluster.getMetadata();
        Host localhost = getLocalHost(metadata, policy);
        tokenRanges = unwrapTokenRanges(metadata.getTokenRanges(keyspace, localhost)).toArray(new TokenRange[0]);


The code is very straightforward, with the exception of the call to unwrapTokenRanges.   When you ask the driver for the token ranges for a host, it gives you TokenRange objects, but CQL queries cannot handle wrapped ranges.  For example, let's assume we had a global token space of [-16...16].  Our host may have token ranges of [-10...-3], [12...14] and [15...-2].  You can issue the following CQL queries (notice that token ranges are start-exclusive and end-inclusive):

SELECT token(id), id, name FROM test_table WHERE token(id)>-10 AND token(id)<=-3;
SELECT token(id), id, name FROM test_table WHERE token(id)>12 AND token(id)<=14;

However, you CANNOT issue the following CQL:

SELECT token(id), id, name FROM test_table WHERE token(id)>15 AND token(id)<=-2;

That range wraps around.  To accommodate this, the java-driver provides a convenience method called unwrap().  You can use that method to create a set of token ranges that are usable in CQL queries and account for the wrapping.


    Set<TokenRange> unwrapTokenRanges(Set<TokenRange> wrappedRanges) {
        HashSet<TokenRange> tokenRanges = new HashSet<TokenRange>();
        for (TokenRange tokenRange : wrappedRanges) {
            tokenRanges.addAll(tokenRange.unwrap());
        }
        return tokenRanges;
    }

Finally, we need to be able to page over the data.   Fortunately, with the 2.0 release of the java-driver, we are able to do this in a few lines of code:

     session = cluster.connect();
     Statement stmt = QueryBuilder.select(columns.split(",")).from(table)
                        .where(gt(token(partitionKey), range.getStart().getValue()))
                        .and(lte(token(partitionKey), range.getEnd().getValue()));
     stmt.setFetchSize(pageSize);
     resultSet = session.execute(stmt);
     iterator = resultSet.iterator();
     while (!resultSet.isFullyFetched()) {
       resultSet.fetchMoreResults();
       Row row = iterator.next();
       System.out.println(row);
     }

The above code issues a select statement and pages over the results, scanning the portion of the table specified within the token range.

If we throw a loop on top of all of this to go through each token range and scan it, we have a means of executing a distributed processing job that uses only the local portions of the Cassandra tables as input.
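A rough sketch of that outer loop, where scanRange is hypothetical shorthand for the select-and-page snippet above:

     // Sketch only: scan each local (unwrapped) token range in turn.
     // scanRange(...) is hypothetical shorthand for the paging code shown above.
     void scanLocalRanges(Session session, TokenRange[] tokenRanges) {
         for (TokenRange range : tokenRanges) {
             scanRange(session, range);
         }
     }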

Stay tuned for my next post, when I show how to plug this into HPCC.