Additionally, HPCC typically operates against data residing on filesystems akin to HDFS. And just like with HDFS, once you move beyond log file processing and static data snapshots, you quickly develop a desire for a database backend.
In fact, I'd say this is a general industry trend: HDFS -> HBase, S3 -> Redshift, etc. Eventually, you want to decrease the latency of analytics (to near zero). To do this, you set up some sort of distributed database capable of supporting both batch processing and data streaming/micro-batching. And you adopt an immutable/incremental approach to data storage, which allows you to collapse your infrastructure and stream data into the system as it is being analyzed (simplifying everything in the process).
But I digress. As a step in that direction...
We can leverage the Java Integration capabilities within HPCC to support User Defined Functions in Java. Likewise, we can leverage the same facilities to add additional backend storage mechanisms (e.g. Cassandra). More specifically, let's have a look at the streaming capabilities of HPCC/Java integration to get data out of an external source.
Let's first look at vanilla Java integration.
If you have an HPCC environment set up, the Java integration starts with the /opt/HPCCSystems/classes path. You can drop class and jar files into that location, and the functions will be available from ECL. Follow this page for instructions.
If you run into issues, go through the troubleshooting guide on that page. The hardest part is getting HPCC to find your classes. For me, it was a nasty JDK version issue: by default, HPCC was picking up an old JDK on my Ubuntu machine, so it could not find classes compiled with the "new" JDK (1.7), which resulted in the cryptic message, "Failed to resolve class name". If you run into this, pull the patch I submitted to fix this for Ubuntu.
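Once the classes resolve, a trivial round trip looks like this. Here is a minimal sketch of what the Java side might look like (the class name JavaCat and the (I)I signature come from the ECL below; the method body itself is my own assumption):

// JavaCat.java: compile and drop the class into /opt/HPCCSystems/classes
public class JavaCat {

    // Matches the ECL import signature 'JavaCat.add1:(I)I' (takes an int, returns an int).
    public static int add1(int val) {
        return val + 1;
    }
}

And the ECL that calls it: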
IMPORT java;
integer add1(integer val) := IMPORT(java, 'JavaCat.add1:(I)I');
output(add1(10));
This is pretty neat, and as the documentation suggests, you can return XML from the Java method if the data is complex. But what do you do if you have a TON of data, more than can reside in memory? Well, then you need Java streaming to HPCC. ;)
Instead of returning the actual data from the imported method, we return a Java Iterator. HPCC then uses the Iterator to construct a dataset. The following is an example Iterator.
import java.util.Iterator;

public class DataStream implements Iterator<Row> {

    private int position = 0;
    private int size = 5;

    // ECL imports this static factory method; HPCC then pulls rows from the returned Iterator.
    public static Iterator<Row> stream(String foo, String bar) {
        return new DataStream();
    }

    @Override
    public boolean hasNext() {
        return position < size;
    }

    @Override
    public Row next() {
        position++;
        return new Row("row");
    }

    @Override
    public void remove() {
        // Read-only iterator; HPCC never removes elements.
    }
}
This is a standard Iterator, but notice that it returns a Row object, which is defined as this:
public class Row {

    // The field name must match the field name in the ECL record definition (rowrec below).
    private String value;

    public Row(String value) {
        this.value = value;
    }
}
The object is a plain Java bean. HPCC reads its member variables as it maps each Row into the DATASET. To see exactly how this happens, let's look at the ECL code:
IMPORT java;
rowrec := record string value; end;
DATASET(rowrec) stream() := IMPORT(java, 'DataStream.stream:(Ljava/lang/String;Ljava/lang/String;)Ljava/util/Iterator;');
output(stream());
After the IMPORT statement, we define a record type called rowrec. On the following line, we import the UDF and type its result as a DATASET of rowrec. The names of the fields in rowrec must match the names of the member variables on the Java bean. HPCC walks the iterator and populates the dataset with the objects returned from next(). The final line of the ECL outputs the results.
I've committed all of the above code to a github repository with some instructions on getting it running. Have fun.
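In practice, you'd swap the toy DataStream for an iterator that lazily pulls rows from a real source; that is what makes the "more data than fits in memory" case work. Here is a hypothetical sketch of mine (the class name LineStream and its file-reading behavior are invented for illustration; it reuses the Row bean from above):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Iterator;

public class LineStream implements Iterator<Row> {

    private BufferedReader reader;
    private String nextLine;

    // ECL would import this factory the same way it imports DataStream.stream.
    public static Iterator<Row> stream(String path, String unused) {
        return new LineStream(path);
    }

    private LineStream(String path) {
        try {
            reader = new BufferedReader(new FileReader(path));
            nextLine = reader.readLine();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public boolean hasNext() {
        return nextLine != null;
    }

    @Override
    public Row next() {
        String current = nextLine;
        try {
            // Read lazily, one line at a time, so only the current line is in memory.
            nextLine = reader.readLine();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return new Row(current);
    }

    @Override
    public void remove() {
        // Read-only iterator; HPCC never removes elements.
    }
}

Only the current line is held in memory; HPCC drives the iteration and builds the dataset as rows are produced.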
Stay tuned for more...
Imagine combining the Java streaming capabilities outlined here with the ability to stream data out of Cassandra as detailed in my previous post. The result is a powerful means of running batch analytics using Thor against data stored in Cassandra (with data locality!)... (possibly enabling ECL jobs against data ingested via live real-time event streams! =)