Tuesday, October 14, 2014

Sqoop Oracle Example : Getting Started with Oracle -> HDFS import/extract


In this post, we'll get Sqoop (1.99.3) connected to an Oracle database, extracting records to HDFS.

Add Oracle Driver to Sqoop Classpath

The first thing we'll need to do is copy the Oracle JDBC jar file into the Sqoop lib directory.  Note, this directory may not exist; you may need to create it.

For me, this amounted to:
➜  sqoop  mkdir lib
➜  sqoop  cp ~/git/boneill/data-lab/lib/ojdbc6.jar ./lib
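If you want to sanity-check that the jar actually contains the driver class before moving on, something like the following should do it (assuming your jar is named ojdbc6.jar; adjust as needed):

➜  sqoop  jar tf lib/ojdbc6.jar | grep OracleDriver

That should list oracle/jdbc/driver/OracleDriver.class among the results.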

Add YARN and HDFS to Sqoop Classpath

Next, you will need to add the HDFS and YARN jar files to the classpath of Sqoop.  If you recall from the initial setup, the classpath is controlled by the common.loader property in the server/conf/catalina.properties file.  To get things submitting to the YARN cluster properly, I added the following additional paths to the common.loader property:

common.loader=${catalina.base}/lib,${catalina.base}/lib/*.jar,${catalina.home}/lib,${catalina.home}/lib/*.jar,${catalina.home}/../lib/*.jar,/Users/bone/tools/hadoop/share/hadoop/common/*.jar,/Users/bone/tools/hadoop/share/hadoop/yarn/lib/*.jar,/Users/bone/tools/hadoop/share/hadoop/mapreduce/*.jar,/Users/bone/tools/hadoop/share/hadoop/tools/lib/*.jar,/Users/bone/tools/hadoop/share/hadoop/common/lib/*.jar,/Users/bone/tools/hadoop/share/hadoop/hdfs/*.jar,/Users/bone/tools/hadoop/share/hadoop/yarn/*.jar

Note the Hadoop paths that were added on top of the stock Tomcat entries.
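A quick way to catch typos in those paths is to confirm that each of the Hadoop directories actually contains jars. A rough sketch (swap /Users/bone/tools/hadoop for your own install):

for d in common common/lib hdfs mapreduce tools/lib yarn yarn/lib; do
  ls /Users/bone/tools/hadoop/share/hadoop/$d/*.jar > /dev/null || echo "missing jars under: $d"
done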

*IMPORTANT*: Restart your Sqoop server so it picks up the new jar files (including the driver jar!).
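If the server is already running, bouncing it should be as simple as:

bin/sqoop.sh server stop
bin/sqoop.sh server start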

Create JDBC Connection

After that, we can fire up the client, and create a connection with the following:

bin/sqoop.sh client
...
sqoop> create connection --cid 1
Creating connection for connector with id 1
Please fill following values to create new connection object
Name: my_datasource
Connection configuration
JDBC Driver Class: oracle.jdbc.driver.OracleDriver
JDBC Connection String: jdbc:oracle:thin:@change.me:1521:service.name
Username: your.user
Password: ***********
JDBC Connection Properties:
There are currently 0 values in the map:
entry# HIT RETURN HERE!
Security related configuration options
Max connections: 10
New connection was successfully created with validation status FINE and persistent id 1
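To confirm the connection persisted, you can list it from the same client session (I believe "show connection" is the right verb in 1.99.3):

sqoop> show connection --all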

Create Sqoop Job

The next step is to create a job.  This is done with the following:

sqoop> create job --xid 1 --type import
Creating job for connection with id 1
Please fill following values to create new job object
Name: data_import

Database configuration

Schema name: MY_SCHEMA
Table name: MY_TABLE
Table SQL statement:
Table column names:
Partition column name: UID
Nulls in partition column:
Boundary query:

Output configuration

Storage type:
  0 : HDFS
Choose: 0
Output format:
  0 : TEXT_FILE
  1 : SEQUENCE_FILE
Choose: 0
Compression format:
  0 : NONE
...
Choose: 0
Output directory: /user/boneill/dump/

Throttling resources

Extractors:
Loaders:
New job was successfully created with validation status FINE  and persistent id 3

Everything is fairly straightforward. The output directory is the HDFS directory to which the output will be written.
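Before running anything, it doesn't hurt to double-check what got persisted (again, hedged on the exact 1.99.3 syntax):

sqoop> show job --all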

 Run the job!

This was actually the hardest step, because the documentation is out of date (AFAIK).  Instead of using "submission", as the documentation states, use the following:

sqoop> start job --jid 1
Submission details
Job ID: 3
Server URL: http://localhost:12000/sqoop/
Created by: bone
Creation date: 2014-10-14 13:27:57 EDT
Lastly updated by: bone
External ID: job_1413298225396_0001
	http://your_host:8088/proxy/application_1413298225396_0001/
2014-10-14 13:27:57 EDT: BOOTING  - Progress is not available

From there, you should be able to see the job in YARN!
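If you'd rather check from the command line than the ResourceManager UI, the external ID above is a plain YARN application id, so the standard YARN CLI can find it:

yarn application -list
yarn application -status application_1413298225396_0001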

After a bit of churning, you should be able to go over to HDFS and find your files in the output directory.
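For example, to peek at the results:

hdfs dfs -ls /user/boneill/dump/
hdfs dfs -cat /user/boneill/dump/* | head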

Best of luck all.  Let me know if you have any trouble.



Monday, October 13, 2014

Sqoop 1.99.3 w/ Hadoop 2 Installation / Getting Started Craziness (addtowar.sh not found, common.loader, etc.)


We have a ton of data in relational databases that we are looking to migrate onto our Big Data platform.  We took an initial look around and decided Sqoop might be worth a try.  I ran into some trouble getting Sqoop up and running.  Herein lies that story...

The main problem is the documentation (and Google).  It appears as though Sqoop changed install processes between minor dot releases.  Google will likely land you on this documentation:
http://sqoop.apache.org/docs/1.99.1/Installation.html

That documentation mentions a shell script, ./bin/addtowar.sh.  That shell script no longer exists in Sqoop version 1.99.3.  Instead, you should reference this documentation:
http://sqoop.apache.org/docs/1.99.3/Installation.html

In that documentation, they mention the common.loader property in server/conf/catalina.properties.   If you haven't been following the Tomcat scene, that is the new property that allows you to load jar files onto your classpath without dropping them into $TOMCAT/lib or your war file (yuck).

To get Sqoop running, you'll need all of the Hadoop jar files (and their transitive dependencies) on the CLASSPATH when Sqoop/Tomcat starts up.  Unless you add all of the Hadoop jar files to this property, you will end up with any or all of the following CNFE/NCDFE exceptions in your log file (found in server/logs/localhost*.log); see the sketch after the list for hunting down the jar that provides a missing class:

java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory
java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/JobClient
java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
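If you hit one of these and want to figure out which Hadoop jar actually contains the missing class, a rough grep over the install works (swap the path and class name for your own; this is just a sketch):

for f in /Users/bone/tools/hadoop/share/hadoop/*/*.jar /Users/bone/tools/hadoop/share/hadoop/*/lib/*.jar; do
  unzip -l "$f" 2>/dev/null | grep -q "mapred/JobClient.class" && echo "$f"
done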

Through trial and error, I found all of the paths needed for the common.loader property.  I ended up with the following in my catalina.properties:

common.loader=${catalina.base}/lib,${catalina.base}/lib/*.jar,${catalina.home}/lib,${catalina.home}/lib/*.jar,${catalina.home}/../lib/*.jar,/Users/bone/tools/hadoop/share/hadoop/common/*.jar,/Users/bone/tools/hadoop/share/hadoop/yarn/lib/*.jar,/Users/bone/tools/hadoop/share/hadoop/mapreduce/*.jar,/Users/bone/tools/hadoop/share/hadoop/tools/lib/*.jar,/Users/bone/tools/hadoop/share/hadoop/common/lib/*.jar

That got me past all of the classpath issues.  Note that in my case, /Users/bone/tools/hadoop was a complete install of Hadoop 2.4.0.

I also ran into this exception:

Caused by: org.apache.sqoop.common.SqoopException: MAPREDUCE_0002:Failure on submission engine initialization - Invalid Hadoop configuration directory (not a directory or permission issues): /etc/hadoop/conf/

That path has to point to your Hadoop conf directory.   You can find this setting in server/conf/sqoop.properties.  I updated mine to:
org.apache.sqoop.submission.engine.mapreduce.configuration.directory=/Users/bone/tools/hadoop/etc/hadoop
(Again, /Users/bone/tools/hadoop is the directory of my hadoop installation)
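A quick way to confirm you're pointing at the right directory is to check that it holds the usual *-site.xml files:

ls /Users/bone/tools/hadoop/etc/hadoop/*-site.xml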

OK, now you should be good to go!

Start the server with:
bin/sqoop.sh server start
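If it doesn't come up cleanly, the Tomcat logs are the first place to look (paths assume the bundled Tomcat layout):

tail server/logs/catalina.out
tail server/logs/localhost*.log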

Then, the client should work! (as shown below)

bin/sqoop.sh client
...
sqoop:000> set server --host localhost --port 12000 --webapp sqoop
Server is set successfully
sqoop:000> show version --all
client version:
  Sqoop 1.99.3 revision 2404393160301df16a94716a3034e31b03e27b0b
  Compiled by mengweid on Fri Oct 18 14:15:53 EDT 2013
server version:
  Sqoop 1.99.3 revision 2404393160301df16a94716a3034e31b03e27b0b

...

From there, follow this:
http://sqoop.apache.org/docs/1.99.3/Sqoop5MinutesDemo.html

Happy sqooping all.