Friday, August 22, 2014

Working with Hadoop Ecosystem Tools: Exceptions and Solutions

Error & Solution: Hadoop Ecosystem Tools
This is a continuation of the Error & Solution posts on setting up Hadoop HA.

Here I discuss a few errors/issues encountered during Automatic Failover configuration, which is part of the Hadoop HA setup.

Error 1)
Application application_1406816598739_0003 failed 2 times due to Error launching appattempt_1406816598739_0003_000002. Got exception: java.io.IOException: Failed on local exception: java.net.SocketException: Network is unreachable; Host Details : local host is: "localhost/127.0.0.1"; destination host is: "boss":32939;
Solution
No IP address was assigned to the node. Assign one manually, for example:
root@boss[bin]# sudo ifconfig eth0 10.184.36.194
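To make the hostname "boss" resolve to that address as well, an /etc/hosts entry along these lines usually helps (the IP and hostname here are just the ones from this setup; also note that an address set with ifconfig alone is not persistent across reboots):
root@boss[bin]# echo "10.184.36.194   boss" >> /etc/hosts
root@boss[bin]# ping -c 1 boss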


Error 2)

root@solaiv[sqoop]# bin/sqoop import --connect jdbc:mysql://localhost/hadoop --table movies --username root -P --split-by id

Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
Solution
Download "sqoop-1.4.4-hadoop200.jar" and put it in $SQOOP_HOME.
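A rough sketch of that swap, assuming the hadoop200 build was downloaded to /tmp and SQOOP_HOME points to /opt/sqoop-1.4.4 (both paths are assumptions, and the name of the original Hadoop-1 jar may differ in your release):
root@solaiv[sqoop]# mv $SQOOP_HOME/sqoop-1.4.4.jar $SQOOP_HOME/sqoop-1.4.4.jar.hadoop100
root@solaiv[sqoop]# cp /tmp/sqoop-1.4.4-hadoop200.jar $SQOOP_HOME/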


Error 3)
Pig, while executing a Pig statement from the Grunt shell:
grunt> cnt = foreach grpd generate group, count(words) as nos;
Failed to generate logical plan. Nested exception: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve count using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]

Solution
count in the Pig statement should be the uppercase COUNT, since Pig built-in function names are case-sensitive:
grunt> cnt = foreach grpd generate group, COUNT(words) as nos;
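For context, a minimal Grunt session leading up to that statement could look like the following; the input path and field name are assumptions, not part of the original job:
grunt> words = LOAD '/data/words.txt' AS (word:chararray);
grunt> grpd = GROUP words BY word;
grunt> cnt = foreach grpd generate group, COUNT(words) as nos;
grunt> DUMP cnt;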


Error 4)
Using Sqoop: MySQL to Hive
root@boss:/opt/sqoop-1.4.4# bin/sqoop import --connect jdbc:mysql://localhost:3306/hadoop --table csurvey --target-dir /sqoop-hive --hive-import --split-by pname --hive-table csurveysu -username root -P

ERROR security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot delete /tmp/hadoop-yarn/staging/root/.staging/job_1407336913887_0001. Name node is in safe mode. The reported blocks 197 has reached the threshold 0.9990 of total blocks 197. The number of live datanodes 1 has reached the minimum number 2 . Safe mode will be turned off automatically in 3 seconds.
Solution
My Hadoop NameNode was in safe mode, i.e. read-only mode for the HDFS cluster; it cannot accept writes until enough DataNodes are up for replication. If the NameNode stays in safe mode even after enough DataNodes have started, turn safe mode off manually:
root@localhost:/opt/hadoop-2.2.0# bin/hadoop dfsadmin -safemode leave
Then check the HDFS cluster for inconsistencies with:

root@localhost:/opt/hadoop-2.2.0# bin/hadoop fsck /
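Before forcing safe mode off, it may also be worth checking the current state with the same tool (same install path as above):
root@localhost:/opt/hadoop-2.2.0# bin/hadoop dfsadmin -safemode get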


Error 5)
Hive "show tables" does not display the table "sqooptest", which was imported by Sqoop
hive> show tables;
I imported a table from MySQL into Hive using Sqoop. Once it was done I could see the data in the Hadoop File System (HDFS), but when I ran "show tables" from the Hive console, it reported that the table does not exist.

Solution
Just enter the Hive shell from the same directory where you ran the Sqoop import command.
root@solaiv[bin]# cd /opt
root@solaiv[opt]# $SQOOP_HOME/bin/sqoop import --connect jdbc:mysql://localhost:3306/hadoop --table huser --hive-import --split-by name --hive-table sqooptest -username root -P

Execute Hive from /opt; if you start Hive from any directory other than this one, you will not be able to see the table.

root@solaiv[opt]# $HIVE_HOME/bin/hive
hive> show tables;
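To confirm this is the cause, you can look for the Derby files Hive dropped into whichever directory it was launched from; metastore_db and derby.log are the Derby defaults (the /opt path is simply this setup's working directory):
root@solaiv[opt]# ls -d /opt/metastore_db
root@solaiv[opt]# ls /opt/derby.log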


More clarity..
By default, Hive metadata is stored in a Derby database, and Derby writes its data into the current working directory. The user can customize this via "hive-default.xml".
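The usual way to customize it is to copy the template to hive-site.xml under $HIVE_HOME/conf and point the Derby connection URL at an absolute path; a minimal sketch, where /opt/hive-metastore is just an example location:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/opt/hive-metastore/metastore_db;create=true</value>
</property>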
