Job fails on exporting data from HDFS to MySQL using Sqoop in Cloudera



Posted: 2018-06-10 17:59:20

Question:

I am exporting the departments export file from the HDFS directory /user/training/sqoop_import/departments_export. The file contains the following records:

2,Fitness
3,Footwear
4,Apparel
5,Golf
6,Outdoors
7,Fan Shop
8,Development
1000,Admin
1001,Books
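
For reference, this is how the file under that directory can be inspected (a minimal sketch; the exact file name inside the directory is not shown above, so the wildcard is an assumption):

# list the files in the export directory, then print their contents
hdfs dfs -ls /user/training/sqoop_import/departments_export
hdfs dfs -cat /user/training/sqoop_import/departments_export/*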

I want to export the data into a MySQL table named departments_export (department_id int, department_name varchar). The table already contains the following data:

mysql> select * from departments_export;
+---------------+-----------------+
| department_id | department_name |
+---------------+-----------------+
|             2 | Fitness         |
|             3 | Footwear        |
|             4 | Apparel         |
|             5 | Golf            |
|             6 | Outdoors        |
|             7 | Fan Shop        |
|             8 | Development     |
|          1000 | Admin           |
+---------------+-----------------+

I ran the following sqoop export command:

sqoop export \
--connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
--username retail_dba \
--password cloudera \
--table departments_export \
--export-dir /user/training/sqoop_import/departments_export \
--batch \
-m 1 \
--update-key department_id \
--update-mode allowinsert

This is the log I got at the command prompt:

Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/06/10 10:42:39 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.0
18/06/10 10:42:40 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/06/10 10:42:41 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/06/10 10:42:43 INFO tool.CodeGenTool: Beginning code generation
18/06/10 10:42:43 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments_export` AS t LIMIT 1
18/06/10 10:42:44 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments_export` AS t LIMIT 1
18/06/10 10:42:44 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-cloudera/compile/995860db956fe955c309e42de79ab4f9/departments_export.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/06/10 10:42:51 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/995860db956fe955c309e42de79ab4f9/departments_export.jar
18/06/10 10:42:51 INFO mapreduce.ExportJobBase: Beginning export of departments_export
18/06/10 10:42:51 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
18/06/10 10:42:53 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/06/10 10:42:57 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
18/06/10 10:42:57 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
18/06/10 10:42:57 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/06/10 10:42:57 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/06/10 10:43:00 WARN hdfs.DFSClient: Caught exception 
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1281)
    at java.lang.Thread.join(Thread.java:1355)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:967)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:705)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:894)
[... the same hdfs.DFSClient InterruptedException warning and stack trace repeat a dozen more times ...]
18/06/10 10:43:04 INFO input.FileInputFormat: Total input paths to process : 1
18/06/10 10:43:04 INFO input.FileInputFormat: Total input paths to process : 1
[... two more identical hdfs.DFSClient InterruptedException warnings ...]
18/06/10 10:43:04 INFO mapreduce.JobSubmitter: number of splits:1
18/06/10 10:43:04 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
18/06/10 10:43:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1527924460497_0039
18/06/10 10:43:06 INFO impl.YarnClientImpl: Submitted application application_1527924460497_0039
18/06/10 10:43:06 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1527924460497_0039/
18/06/10 10:43:06 INFO mapreduce.Job: Running job: job_1527924460497_0039
18/06/10 10:43:33 INFO mapreduce.Job: Job job_1527924460497_0039 running in uber mode : false
18/06/10 10:43:33 INFO mapreduce.Job:  map 0% reduce 0%
18/06/10 10:43:59 INFO mapreduce.Job:  map 100% reduce 0%
18/06/10 10:43:59 INFO mapreduce.Job: Job job_1527924460497_0039 failed with state FAILED due to: Task failed task_1527924460497_0039_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

18/06/10 10:43:59 INFO mapreduce.Job: Counters: 8
    Job Counters 
        Failed map tasks=1
        Launched map tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=22735
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=22735
        Total vcore-milliseconds taken by all map tasks=22735
        Total megabyte-milliseconds taken by all map tasks=23280640
18/06/10 10:43:59 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
18/06/10 10:43:59 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 62.3015 seconds (0 bytes/sec)
18/06/10 10:43:59 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
18/06/10 10:43:59 INFO mapreduce.ExportJobBase: Exported 0 records.
18/06/10 10:43:59 ERROR tool.ExportTool: Error during export: 
Export job failed!
    at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:439)
    at org.apache.sqoop.manager.SqlManager.updateTable(SqlManager.java:965)
    at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:70)
    at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
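
The console output above does not show the underlying exception; that should be in the failed map task's log. A hedged way to pull it (using the application ID from the log above, assuming YARN log aggregation on the QuickStart VM has kept it):

# fetch the aggregated logs of the failed application to see the real exception
yarn logs -applicationId application_1527924460497_0039

The tracking URL printed in the log (http://quickstart.cloudera:8088/proxy/application_1527924460497_0039/) also leads to the task logs in the browser.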


Answer 1:

Does the departments_export table have a primary key? If not, create a primary key on department_id.
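
A minimal sketch of adding the key in MySQL (assuming department_id contains no duplicate values, since a primary key requires uniqueness):

mysql> ALTER TABLE departments_export ADD PRIMARY KEY (department_id);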

Then try this:

sqoop export \
--connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
--username retail_dba \
--password cloudera \
--table departments_export \
--update-key department_id \
--update-mode allowinsert \
--export-dir /user/training/sqoop_import/departments_export/departments.txt \
--fields-terminated-by=',' \
-m 1

I tried it and it works fine.
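
After the export, the result can be verified from the mysql prompt, e.g.:

mysql> select * from departments_export;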

Comments:

I updated department_id to be the primary key and tried running an export command similar to the one you mentioned, but the job still fails.

I tested it, even updated the departments.txt file and ran the sqoop command again, and it updated existing records as well as inserted new ones. Did you specify the full file name in --export-dir? I also got the java warnings, but at the end it was inserting and updating.
