Big Data Troubleshooting Series: Hive Jobs Fail After Enabling Kerberos Authentication on a Big Data Cluster

明哥的IT随笔     2022-12-29

Keywords:

Big Data Troubleshooting Series; Hive jobs fail after enabling Kerberos authentication on a big data cluster

1 Foreword

Hello everyone, I'm 明哥 (Ming Ge)!

This post is part of the Kerberos troubleshooting sub-series within the Big Data Troubleshooting Series. It walks through the root cause of Hive job failures after Kerberos security authentication is enabled on a big data cluster, along with the fix and the principles and mechanisms behind it.

The main text follows.

2 Problem Symptoms

After Kerberos security authentication is enabled on the big data cluster, Hive on Spark jobs fail to execute. When a job is submitted through the beeline client, it fails with an error saying the Spark client could not be created, for example:

Failed to create spark client for spark session xxx: java.util.concurrent.TimeoutException: client xxx timedout waiting for connection from the remote spark driver

Or:

Failed to create spark client for spark session xxx: java.lang.RuntimeException: spark-submit

[Screenshot: error message reported by the beeline client]
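For reference, this is roughly how such a query reaches HiveServer2 in a Kerberized cluster: the client logs in to Kerberos first (via kinit or a keytab) and then connects over JDBC with the HiveServer2 service principal in the URL. The sketch below is a minimal illustration only; the realm, principal, keytab path, host name, and table name are made-up placeholders, not values from the cluster in this post. Any query that triggers a Spark stage would hit the same failure path described above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberizedHiveQuery {
    public static void main(String[] args) throws Exception {
        // Tell the Hadoop client libraries that the cluster uses Kerberos,
        // then log in from a keytab (placeholder principal and keytab path).
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
                "etl_user@EXAMPLE.COM", "/etc/security/keytabs/etl_user.keytab");

        // The JDBC URL carries the HiveServer2 *service* principal, not the client's.
        String url = "jdbc:hive2://hs2.example.com:10000/default;"
                + "principal=hive/hs2.example.com@EXAMPLE.COM";

        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // A simple aggregation is enough to launch a Hive on Spark task.
             ResultSet rs = stmt.executeQuery("SELECT count(*) FROM some_table")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}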

3 Problem Analysis

Following the usual troubleshooting approach, we first check the HiveServer2 log, where the core error message stands out: "Error while waiting for Remote Spark Driver to connect back to HiveServer2". The full relevant HiveServer2 log is shown below; a small helper for pulling these lines out of a long log follows the excerpt:

2021-09-02 11:01:29,496 ERROR org.apache.hive.spark.client.SparkClientImpl: [HiveServer2-Background-Pool: Thread-135]: Error while waiting for Remote Spark Driver to connect back to HiveServer2.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?
	at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:41) ~[netty-common-4.1.17.Final.jar:4.1.17.Final]
	at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:103) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:90) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.createRemoteClient(RemoteHiveSparkClient.java:104) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:100) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:77) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:131) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:132) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:131) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:122) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) [hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92) [hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345) [hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_201]
	at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_201]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) [hadoop-common-3.0.0-cdh6.3.2.jar:?]
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357) [hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_201]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_201]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
Caused by: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?
	at org.apache.hive.spark.client.SparkClientImpl$2.run(SparkClientImpl.java:495) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	... 1 more
2021-09-02 11:01:29,505 ERROR org.apache.hadoop.hive.ql.exec.spark.SparkTask: [HiveServer2-Background-Pool: Thread-135]: Failed to execute Spark task "Stage-1"
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session f43a158c-168a-4117-8993-8f1780913715_0: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.getHiveException(SparkSessionImpl.java:286) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:135) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:132) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:131) ~[hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:122) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334) [hive-exec-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256) [hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92) [hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345) [hive-service-2.1.1-cdh6.3.2.jar:2.1.1-cdh6.3.2]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_201]
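When the HiveServer2 log is long, it can help to pull out just the relevant lines before reading the stack traces. Below is a small, hypothetical helper that scans a HiveServer2 log file for the two key messages seen above; the default log path is an assumption (it varies by distribution, e.g. under /var/log/hive/ on CDH), so pass the real path as an argument. Either message indicates that the spark-submit child process exited before the Remote Spark Driver could connect back to HiveServer2, which is why beeline only surfaces a timeout or a generic spark-submit failure.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class HiveServer2LogScan {

    // The two key messages from the failure above.
    private static final String[] PATTERNS = {
            "Error while waiting for Remote Spark Driver to connect back to HiveServer2",
            "spark-submit process failed with exit code"
    };

    public static void main(String[] args) throws IOException {
        // Hypothetical default path; point this at the actual HiveServer2 log on your cluster.
        Path log = Paths.get(args.length > 0 ? args[0] : "/var/log/hive/hiveserver2.log");

        try (Stream<String> lines = Files.lines(log)) {
            lines.filter(HiveServer2LogScan::matches).forEach(System.out::println);
        }
    }

    private static boolean matches(String line) {
        for (String p : PATTERNS) {
            if (line.contains(p)) {
                return true;
            }
        }
        return false;
    }
}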
