elasticsearch之client源码简要分析

凌渡冰    2022-08-01

Questions

Learning with questions in mind is more efficient, so let's start with a few:

1  With only one node of the ES cluster configured, can the client automatically discover all the nodes in the cluster? How does it discover them?

2  How does the ES client achieve load balancing?

3  When an ES node goes down, how does the ES client remove that node?

4  Node detection in the ES client comes in two modes (SimpleNodeSampler and SniffNodesSampler); what is the difference?

Core classes

  • TransportClient    the public-facing API class of the ES client
  • TransportClientNodesService  maintains the list of nodes
  • ScheduledNodeSampler   periodically re-checks which nodes are healthy
  • NettyTransport   performs the actual data transfer
  • NodeSampler     the node sniffer

Client initialization

Initialization code

// (1) build the basic configuration
Settings.Builder builder = Settings.settingsBuilder()
        .put("cluster.name", clusterName)
        .put("client.transport.sniff", true);
Settings settings = builder.build();

// (2) build the client
TransportClient client = TransportClient.builder().settings(settings).build();

// (3) add the known cluster addresses
for (TransportAddress transportAddress : transportAddresses) {
    client.addTransportAddress(transportAddress);
}

(1)  ES builds the basic configuration parameters via the builder pattern;

(2)  build() constructs the client: it constructs the client object itself, initializes the ThreadPool, constructs TransportClientNodesService, starts the scheduled sampling task, and selects the sniffing mode;

(3)  the reachable cluster addresses are added; for example, only a single node of the cluster is configured here.
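For completeness, a minimal sketch of step (3) with a concrete address (assuming the 2.x transport API; host and port are placeholders):

// Hypothetical example: resolve a host and register it with the client.
// InetSocketTransportAddress is the TransportAddress implementation used by the 2.x client.
client.addTransportAddress(
        new InetSocketTransportAddress(InetAddress.getByName("10.0.0.1"), 9300));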

Building the client

The build API is called.

A quick note on dependency injection: Guice is Google's open-source dependency-injection framework for Java™ (worth reading up on, but not the focus here). See the links below for details:

  1. https://github.com/google/guice/wiki/GettingStarted
  2. http://www.cnblogs.com/whitewolf/p/4185908.html
  3. http://www.ibm.com/developerworks/cn/java/j-guice.html

Initializing TransportClientNodesService

In the previous figure, modules.createInjector instantiates TransportClientNodesService and injects it into TransportClient; most of the APIs on TransportClient are in fact delegated to TransportClientNodesService.
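As a rough illustration of that delegation (a minimal sketch with simplified names, not the literal Elasticsearch source), an action executed on the client is handed to the node service as a callback, and the service supplies a concrete node to run it against:

// Illustrative sketch only: TransportClient holds no node list of its own; it asks
// TransportClientNodesService to run the callback against whichever node the
// service currently selects (selection is covered in the load-balancing section below).
interface NodeCallback<Response> {
    void doWithNode(DiscoveryNode node, ActionListener<Response> listener);
}

class NodesServiceSketch {
    private volatile List<DiscoveryNode> nodes = Collections.emptyList();

    <Response> void execute(NodeCallback<Response> callback, ActionListener<Response> listener) {
        if (nodes.isEmpty()) {
            listener.onFailure(new IllegalStateException("none of the configured nodes are available"));
            return;
        }
        callback.doWithNode(nodes.get(0), listener); // node selection simplified here
    }
}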

Guice performs the injection via annotations.

In the figure above, the cluster name, thread pool, and so on are injected. The key part is the following code, which selects the type of node sampler: sniff all the nodes in the same cluster (SniffNodesSampler) or only track the nodes listed in the configuration (SimpleNodeSampler):

if (this.settings.getAsBoolean("client.transport.sniff", false)) {
    this.nodesSampler = new SniffNodesSampler();
} else {
    this.nodesSampler = new SimpleNodeSampler();
}

Characteristics:

SniffNodesSampler: the client actively discovers the other nodes in the cluster and creates full connections ("fully connect" — explained later).
SimpleNodeSampler: pings all the nodes in listedNodes; the difference is that only light connections are created here.

TransportClientNodesService maintains three data structures for storing nodes:

// (1) nodes that were explicitly added to be discovered
private volatile List<DiscoveryNode> listedNodes = Collections.emptyList();
// (2) nodes that actually serve requests
private volatile List<DiscoveryNode> nodes = Collections.emptyList();
// (3) nodes filtered out as unable to serve requests
private volatile List<DiscoveryNode> filteredNodes = Collections.emptyList();

(1)    the nodes explicitly added from the configuration;

(2)    the nodes that participate in requests;

(3)    the nodes filtered out because they cannot handle requests;
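These lists can also be observed from the outside: TransportClient exposes accessors that delegate straight to them (a small usage sketch, assuming the 2.x TransportClient API; the print statements are just for illustration):

// Inspect the three node lists maintained by TransportClientNodesService.
System.out.println("listed (configured) nodes : " + client.listedNodes());
System.out.println("connected (usable) nodes  : " + client.connectedNodes());
System.out.println("filtered (excluded) nodes : " + client.filteredNodes());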

How the client achieves load balancing

As the figure above shows, every execute call takes a node from the nodes list and picks the target server with a simple round-robin scheme. The core code is as follows:

private final AtomicInteger randomNodeGenerator = new AtomicInteger();
......
private int getNodeNumber() {
    int index = randomNodeGenerator.incrementAndGet();
    if (index < 0) {
        index = 0;
        randomNodeGenerator.set(0);
    }
    return index;
}
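How that counter turns into a concrete node is not shown above. A minimal sketch of the idea (paraphrased, not the literal source): take a snapshot of the volatile nodes list and index it modulo its size, so consecutive requests walk through the nodes in order:

// Sketch: map the monotonically increasing counter onto the current node list.
// Taking a local snapshot of the volatile 'nodes' field avoids racing with the
// sampler swapping the list out concurrently.
List<DiscoveryNode> snapshot = nodes;
if (snapshot.isEmpty()) {
    throw new IllegalStateException("none of the configured nodes are available");
}
DiscoveryNode target = snapshot.get(getNodeNumber() % snapshot.size());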

The data is then written out through a Netty channel. The core code is as follows:

public void sendRequest(final DiscoveryNode node, final long requestId, final String action, final TransportRequest request, TransportRequestOptions options) throws IOException, TransportException {
 
    Channel targetChannel = nodeChannel(node, options);   // (1) obtain a connection (channel) to the target node
 
    if (compress) {
        options = TransportRequestOptions.builder(options).withCompress(true).build();
    }
 
    byte status = 0;
    status = TransportStatus.setRequest(status);
 
    ReleasableBytesStreamOutput bStream = new ReleasableBytesStreamOutput(bigArrays);
    boolean addedReleaseListener = false;
    try {
        bStream.skip(NettyHeader.HEADER_SIZE);
        StreamOutput stream = bStream;
        // only compress if asked, and, the request is not bytes, since then only
        // the header part is compressed, and the "body" can't be extracted as compressed
        if (options.compress() && (!(request instanceof BytesTransportRequest))) {
            status = TransportStatus.setCompress(status);
            stream = CompressorFactory.defaultCompressor().streamOutput(stream);
        }
 
        // we pick the smallest of the 2, to support both backward and forward compatibility
        // note, this is the only place we need to do this, since from here on, we use the serialized version
        // as the version to use also when the node receiving this request will send the response with
        Version version = Version.smallest(this.version, node.version());
 
        stream.setVersion(version);
        stream.writeString(action);
 
        ReleasablePagedBytesReference bytes;
        ChannelBuffer buffer;
        // it might be nice to somehow generalize this optimization, maybe a smart "paged" bytes output
        // that create paged channel buffers, but its tricky to know when to do it (where this option is
        // more explicit).
        if (request instanceof BytesTransportRequest) {
            BytesTransportRequest bRequest = (BytesTransportRequest) request;
            assert node.version().equals(bRequest.version());
            bRequest.writeThin(stream);
            stream.close();
            bytes = bStream.bytes();
            ChannelBuffer headerBuffer = bytes.toChannelBuffer();
            ChannelBuffer contentBuffer = bRequest.bytes().toChannelBuffer();
            buffer = ChannelBuffers.wrappedBuffer(NettyUtils.DEFAULT_GATHERING, headerBuffer, contentBuffer);
        } else {
            request.writeTo(stream);
            stream.close();
            bytes = bStream.bytes();
            buffer = bytes.toChannelBuffer();
        }
        NettyHeader.writeHeader(buffer, requestId, status, version);
        ChannelFuture future = targetChannel.write(buffer);   // (2) write the data over that connection
        ReleaseChannelFutureListener listener = new ReleaseChannelFutureListener(bytes);
        future.addListener(listener);
        addedReleaseListener = true;
        transportServiceAdapter.onRequestSent(node, requestId, action, request, options);
    } finally {
        if (!addedReleaseListener) {
            Releasables.close(bStream.bytes());
        }
    }
}

The most important parts are (1) and (2); the code in between prepares the data and performs the necessary bookkeeping.

(1) obtains a connection;

(2) writes the data over that connection.

This raises two new questions:

1   When is the nodes list populated?

2   When are the connections created?

When the nodes list is populated

The core is the call to doSample(); the code is as follows:

protected void doSample() {
    // the nodes we are going to ping include the core listed nodes that were added
    // and the last round of discovered nodes
    Set<DiscoveryNode> nodesToPing = Sets.newHashSet();
    for (DiscoveryNode node : listedNodes) {
        nodesToPing.add(node);
    }
    for (DiscoveryNode node : nodes) {
        nodesToPing.add(node);
    }
 
    final CountDownLatch latch = new CountDownLatch(nodesToPing.size());
    final ConcurrentMap<DiscoveryNode, ClusterStateResponse> clusterStateResponses = ConcurrentCollections.newConcurrentMap();
    for (final DiscoveryNode listedNode : nodesToPing) {
        threadPool.executor(ThreadPool.Names.MANAGEMENT).execute(new Runnable() {
            @Override
            public void run() {
                try {
                    if (!transportService.nodeConnected(listedNode)) {
                        try {
 
                            // if its one of the actual nodes we will talk to, not to listed nodes, fully connect
                            if (nodes.contains(listedNode)) {
                                logger.trace("connecting to cluster node [{}]", listedNode);
                                transportService.connectToNode(listedNode);
                            } else {
                                // its a listed node, light connect to it...
                                logger.trace("connecting to listed node (light) [{}]", listedNode);
                                transportService.connectToNodeLight(listedNode);
                            }
                        } catch (Exception e) {
                            logger.debug("failed to connect to node [{}], ignoring...", e, listedNode);
                            latch.countDown();
                            return;
                        }
                    }
                    // The core is here: right after initialization there may be only the single
                    // configured node, and a cluster state check ("cluster:monitor/state") is sent to it
                    transportService.sendRequest(listedNode, ClusterStateAction.NAME,
                            headers.applyTo(Requests.clusterStateRequest().clear().nodes(true).local(true)),
                            TransportRequestOptions.builder().withType(TransportRequestOptions.Type.STATE).withTimeout(pingTimeout).build(),
                            new BaseTransportResponseHandler<ClusterStateResponse>() {
 
                                @Override
                                public ClusterStateResponse newInstance() {
                                    return new ClusterStateResponse();
                                }
 
                                @Override
                                public String executor() {
                                    return ThreadPool.Names.SAME;
                                }
 
                                @Override
                                public void handleResponse(ClusterStateResponse response) {
/* The callback returns information about all of the cluster's nodes here, similar to the following:
{
  "version" : 27,
  "state_uuid" : "YSI9d_HiQJ-FFAtGFCVOlw",
  "master_node" : "TXHHx-XRQaiXAxtP1EzXMw",
  "blocks" : { },
  "nodes" : {
    "7" : {
      "name" : "es03",
      "transport_address" : "1.1.1.1:9300",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      }
    },
    "6" : {
      "name" : "common02",
      "transport_address" : "1.1.1.2:9300",
      "attributes" : {
        "master" : "false"
      }
    },
    "5" : {
      "name" : "es02",
      "transport_address" : "1.1.1.3:9300",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      }
    },
    "4" : {
      "name" : "common01",
      "transport_address" : "1.1.1.4:9300",
      "attributes" : {
        "master" : "false"
      }
    },
    "3" : {
      "name" : "common03",
      "transport_address" : "1.1.1.5:9300",
      "attributes" : {
        "master" : "false"
      }
    },
    "2" : {
      "name" : "es01",
      "transport_address" : "1.1.1.6:9300",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      }
    },
    "1" : {
      "name" : "common04",
      "transport_address" : "1.1.1.7:9300",
      "attributes" : {
        "master" : "false"
      }
    }
  },
  "metadata" : {
    "cluster_uuid" : "_na1x_",
    "templates" : { },
    "indices" : { }
  },
  "routing_table" : {
    "indices" : { }
  },
  "routing_nodes" : {
    "unassigned" : [ ],
  }
}
*/
                                    clusterStateResponses.put(listedNode, response);
                                    latch.countDown();
                                }
 
                                @Override
                                public void handleException(TransportException e) {
                                    logger.info("failed to get local cluster state for {}, disconnecting...", e, listedNode);
                                    transportService.disconnectFromNode(listedNode);
                                    latch.countDown();
                                }
                            });
                } catch (Throwable e) {
                    logger.info("failed to get local cluster state info for {}, disconnecting...", e, listedNode);
                    transportService.disconnectFromNode(listedNode);
                    latch.countDown();
                }
            }
        });
    }
 
    try {
        latch.await();
    } catch (InterruptedException e) {
        return;
    }
 
    HashSet<DiscoveryNode> newNodes = new HashSet<>();
    HashSet<DiscoveryNode> newFilteredNodes = new HashSet<>();
    for (Map.Entry<DiscoveryNode, ClusterStateResponse> entry : clusterStateResponses.entrySet()) {
        if (!ignoreClusterName && !clusterName.equals(entry.getValue().getClusterName())) {
            logger.warn("node {} not part of the cluster {}, ignoring...", entry.getValue().getState().nodes().localNode(), clusterName);
            newFilteredNodes.add(entry.getKey());
            continue;
        }
// Next, all data nodes are collected here and written into the nodes list
        for (ObjectCursor<DiscoveryNode> cursor : entry.getValue().getState().nodes().dataNodes().values()) {
            newNodes.add(cursor.value);
        }
    }
 
    nodes = validateNewNodes(newNodes);
    filteredNodes = Collections.unmodifiableList(new ArrayList<>(newFilteredNodes));
}

It is invoked at two points:

1  client.addTransportAddress(transportAddress);

2  ScheduledNodeSampler, which by default re-samples every node every 5 seconds.
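The rescheduling loop itself is short. A paraphrased sketch of ScheduledNodeSampler (the interval comes from the client.transport.nodes_sampler_interval setting, 5s by default; field names follow the 2.x source but treat the details as an approximation):

// Paraphrased sketch: after each sampling round the task reschedules itself
// on the thread pool until the client is closed.
class ScheduledNodeSamplerSketch implements Runnable {
    @Override
    public void run() {
        try {
            nodesSampler.sample();                        // ends up in doSample() shown above
            if (!closed) {
                threadPool.schedule(nodesSamplerInterval, // default 5s
                        ThreadPool.Names.GENERIC, this);
            }
        } catch (Exception e) {
            logger.warn("failed to sample", e);
        }
    }
}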

When are the connections created?

Also during the doSample() call; the connections are ultimately created by NettyTransport.

At this point, if a light connection was requested only a single lightweight channel is opened; otherwise a full connection ("fully connect") is established, which consists of the following channel types:

  • recovery: used for data recovery; 2 channels by default;
  • bulk: used for bulk requests; 3 channels by default;
  • med/reg: typical search and single-document indexing requests; 6 channels by default;
  • high/state: e.g. shipping cluster state; 1 channel by default;
  • ping: node-to-node pings; 1 channel by default.

The corresponding code is below (followed by a hedged sketch of the settings that control these counts):

public void start() {
    List<Channel> newAllChannels = new ArrayList<>();
    newAllChannels.addAll(Arrays.asList(recovery));
    newAllChannels.addAll(Arrays.asList(bulk));
    newAllChannels.addAll(Arrays.asList(reg));
    newAllChannels.addAll(Arrays.asList(state));
    newAllChannels.addAll(Arrays.asList(ping));
    this.allChannels = Collections.unmodifiableList(newAllChannels);
}
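These per-type channel counts are configurable. A hedged sketch of the relevant settings (keys as used by the 2.x NettyTransport to the best of my knowledge; verify the exact names against your version — the values shown are the defaults):

// Tune how many channels a full connection opens per type.
Settings settings = Settings.settingsBuilder()
        .put("transport.connections_per_node.recovery", 2)
        .put("transport.connections_per_node.bulk", 3)
        .put("transport.connections_per_node.reg", 6)
        .put("transport.connections_per_node.state", 1)
        .put("transport.connections_per_node.ping", 1)
        .build();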

 
