Kubernetes Deep Dive Part 2: Compiling and Deploying Images (api-server)

程序员欣宸     2022-12-02


Overview

  • This is the second article in the "Kubernetes Deep Dive" series. In the previous article we downloaded the Kubernetes 1.13 source code, modified the kubectl source, then rebuilt and ran it to verify the change. Besides standalone executables like kubectl, the source tree also produces Docker containers such as api-server and controller-manager. Today's hands-on task is to modify the source behind one of these container images, deploy the new image, and verify that the modified code actually takes effect;

Environment

  • To verify that the change works in a Kubernetes environment, you need a Kubernetes 1.13 cluster. The applications and versions used in this walkthrough are listed below (a quick version check is sketched after the list):
    1. OS: CentOS 7.6.1810
    2. Go: 1.12
    3. Docker: 17.03.2-ce
    4. Kubernetes: 1.13
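  • If you want to confirm the toolchain before starting, the following commands print the relevant versions (an optional sanity check, not part of the original steps):
go version
docker version --format '{{.Server.Version}}'
kubectl version --short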

Downloading the dependency images

  • The build uses the following three images, which docker pull cannot fetch directly in some network environments (you know why):

    1. k8s.gcr.io/kube-cross:v1.11.5-1
    2. k8s.gcr.io/debian-iptables-amd64:v11.0
    3. k8s.gcr.io/debian-base-amd64:0.4.0
  • If your environment cannot download these three images, you can get them the following way;
  • Run the commands below to pull the three copies I uploaded to Docker Hub:
docker pull bolingcavalry/kube-cross:v1.11.5-1 \
&& docker pull bolingcavalry/debian-iptables-amd64:v11.0 \
&& docker pull bolingcavalry/debian-base-amd64:0.4.0
  • When the download finishes, docker images shows the three images:
[root@hedy kubernetes]# docker images
REPOSITORY                            TAG                 IMAGE ID            CREATED             SIZE
bolingcavalry/kube-cross              v1.11.5-1           b16987a9b305        7 weeks ago         1.75 GB
bolingcavalry/debian-iptables-amd64   v11.0               48319fdf4d25        4 months ago        45.4 MB
bolingcavalry/debian-base-amd64       0.4.0               8021d54711e6        4 months ago        42.3 MB
  • Run the commands below to retag the downloaded images and delete the ones that are no longer needed:
docker tag b16987a9b305 k8s.gcr.io/kube-cross:v1.11.5-1 \
&& docker tag 48319fdf4d25 k8s.gcr.io/debian-iptables-amd64:v11.0 \
&& docker tag 8021d54711e6 k8s.gcr.io/debian-base-amd64:0.4.0 \
&& docker rmi bolingcavalry/kube-cross:v1.11.5-1 \
&& docker rmi bolingcavalry/debian-iptables-amd64:v11.0 \
&& docker rmi bolingcavalry/debian-base-amd64:0.4.0
  • Run docker images again; the local images are now exactly the three the build needs:
[root@hedy kubernetes]# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-cross              v1.11.5-1           b16987a9b305        7 weeks ago         1.75 GB
k8s.gcr.io/debian-iptables-amd64   v11.0               48319fdf4d25        4 months ago        45.4 MB
k8s.gcr.io/debian-base-amd64       0.4.0               8021d54711e6        4 months ago        42.3 MB
  • Open the file build/lib/release.sh, find the line below, and delete the --pull option so the build no longer tries to pull the base images from the remote registry:
"$DOCKER[@]" build --pull -q -t "$docker_image_tag" "$docker_build_path" >/dev/null
  • Delete only the --pull option and leave the rest of the line as it is;
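  • After the edit, the line should read as follows (identical except that --pull is gone):
"${DOCKER[@]}" build -q -t "${docker_image_tag}" "${docker_build_path}" >/dev/null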

  • That completes the preparation; next comes the source change;

Modifying the source

  • The next step is to modify the source code. This time we change the api-server source by adding some log statements; in the verification step, seeing those logs in the output proves that the modified code is running;
  • The file to change is create.go, at the path below; it is the entry point that handles resource-creation requests:
$GOPATH/src/k8s.io/kubernetes/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/create.go
  • Add logging where create.go handles the request, as shown below; all the fmt.Println/fmt.Printf calls are newly added:
func createHandler(r rest.NamedCreater, scope RequestScope, admit admission.Interface, includeName bool) http.HandlerFunc {
    return func(w http.ResponseWriter, req *http.Request) {
        // newly added logging: print the incoming request and the current call stack
        // (make sure "fmt" and "runtime/debug" are in this file's import list)
        fmt.Println("***********************************************************************************************")
        fmt.Println("start create", req)
        fmt.Println("-----------------------------------------------------------------------------------------------")
        fmt.Printf("%s\n", debug.Stack())
        fmt.Println("***********************************************************************************************")
  • The code above prints a log entry every time api-server receives a resource-creation request: it dumps the HTTP request and the call stack of the current method;
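  • Before starting the full image build, you can optionally compile just the kube-apiserver binary to catch compile errors early; a minimal sketch using the standard Kubernetes make shortcut:
cd $GOPATH/src/k8s.io/kubernetes
# build only the kube-apiserver binary; a failure here means the edit above has a syntax problem
make WHAT=cmd/kube-apiserver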

Starting the build

  • Go into the directory $GOPATH/src/k8s.io/kubernetes and run the following command to start building the images:
KUBE_BUILD_PLATFORMS=linux/amd64 KUBE_BUILD_CONFORMANCE=n KUBE_BUILD_HYPERKUBE=n make release-images
  • According to the description in build/root/Makefile, KUBE_BUILD_CONFORMANCE controls whether the conformance-test image is built, and KUBE_BUILD_HYPERKUBE controls whether the hyperkube image (all the tools bundled together) is built. Neither is needed here, so both are set to "n" to skip them;

  • After ten-odd minutes the images are built and the console output looks like this:
[root@hedy kubernetes]# KUBE_BUILD_PLATFORMS=linux/amd64 KUBE_BUILD_CONFORMANCE=n KUBE_BUILD_HYPERKUBE=n make release-images
+++ [0316 19:11:40] Verifying Prerequisites....
+++ [0316 19:11:40] Building Docker image kube-build:build-b58720d1c7-5-v1.11.5-1
+++ [0316 19:15:46] Creating data container kube-build-data-b58720d1c7-5-v1.11.5-1
+++ [0316 19:17:02] Syncing sources to container
+++ [0316 19:17:11] Running build command...
+++ [0316 19:17:21] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
+++ [0316 19:17:28] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/defaulter-gen
+++ [0316 19:17:34] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/conversion-gen
+++ [0316 19:17:43] Building go targets for linux/amd64:
    ./vendor/k8s.io/kube-openapi/cmd/openapi-gen
2019/03/16 19:17:51 Code for OpenAPI definitions generated
+++ [0316 19:17:52] Building go targets for linux/amd64:
    ./vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0316 19:17:53] Building go targets for linux/amd64:
    cmd/cloud-controller-manager
    cmd/kube-apiserver
    cmd/kube-controller-manager
    cmd/kube-scheduler
    cmd/kube-proxy
+++ [0316 19:20:41] Syncing out of container
+++ [0316 19:20:55] Building images: linux-amd64
+++ [0316 19:20:56] Starting docker build for image: cloud-controller-manager-amd64
+++ [0316 19:20:56] Starting docker build for image: kube-apiserver-amd64
+++ [0316 19:20:56] Starting docker build for image: kube-controller-manager-amd64
+++ [0316 19:20:56] Starting docker build for image: kube-scheduler-amd64
+++ [0316 19:20:56] Starting docker build for image: kube-proxy-amd64
+++ [0316 19:21:37] Deleting docker image k8s.gcr.io/kube-proxy:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:41] Deleting docker image k8s.gcr.io/kube-scheduler:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:42] Deleting docker image k8s.gcr.io/cloud-controller-manager:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:42] Deleting docker image k8s.gcr.io/kube-controller-manager:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:44] Deleting docker image k8s.gcr.io/kube-apiserver:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
+++ [0316 19:21:48] Docker builds done
  • The built tar files can be found in the directory below; each can be loaded into a local image repository with docker load:
[root@hedy amd64]# cd $GOPATH/src/k8s.io/kubernetes/_output/release-images/amd64
[root@hedy amd64]# ls
cloud-controller-manager.tar  kube-apiserver.tar  kube-controller-manager.tar  kube-proxy.tar  kube-scheduler.tar
  • Copy the newly generated kube-apiserver.tar to the master node of your Kubernetes environment, for example with scp as sketched below;
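  • A minimal copy command, assuming SSH access as root to the master node (192.168.182.130 in my environment; adjust the address and target directory to yours):
scp kube-apiserver.tar root@192.168.182.130:/root/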
  • Run docker load < kube-apiserver.tar to import the file into the local image repository;
  • Run docker images; as shown below, the local repository now contains a kube-apiserver image whose TAG is v1.13.5-beta.0.7_6c1e64b94a3e11-dirty:
[root@master 16]# docker load < kube-apiserver.tar
efd6f8f1a8c2: Loading layer [==================================================>]  138.5MB/138.5MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty
[root@master 16]# docker images
REPOSITORY                           TAG                                     IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-apiserver            v1.13.5-beta.0.7_6c1e64b94a3e11-dirty   c9482a699ba7        About an hour ago   181MB
quay.io/coreos/flannel               v0.11.0-amd64                           ff281650a721        6 weeks ago         52.6MB
k8s.gcr.io/kube-proxy                v1.13.0                                 8fa56d18961f        3 months ago        80.2MB
k8s.gcr.io/kube-scheduler            v1.13.0                                 9508b7d8008d        3 months ago        79.6MB
k8s.gcr.io/kube-controller-manager   v1.13.0                                 d82530ead066        3 months ago        146MB
k8s.gcr.io/kube-apiserver            v1.13.0                                 f1ff9b7e3d6e        3 months ago        181MB
k8s.gcr.io/coredns                   1.2.6                                   f59dcacceff4        4 months ago        40MB
k8s.gcr.io/etcd                      3.2.24                                  3cab8e1b9802        5 months ago        220MB
k8s.gcr.io/pause                     3.1                                     da86e6ba6ca1        15 months ago       742kB
  • First look at the current api-server Pod with kubectl describe pod kube-apiserver-master -n kube-system; as shown below, the image currently in use is k8s.gcr.io/kube-apiserver:v1.13.0:
[root@master 16]# kubectl describe pod kube-apiserver-master -n kube-system
Name:               kube-apiserver-master
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               master/192.168.182.130
Start Time:         Sat, 16 Mar 2019 21:53:22 +0800
Labels:             component=kube-apiserver
                    tier=control-plane
Annotations:        kubernetes.io/config.hash: 38da173e77f3fd0c39712abbb79b5529
                    kubernetes.io/config.mirror: 38da173e77f3fd0c39712abbb79b5529
                    kubernetes.io/config.seen: 2019-02-23T13:46:43.135821321+08:00
                    kubernetes.io/config.source: file
                    scheduler.alpha.kubernetes.io/critical-pod: 
Status:             Running
IP:                 192.168.182.130
Containers:
  kube-apiserver:
    Container ID:  docker://cb0234269ee2fbef23078cc1bbf6a2d6edd4b248cb733f793853dbfec2f0d814
    Image:         k8s.gcr.io/kube-apiserver:v1.13.0
  • Edit /etc/kubernetes/manifests/kube-apiserver.yaml so that its image field points to the newly loaded image (a one-line sketch follows), then run kubectl apply -f kube-apiserver.yaml to make the change take effect;
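  • A minimal sketch of the manifest change, assuming the new image tag is the one produced by the build above (replace it with the tag shown by your own docker load output):
sed -i 's#image: k8s.gcr.io/kube-apiserver:v1.13.0#image: k8s.gcr.io/kube-apiserver:v1.13.5-beta.0.7_6c1e64b94a3e11-dirty#' /etc/kubernetes/manifests/kube-apiserver.yaml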

Verifying that the source change took effect

  • Run kubectl logs -f kube-apiserver-master -n kube-system to follow the Pod's log. The output below shows the full details of an incoming request, proving that the modified code is in effect; this particular request creates a system Event object:
***********************************************************************************************
start create &POST /api/v1/namespaces/kube-system/events HTTP/2.0 2 0 map[Accept:[application/vnd.kubernetes.protobuf, */*] Content-Type:[application/vnd.kubernetes.protobuf] User-Agent:[kubelet/v1.13.3 (linux/amd64) kubernetes/721bfa7] Content-Length:[359] Accept-Encoding:[gzip]] 0xc00ccd0870 <nil> 359 [] false 192.168.182.130:6443 map[] map[] <nil> map[] 192.168.182.131:58558 /api/v1/namespaces/kube-system/events 0xc00908cf20 <nil> <nil> 0xc00ccd0990
-----------------------------------------------------------------------------------------------
goroutine 49344 [running]:
runtime/debug.Stack(0xc007076760, 0x1, 0x1)
    /usr/local/go/src/runtime/debug/stack.go:24 +0xa7
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.createHandler.func1(0x5da9e80, 0xc00b83ce88, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/create.go:49 +0x185
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulCreateResource.func1(0xc00ccd09f0, 0xc0087d4ae0)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1038 +0xb1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc00ccd09f0, 0xc0087d4ae0)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:225 +0x20d
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc000120510, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:277 +0x9b8
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(0xc000120510, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199 +0x57
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3eae926, 0xe, 0xc000120510, 0xc0006e22a0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x4b1
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc0002cc230, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go:90 +0x16a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00a07f740, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x394
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc008edc9a0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x8a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3eb1a2a, 0xf, 0xc008d095f0, 0xc008edc9a0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:154 +0x661
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x4c3
net/http.HandlerFunc.ServeHTTP(0xc008eea740, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x3ff
net/http.HandlerFunc.ServeHTTP(0xc008ef11d0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x1eeb
net/http.HandlerFunc.ServeHTTP(0xc008eea780, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46b00)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd553098398, 0xc00b83ce78, 0xc00bb46a00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:81 +0x456
net/http.HandlerFunc.ServeHTTP(0xc008ebd1d0, 0x7fd553098398, 0xc00b83ce78, 0xc00bb46a00)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005f2ccc0, 0xc008f1c2e0, 0x5db4f80, 0xc00b83ce78, 0xc00bb46a00)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:108 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:97 +0x1b0

***********************************************************************************************
  • Next, let's create an rc resource ourselves. Open a new console session to the Kubernetes master and run the following command to create a file named nginx-rc.yaml containing an nginx ReplicationController:
tee nginx-rc.yaml <<-EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 80
EOF
  • In the directory containing nginx-rc.yaml, run kubectl apply -f nginx-rc.yaml to create the resource;
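  • Optionally confirm that the ReplicationController and its pods were created (a quick check that is not required for the log verification; the label selector name=nginx comes from the yaml above):
kubectl get rc nginx-controller
kubectl get pods -l name=nginx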
  • The window that is following the api-server log now prints the following, which is exactly the rc resource we just created:
***********************************************************************************************
start create &POST /api/v1/namespaces/default/replicationcontrollers HTTP/2.0 2 0 map[Accept:[application/json] Content-Type:[application/json] User-Agent:[kubectl/v1.13.3 (linux/amd64) kubernetes/721bfa7] Content-Length:[818] Accept-Encoding:[gzip]] 0xc004b4dfb0 <nil> 818 [] false 192.168.182.130:6443 map[] map[] <nil> map[] 192.168.182.130:57856 /api/v1/namespaces/default/replicationcontrollers 0xc007b83600 <nil> <nil> 0xc004bc40f0
-----------------------------------------------------------------------------------------------
goroutine 133183 [running]:
runtime/debug.Stack(0xc00a08c760, 0x1, 0x1)
    /usr/local/go/src/runtime/debug/stack.go:24 +0xa7
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.createHandler.func1(0x5da9e80, 0xc006e07e58, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/create.go:49 +0x185
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulCreateResource.func1(0xc004bc4150, 0xc00a435680)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1038 +0xb1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc004bc4150, 0xc00a435680)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:225 +0x20d
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc000120510, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:277 +0x9b8
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(0xc000120510, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199 +0x57
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3eae926, 0xe, 0xc000120510, 0xc0006e22a0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x4b1
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc0002cc230, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go:90 +0x16a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00a07f740, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x394
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc008edc9a0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x8a
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x3eb1a2a, 0xf, 0xc008d095f0, 0xc008edc9a0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:154 +0x661
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x4c3
net/http.HandlerFunc.ServeHTTP(0xc008eea740, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x3ff
net/http.HandlerFunc.ServeHTTP(0xc008ef11d0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x1eeb
net/http.HandlerFunc.ServeHTTP(0xc008eea780, 0x7fd553098398, 0xc006e07e48, 0xc002cc0100)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fd553098398, 0xc006e07e48, 0xc002cc0000)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:81 +0x456
net/http.HandlerFunc.ServeHTTP(0xc008ebd1d0, 0x7fd553098398, 0xc006e07e48, 0xc002cc0000)
    /usr/local/go/src/net/http/server.go:1964 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a28ae40, 0xc008f1c2e0, 0x5db4f80, 0xc006e07e48, 0xc002cc0000)
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:108 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:97 +0x1b0

***********************************************************************************************
  • That completes the whole walkthrough of modifying, building, and running a Kubernetes image from source. If you run into interesting or puzzling code while studying the source, you might as well give this approach a try yourself;

Welcome to follow my blog on 51CTO: 程序员欣宸
