{"msg":"Operation successful","code":200,"data":{"createBy":"admin","createTime":"2021-07-17 17:39:32","updateBy":"admin","updateTime":"2021-07-17 17:39:32","remark":null,"id":65,"articleTitle":"Kubernetes (3): Building a Cluster from Binaries","articleUrl":"k8s_binary_cluster","articleThumbnail":"https://www.asumimoe.com/imgfiles/20220906/f93daad129a04b8db74eed70cd45263b.png","articleFlag":"1","draftStatus":"1","reprintStatement":"0","articleSummary":"Besides kubeadm, Kubernetes is commonly deployed in production from binaries.","articleContent":"## Platform Architecture\n![Single-master K8s cluster architecture](https://www.asumimoe.com/imgfiles/20220907/b4688a3614f4444dbec65624da72ec87.png)\n### Master Node\n\nThe master node consists of four components: kube-apiserver, kube-scheduler, kube-controller-manager, and etcd.\n\n**apiserver**\nThe apiserver exposes the RESTful Kubernetes API and is the unified entry point for management commands: every create, update, delete, or query of a resource is handled by the apiserver before being persisted to etcd. As the diagram shows, kubectl (the client tool shipped with Kubernetes, which internally calls the Kubernetes API) talks directly to the apiserver.\n\n**scheduler**\nThe scheduler assigns Pods to suitable Nodes. Viewed as a black box, its input is a Pod plus a list of Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships a default scheduling algorithm but also keeps the interface open, so users can define their own scheduler to match their needs.\n\n**controller manager**\nIf the apiserver does the front-office work, the controller manager is responsible for the back office. Every resource has a corresponding controller, and the controller manager runs all of them. For example, once a Pod created through the apiserver has been persisted, the apiserver's job is done; the controllers then drive the Pod toward its desired state.\n\n**etcd**\netcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource exposed through the RESTful API.\n\n### Node\n\nEach Node runs three components: kubelet, kube-proxy, and the Docker engine.\n\n**kube-proxy**\nThis component implements service discovery and reverse proxying in Kubernetes. kube-proxy forwards TCP and UDP connections and, by default, distributes client traffic across the backend Pods of a Service with a round-robin algorithm. For service discovery, kube-proxy uses etcd's watch mechanism to track changes to Service and Endpoint objects in the cluster and maintains a Service-to-Endpoint mapping, so changes to backend Pod IPs are invisible to clients.\n\n**kubelet**\nThe kubelet is the master's agent on each Node and the most important component there. It maintains and manages all containers on its Node (containers not created through Kubernetes are left alone). In essence, it keeps each Pod's actual running state in line with its desired state.\n\n**Docker engine**\nThe container runtime.\n\n### Kubernetes Network Types\n\n**Overlay network**: a virtual network layered on top of the underlying network; hosts in the overlay are connected by virtual links.\n**VXLAN**: encapsulates the original packet in UDP, wraps it with the underlay's IP/MAC as the outer header, and transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates it and delivers the data to the target address.\n**Flannel**: one overlay network implementation. It likewise wraps the original packet inside another network packet for routing and forwarding, and currently supports UDP, VXLAN, AWS VPC, GCE routing, and other forwarding backends.\n\n## 
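Aside: round-robin in miniature\n\nThe round-robin forwarding described for kube-proxy above can be illustrated with a tiny standalone shell sketch. This is plain bash with no cluster involved; the backend "Pod IPs" are made-up placeholders, not real cluster state:\n\n```shell
#!/usr/bin/env bash
# Toy illustration: cycle through a fixed list of backend "Pod IPs"
# the way a round-robin proxy would. Addresses are illustrative only.
backends=(10.244.1.10 10.244.1.11 10.244.2.12)
i=0
pick_backend() {
  # Select the next backend, wrapping around, and advance the counter.
  echo "${backends[$((i % ${#backends[@]}))]}"
  i=$((i + 1))
}
for req in 1 2 3 4; do
  printf 'request %s -> ' "$req"
  pick_backend
done
```
\nEach request lands on the next backend in the list, wrapping around after the last one; this per-Service rotation is the behavior kube-proxy provides in its default mode.\n\n## 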
Cluster Deployment\n\n### Cluster Layout\n\n| Hostname |   IP address   | Packages                                                      |\n| :----: | :------------: | ------------------------------------------------------------ |\n| master | 192.168.52.211 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |\n| node1  | 192.168.52.212 | kubelet, kube-proxy, docker, etcd, flannel                   |\n| node2  | 192.168.52.213 | kubelet, kube-proxy, docker, etcd, flannel                   |\n\n### Self-Signed Certificates\n\n|   Component    |             Certificates used              |\n| :------------: | :----------------------------------------: |\n|      etcd      |     ca.pem, server.pem, server-key.pem     |\n|    flannel     |     ca.pem, server.pem, server-key.pem     |\n| kube-apiserver |     ca.pem, server.pem, server-key.pem     |\n|    kubelet     |             ca.pem, ca-key.pem             |\n|   kube-proxy   | ca.pem, kube-proxy.pem, kube-proxy-key.pem |\n|    kubectl     |      ca.pem, admin.pem, admin-key.pem      |\n\n### Deploying the etcd Cluster\n\n1) Create the working and certificate directories\n\n```shell\nmkdir -p k8s/etcd-cert\n```\n\n2) Download the cfssl certificate tools\n\n```shell\ncat cfssl.sh\ncurl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl\ncurl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson\ncurl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo\nchmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo\n```\n\n3) Generate the certificates\n\n```shell\ncd k8s/etcd-cert\n# cat etcd-cert.sh\n# Define the CA config\ncat > ca-config.json <<EOF\n{\n  \"signing\": {\n    \"default\": {\n      \"expiry\": \"87600h\"\n    },\n    \"profiles\": {\n      \"www\": {\n         \"expiry\": \"87600h\",\n         \"usages\": [\n            \"signing\",\n            \"key encipherment\",\n            \"server auth\",\n            \"client auth\"\n        ]\n      }\n    }\n  }\n}\nEOF\n# CA signing request\ncat > ca-csr.json <<EOF\n{\n    \"CN\": \"etcd CA\",\n    \"key\": {\n        \"algo\": \"rsa\",\n        
\"size\": 2048\n    },\n    \"names\": [\n        {\n            \"C\": \"CN\",\n            \"L\": \"Beijing\",\n            \"ST\": \"Beijing\"\n        }\n    ]\n}\nEOF\n# Generate the CA certificate: produces ca-key.pem and ca.pem\ncfssl gencert -initca ca-csr.json | cfssljson -bare ca -\n\n#-----------------------\n# Server certificate for communication among the three etcd nodes\ncat > server-csr.json <<EOF\n{\n    \"CN\": \"etcd\",\n    \"hosts\": [\n    \"192.168.52.211\",\n    \"192.168.52.212\",\n    \"192.168.52.213\"\n    ],\n    \"key\": {\n        \"algo\": \"rsa\",\n        \"size\": 2048\n    },\n    \"names\": [\n        {\n            \"C\": \"CN\",\n            \"L\": \"BeiJing\",\n            \"ST\": \"BeiJing\"\n        }\n    ]\n}\nEOF\n# Generate the etcd server certificate: produces server-key.pem and server.pem\ncfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server\n```\n\n4) Download and unpack the etcd release\n\nDownload: https://github.com/etcd-io/etcd/releases\n\n5) Create the etcd working directory, with subdirectories for the config file, binaries, and certificates\n\n```shell\nmkdir -p /opt/etcd/{cfg,bin,ssl}\n```\n\n6) Install the etcd binaries\n\n```shell\ncd etcd-v3.0.10-linux-amd64\nmv etcd etcdctl /opt/etcd/bin\n```\n\n7) Copy the certificates into the etcd working directory\n\n```shell\ncp etcd-cert/*.pem /opt/etcd/ssl\n```\n\n8) Create and run the etcd.sh script to generate the config file. Because there is only one node so far and the other nodes cannot join yet, the script will hang when executed.\n\n```shell\n#!/bin/bash\n# example: ./etcd.sh etcd01 192.168.52.211 etcd02=https://192.168.52.212:2380,etcd03=https://192.168.52.213:2380\n\nETCD_NAME=$1\nETCD_IP=$2\nETCD_CLUSTER=$3\n\nWORK_DIR=/opt/etcd\n\ncat <<EOF >$WORK_DIR/cfg/etcd\n#[Member]\nETCD_NAME=\"${ETCD_NAME}\"\nETCD_DATA_DIR=\"/var/lib/etcd/default.etcd\"\nETCD_LISTEN_PEER_URLS=\"https://${ETCD_IP}:2380\"\nETCD_LISTEN_CLIENT_URLS=\"https://${ETCD_IP}:2379\"\n\n#[Clustering]\nETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://${ETCD_IP}:2380\"\nETCD_ADVERTISE_CLIENT_URLS=\"https://${ETCD_IP}:2379\"\nETCD_INITIAL_CLUSTER=\"etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}\"\nETCD_INITIAL_CLUSTER_TOKEN=\"etcd-cluster\"\nETCD_INITIAL_CLUSTER_STATE=\"new\"\nEOF\n\ncat <<EOF 
>/usr/lib/systemd/system/etcd.service\n[Unit]\nDescription=Etcd Server\nAfter=network.target\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nType=notify\nEnvironmentFile=${WORK_DIR}/cfg/etcd\nExecStart=${WORK_DIR}/bin/etcd \\\n--name=\\${ETCD_NAME} \\\n--data-dir=\\${ETCD_DATA_DIR} \\\n--listen-peer-urls=\\${ETCD_LISTEN_PEER_URLS} \\\n--listen-client-urls=\\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \\\n--advertise-client-urls=\\${ETCD_ADVERTISE_CLIENT_URLS} \\\n--initial-advertise-peer-urls=\\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \\\n--initial-cluster=\\${ETCD_INITIAL_CLUSTER} \\\n--initial-cluster-token=\\${ETCD_INITIAL_CLUSTER_TOKEN} \\\n--initial-cluster-state=new \\\n--cert-file=${WORK_DIR}/ssl/server.pem \\\n--key-file=${WORK_DIR}/ssl/server-key.pem \\\n--peer-cert-file=${WORK_DIR}/ssl/server.pem \\\n--peer-key-file=${WORK_DIR}/ssl/server-key.pem \\\n--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \\\n--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem\nRestart=on-failure\nLimitNOFILE=65536\n\n[Install]\nWantedBy=multi-user.target\nEOF\n\nsystemctl daemon-reload\nsystemctl enable etcd\nsystemctl restart etcd\n```\n\nRun the script. Since the etcd02 and etcd03 nodes are not reachable yet, the resulting errors are expected.\n\n```shell\n./etcd.sh etcd01 192.168.52.211 etcd02=https://192.168.52.212:2380,etcd03=https://192.168.52.213:2380\n```\n\n9) Copy the etcd working directory `/opt/etcd` and the unit file `/usr/lib/systemd/system/etcd.service` to the other nodes, then change the IPs in the config file to each node's own IP\n\n```shell\ncat /opt/etcd/cfg/etcd\n#[Member]\nETCD_NAME=\"etcd02\" # node name; must be unique\nETCD_DATA_DIR=\"/var/lib/etcd/default.etcd\"\nETCD_LISTEN_PEER_URLS=\"https://192.168.52.212:2380\" # 
change to this node's IP\nETCD_LISTEN_CLIENT_URLS=\"https://192.168.52.212:2379\"\n\n#[Clustering]\nETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://192.168.52.212:2380\"\nETCD_ADVERTISE_CLIENT_URLS=\"https://192.168.52.212:2379\"\n# The lines below are identical on every node; do not modify them\nETCD_INITIAL_CLUSTER=\"etcd01=https://192.168.52.211:2380,etcd02=https://192.168.52.212:2380,etcd03=https://192.168.52.213:2380\"\nETCD_INITIAL_CLUSTER_TOKEN=\"etcd-cluster\"\nETCD_INITIAL_CLUSTER_STATE=\"new\"\n```\n\n10) Check the cluster health\n\n```shell\ncd /opt/etcd/ssl\n/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints=\"https://192.168.52.211:2379,https://192.168.52.212:2379,https://192.168.52.213:2379\" cluster-health\n```\n\n### Installing Docker on the Node hosts\n\nSteps omitted; see the earlier articles.\n\n### Installing the flannel network component on the Node hosts\n\n1) On the master node, write the overlay network configuration into etcd\n\n```shell\n/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints=\"https://192.168.52.211:2379,https://192.168.52.212:2379,https://192.168.52.213:2379\" set /coreos.com/network/config '{\"Network\":\"172.17.0.0/16\",\"Backend\":{\"Type\":\"vxlan\"}}'\n```\n\n2) Upload the flannel package to every node host and unpack it\n\n```shell\ntar zxvf flannel-v0.10.0-linux-amd64.tar.gz\n```\n\n3) Create the k8s working directory\n\n```shell\nmkdir -p /opt/kubernetes/{cfg,bin,ssl}\n# install the binaries\nmv mk-docker-opts.sh flanneld /opt/kubernetes/bin/\n```\n\n4) The flannel.sh startup script\n\n```shell\n#!/bin/bash\n\nETCD_ENDPOINTS=${1:-\"http://127.0.0.1:2379\"}\n\ncat <<EOF >/opt/kubernetes/cfg/flanneld\nFLANNEL_OPTIONS=\"--etcd-endpoints=${ETCD_ENDPOINTS} \\\n-etcd-cafile=/opt/etcd/ssl/ca.pem \\\n-etcd-certfile=/opt/etcd/ssl/server.pem \\\n-etcd-keyfile=/opt/etcd/ssl/server-key.pem\"\nEOF\n\ncat <<EOF >/usr/lib/systemd/system/flanneld.service\n[Unit]\nDescription=Flanneld overlay address etcd agent\nAfter=network-online.target network.target\nBefore=docker.service\n\n[Service]\nType=notify\nEnvironmentFile=/opt/kubernetes/cfg/flanneld\nExecStart=/opt/kubernetes/bin/flanneld --ip-masq 
\\$FLANNEL_OPTIONS\nExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target\nEOF\n\nsystemctl daemon-reload\nsystemctl enable flanneld\nsystemctl restart flanneld\n```\n\nRun the script\n\n```shell\n./flannel.sh https://192.168.52.211:2379,https://192.168.52.212:2379,https://192.168.52.213:2379\n```\n\n5) Configure Docker to use flannel's subnet\n\n```shell\nvim /usr/lib/systemd/system/docker.service\n# change the [Service] section as follows\nEnvironmentFile=/run/flannel/subnet.env\nExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock\n# inspect the generated subnet file\ncat /run/flannel/subnet.env\nDOCKER_OPT_BIP=\"--bip=172.17.51.1/24\"\nDOCKER_OPT_IPMASQ=\"--ip-masq=false\"\nDOCKER_OPT_MTU=\"--mtu=1450\"\nDOCKER_NETWORK_OPTIONS=\" --bip=172.17.51.1/24 --ip-masq=false --mtu=1450\"\n# restart docker\nsystemctl daemon-reload\nsystemctl restart docker\n```\n\n### Deploying the Master Components\n\n1) Create a directory for the apiserver certificates\n\n```shell\nmkdir k8s-cert\ncd k8s-cert\n```\n\n2) Create and run the certificate-generation script\n\n```shell\nvim k8s-cert.sh\n\ncat > ca-config.json <<EOF\n{\n  \"signing\": {\n    \"default\": {\n      \"expiry\": \"87600h\"\n    },\n    \"profiles\": {\n      \"kubernetes\": {\n         \"expiry\": \"87600h\",\n         \"usages\": [\n            \"signing\",\n            \"key encipherment\",\n            \"server auth\",\n            \"client auth\"\n        ]\n      }\n    }\n  }\n}\nEOF\n\ncat > ca-csr.json <<EOF\n{\n    \"CN\": \"kubernetes\",\n    \"key\": {\n        \"algo\": \"rsa\",\n        \"size\": 2048\n    },\n    \"names\": [\n        {\n            \"C\": \"CN\",\n            \"L\": \"Beijing\",\n            \"ST\": \"Beijing\",\n            \"O\": \"k8s\",\n            \"OU\": \"System\"\n        }\n    ]\n}\nEOF\n\ncfssl gencert -initca ca-csr.json | cfssljson -bare ca -\n\n#-----------------------\n\ncat > server-csr.json <<EOF\n{\n    \"CN\": \"kubernetes\",\n    \"hosts\": [\n      \"10.0.0.1\",\n      \"127.0.0.1\",\n
      \"192.168.52.211\",\n      \"kubernetes\",\n      \"kubernetes.default\",\n      \"kubernetes.default.svc\",\n      \"kubernetes.default.svc.cluster\",\n      \"kubernetes.default.svc.cluster.local\"\n    ],\n    \"key\": {\n        \"algo\": \"rsa\",\n        \"size\": 2048\n    },\n    \"names\": [\n        {\n            \"C\": \"CN\",\n            \"L\": \"BeiJing\",\n            \"ST\": \"BeiJing\",\n            \"O\": \"k8s\",\n            \"OU\": \"System\"\n        }\n    ]\n}\nEOF\n# With multiple masters, list each master's IP in the hosts array above\n\ncfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server\n\n#-----------------------\n\ncat > admin-csr.json <<EOF\n{\n  \"CN\": \"admin\",\n  \"hosts\": [],\n  \"key\": {\n    \"algo\": \"rsa\",\n    \"size\": 2048\n  },\n  \"names\": [\n    {\n      \"C\": \"CN\",\n      \"L\": \"BeiJing\",\n      \"ST\": \"BeiJing\",\n      \"O\": \"system:masters\",\n      \"OU\": \"System\"\n    }\n  ]\n}\nEOF\n\ncfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin\n\n#-----------------------\n\ncat > kube-proxy-csr.json <<EOF\n{\n  \"CN\": \"system:kube-proxy\",\n  \"hosts\": [],\n  \"key\": {\n    \"algo\": \"rsa\",\n    \"size\": 2048\n  },\n  \"names\": [\n    {\n      \"C\": \"CN\",\n      \"L\": \"BeiJing\",\n      \"ST\": \"BeiJing\",\n      \"O\": \"k8s\",\n      \"OU\": \"System\"\n    }\n  ]\n}\nEOF\n\ncfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy\n```\n\n3) Create the master's k8s working directory and copy the apiserver certificates into it\n\n```shell\nmkdir -p /opt/kubernetes/{cfg,bin,ssl}\ncp k8s-cert/*.pem /opt/kubernetes/ssl/\n```\n\n4) Upload and unpack the kubernetes server package\n\n```shell\ntar -zxvf kubernetes-server-linux-amd64.tar.gz\ncd kubernetes/server/bin/\ncp kube-apiserver kubectl kube-controller-manager kube-scheduler 
/opt/kubernetes/bin/\n```\n\n5) Create the bootstrap token file\n\n```shell\n# generate a random token first\nhead -c 16 /dev/urandom | od -An -t x | tr -d ' '\n4c020df4619ebb37679cc96a90ed5c5b\n# create the token.csv file\nvim /opt/kubernetes/cfg/token.csv\n4c020df4619ebb37679cc96a90ed5c5b,kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"\n```\n\n6) Prepare to start the apiserver, scheduler, and controller-manager\n\n```shell\n# upload the following three scripts\ncat scheduler.sh\n#!/bin/bash\n\nMASTER_ADDRESS=$1\n\ncat <<EOF >/opt/kubernetes/cfg/kube-scheduler\nKUBE_SCHEDULER_OPTS=\"--logtostderr=true \\\\\n--v=4 \\\\\n--master=${MASTER_ADDRESS}:8080 \\\\\n--leader-elect\"\nEOF\n\ncat <<EOF >/usr/lib/systemd/system/kube-scheduler.service\n[Unit]\nDescription=Kubernetes Scheduler\nDocumentation=https://github.com/kubernetes/kubernetes\n[Service]\nEnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler\nExecStart=/opt/kubernetes/bin/kube-scheduler \\$KUBE_SCHEDULER_OPTS\nRestart=on-failure\n[Install]\nWantedBy=multi-user.target\nEOF\n\nsystemctl daemon-reload\nsystemctl enable kube-scheduler\nsystemctl restart kube-scheduler\n\ncat apiserver.sh\n#!/bin/bash\nMASTER_ADDRESS=$1\nETCD_SERVERS=$2\n\ncat <<EOF >/opt/kubernetes/cfg/kube-apiserver\nKUBE_APISERVER_OPTS=\"--logtostderr=true \\\\\n--v=4 \\\\\n--etcd-servers=${ETCD_SERVERS} \\\\\n--bind-address=${MASTER_ADDRESS} \\\\\n--secure-port=6443 \\\\\n--advertise-address=${MASTER_ADDRESS} \\\\\n--allow-privileged=true \\\\\n--service-cluster-ip-range=10.0.0.0/24 \\\\\n--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\\\\n--authorization-mode=RBAC,Node \\\\\n--kubelet-https=true \\\\\n--enable-bootstrap-token-auth \\\\\n--token-auth-file=/opt/kubernetes/cfg/token.csv \\\\\n--service-node-port-range=30000-50000 \\\\\n--tls-cert-file=/opt/kubernetes/ssl/server.pem \\\\\n--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\\\\n--client-ca-file=/opt/kubernetes/ssl/ca.pem \\\\\n--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem 
\\\\\n--etcd-cafile=/opt/etcd/ssl/ca.pem \\\\\n--etcd-certfile=/opt/etcd/ssl/server.pem \\\\\n--etcd-keyfile=/opt/etcd/ssl/server-key.pem\"\nEOF\n\ncat <<EOF >/usr/lib/systemd/system/kube-apiserver.service\n[Unit]\nDescription=Kubernetes API Server\nDocumentation=https://github.com/kubernetes/kubernetes\n[Service]\nEnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver\nExecStart=/opt/kubernetes/bin/kube-apiserver \\$KUBE_APISERVER_OPTS\nRestart=on-failure\n[Install]\nWantedBy=multi-user.target\nEOF\n\nsystemctl daemon-reload\nsystemctl enable kube-apiserver\nsystemctl restart kube-apiserver\n\ncat controller-manager.sh\n#!/bin/bash\nMASTER_ADDRESS=$1\n\ncat <<EOF >/opt/kubernetes/cfg/kube-controller-manager\nKUBE_CONTROLLER_MANAGER_OPTS=\"--logtostderr=true \\\\\n--v=4 \\\\\n--master=${MASTER_ADDRESS}:8080 \\\\\n--leader-elect=true \\\\\n--address=127.0.0.1 \\\\\n--service-cluster-ip-range=10.0.0.0/24 \\\\\n--cluster-name=kubernetes \\\\\n--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\\\\n--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\\\\n--root-ca-file=/opt/kubernetes/ssl/ca.pem \\\\\n--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\\\\n--experimental-cluster-signing-duration=87600h0m0s\"\nEOF\n\ncat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service\n[Unit]\nDescription=Kubernetes Controller Manager\nDocumentation=https://github.com/kubernetes/kubernetes\n[Service]\nEnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager\nExecStart=/opt/kubernetes/bin/kube-controller-manager \\$KUBE_CONTROLLER_MANAGER_OPTS\nRestart=on-failure\n[Install]\nWantedBy=multi-user.target\nEOF\n\nsystemctl daemon-reload\nsystemctl enable kube-controller-manager\nsystemctl restart kube-controller-manager\n```\n\nRun the scripts to start the services\n\n```shell\n./apiserver.sh 192.168.52.211 https://192.168.52.211:2379,https://192.168.52.212:2379,https://192.168.52.213:2379\n./scheduler.sh 127.0.0.1\n./controller-manager.sh 
127.0.0.1\n```\n\n7) Check the master component status; ok or true means healthy\n\n```shell\n/opt/kubernetes/bin/kubectl get cs\n```\n\n### Installing the Node Components\n\n1) Copy kubelet and kube-proxy from the kubernetes package to both node hosts.\n\n2) On the master, create the kubeconfig directory, add the script below, run it, and copy the generated files to the node hosts.\n\nA kubeconfig file holds the configuration for accessing the cluster\n\n```shell\nmkdir k8s/kubeconfig\ncd k8s/kubeconfig\nvim kubeconfig.sh\n# Create the TLS bootstrapping token\n#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')\nBOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008\n\ncat > token.csv <<EOF\n${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"\nEOF\n\n#----------------------\n\nAPISERVER=$1\nSSL_DIR=$2\n\n# Create the kubelet bootstrapping kubeconfig\nexport KUBE_APISERVER=\"https://$APISERVER:6443\"\n\n# set cluster parameters\nkubectl config set-cluster kubernetes \\\n  --certificate-authority=$SSL_DIR/ca.pem \\\n  --embed-certs=true \\\n  --server=${KUBE_APISERVER} \\\n  --kubeconfig=bootstrap.kubeconfig\n\n# set client credentials\nkubectl config set-credentials kubelet-bootstrap \\\n  --token=${BOOTSTRAP_TOKEN} \\\n  --kubeconfig=bootstrap.kubeconfig\n\n# set the context\nkubectl config set-context default \\\n  --cluster=kubernetes \\\n  --user=kubelet-bootstrap \\\n  --kubeconfig=bootstrap.kubeconfig\n\n# use it as the default context\nkubectl config use-context default --kubeconfig=bootstrap.kubeconfig\n\n#----------------------\n\n# Create the kube-proxy kubeconfig\n\nkubectl config set-cluster kubernetes \\\n  --certificate-authority=$SSL_DIR/ca.pem \\\n  --embed-certs=true \\\n  --server=${KUBE_APISERVER} \\\n  --kubeconfig=kube-proxy.kubeconfig\n\nkubectl config set-credentials kube-proxy \\\n  --client-certificate=$SSL_DIR/kube-proxy.pem \\\n  --client-key=$SSL_DIR/kube-proxy-key.pem \\\n  --embed-certs=true \\\n  --kubeconfig=kube-proxy.kubeconfig\n\nkubectl config set-context default \\\n  --cluster=kubernetes \\\n  --user=kube-proxy \\\n  --kubeconfig=kube-proxy.kubeconfig\n\nkubectl config use-context default --kubeconfig=kube-proxy.kubeconfig\n```\n\nRun the script and copy the generated kubeconfig files to both node hosts\n\n```shell\n./kubeconfig.sh 
192.168.52.211 /root/k8s/k8s-cert\n# copy the generated bootstrap.kubeconfig and kube-proxy.kubeconfig to /opt/kubernetes/cfg on each node host\n# create the bootstrap role binding that lets kubelets connect to the apiserver and request certificate signing\nkubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap\n```\n\n3) Install and start the kubelet on node1\n\n```shell\ncat kubelet.sh\n#!/bin/bash\nNODE_ADDRESS=$1\nDNS_SERVER_IP=${2:-\"10.0.0.2\"}\n\ncat <<EOF >/opt/kubernetes/cfg/kubelet\nKUBELET_OPTS=\"--logtostderr=true \\\\\n--v=4 \\\\\n--hostname-override=${NODE_ADDRESS} \\\\\n--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\\\\n--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\\\\n--config=/opt/kubernetes/cfg/kubelet.config \\\\\n--cert-dir=/opt/kubernetes/ssl \\\\\n--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0\"\nEOF\n\ncat <<EOF >/opt/kubernetes/cfg/kubelet.config\nkind: KubeletConfiguration\napiVersion: kubelet.config.k8s.io/v1beta1\naddress: ${NODE_ADDRESS}\nport: 10250\nreadOnlyPort: 10255\ncgroupDriver: cgroupfs\nclusterDNS:\n- ${DNS_SERVER_IP}\nclusterDomain: cluster.local.\nfailSwapOn: false\nauthentication:\n  anonymous:\n    enabled: true\nEOF\n\ncat <<EOF >/usr/lib/systemd/system/kubelet.service\n[Unit]\nDescription=Kubernetes Kubelet\nAfter=docker.service\nRequires=docker.service\n\n[Service]\nEnvironmentFile=/opt/kubernetes/cfg/kubelet\nExecStart=/opt/kubernetes/bin/kubelet \\$KUBELET_OPTS\nRestart=on-failure\nKillMode=process\n\n[Install]\nWantedBy=multi-user.target\nEOF\n\nsystemctl daemon-reload\nsystemctl enable kubelet\nsystemctl restart kubelet\n```\n\n```shell\n./kubelet.sh 192.168.52.212\n```\n\n4) On the master, node1's certificate request is now visible\n\n```shell\nkubectl get csr\nNAME                                                   AGE    REQUESTOR           CONDITION\nnode-csr-bPGome_z3ZBCFpug_FyVVoOXCYFuID6MmCO5ymtDQpQ   2m1s   kubelet-bootstrap   Pending\n# Pending: waiting for the cluster to issue this node a certificate\n# approve it on the master\nkubectl certificate approve node-csr-bPGome_z3ZBCFpug_FyVVoOXCYFuID6MmCO5ymtDQpQ\n```\n\n5) Start the kube-proxy service on node1\n\n```shell\ncat proxy.sh\n#!/bin/bash\nNODE_ADDRESS=$1\n\ncat <<EOF >/opt/kubernetes/cfg/kube-proxy\nKUBE_PROXY_OPTS=\"--logtostderr=true \\\\\n--v=4 \\\\\n--hostname-override=${NODE_ADDRESS} \\\\\n--cluster-cidr=10.0.0.0/24 \\\\\n--proxy-mode=ipvs \\\\\n--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig\"\nEOF\n\ncat <<EOF >/usr/lib/systemd/system/kube-proxy.service\n[Unit]\nDescription=Kubernetes Proxy\nAfter=network.target\n\n[Service]\nEnvironmentFile=-/opt/kubernetes/cfg/kube-proxy\nExecStart=/opt/kubernetes/bin/kube-proxy \\$KUBE_PROXY_OPTS\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target\nEOF\n\nsystemctl daemon-reload\nsystemctl enable kube-proxy\nsystemctl restart kube-proxy\n```\n\n```shell\n./proxy.sh 192.168.52.212\n```\n\n6) Repeat the steps above on node2\n\n7) On the master, all nodes in the cluster show Ready; the deployment is complete\n\n```shell\nkubectl get node\nNAME             STATUS   ROLES    AGE   VERSION\n192.168.52.212   Ready    <none>   26m   v1.12.3\n192.168.52.213   Ready    <none>   24s   v1.12.3\n```\n\nReposted from: [https://blog.csdn.net/weixin_45682995/article/details/105850999](https://blog.csdn.net/weixin_45682995/article/details/105850999)","categoryId":10,"viewCount":646,"categoryName":"Kubernetes","author":"球接子","authorAvatar":null,"tagIds":[16],"tagNames":["Kubernetes"]}}