Prerequisites for installing and deploying Kubernetes (K8s):
- Prepare the system environment (a reasonably recent Linux kernel is required);
- Install the Docker, kubeadm, kubectl, and kubelet programs on every master and node host and start their services;
- Run the kubeadm initialization on the master host; the remaining nodes then simply join to form the cluster.
Hostname | Host IP | Node role | Installed software
---------|---------|-----------|-------------------
master01 | 192.168.239.130 | master | Docker, kubelet, kubeadm, kubectl
node01 | 192.168.239.131 | master | Docker, kubelet, kubeadm, kubectl
node02 | 192.168.239.132 | node | Docker, kubelet, kubeadm
node03 | 192.168.239.133 | node | Docker, kubelet, kubeadm
I. Host Environment Preparation
1. Configure time synchronization on all nodes
Install chrony to synchronize time over the network (if it is preinstalled, just start the service):
[root@master1 ~]# yum install chrony -y
[root@master1 ~]# systemctl start chronyd
[root@master1 ~]# systemctl restart chronyd   # restarting chronyd forces the time to re-sync
2. Hostname resolution on all nodes
Edit the /etc/hosts file and add name-resolution records for each host (to keep the configuration simple, this test environment resolves node names through the hosts file). The file content is shown below:
cat <<EOF >> /etc/hosts
192.168.239.130 master01.ilinux.io master01
192.168.239.131 node01.ilinux.io node01
192.168.239.132 node02.ilinux.io node02
192.168.239.133 node03.ilinux.io node03
EOF
(Optional) Passwordless SSH login
]# ssh-keygen
]# for myi in node01 node02 node03; do ssh-copy-id root@$myi; done
3. Disable the firewall on all nodes
Stop and disable the iptables, ufw, or firewalld service as appropriate:
[root@master1 ~]# systemctl stop firewalld
[root@master1 ~]# systemctl disable firewalld
On Ubuntu:
root@master2:~# ufw disable
Firewall stopped and disabled on system startup
4. Disable SELinux on all nodes
If SELinux is currently enabled, edit /etc/sysconfig/selinux to disable it, and temporarily set its current state to permissive:
[root@master1 ~]# sed -i 's@^\(SELINUX=\).*@\1disabled@' /etc/sysconfig/selinux
[root@master1 ~]# setenforce 0
Check the SELinux state:
[root@master01 ~]# getenforce
Permissive
(You can also edit the SELINUX=enforcing option in /etc/sysconfig/selinux by hand and change the value to disabled.)
5. Disable swap devices
When deploying the cluster, kubeadm checks in advance whether swap is disabled on the host and aborts the deployment if it is not. So, provided the host has sufficient memory, disable all swap devices; otherwise you will have to pass extra options to the kubeadm init and kubeadm join commands later to ignore the check errors.
1. Turn off swap:
[root@master01 ~]# swapoff -a
2. Comment out the swap mount line in /etc/fstab.
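The two steps above can be combined into a short sketch. The sed expression comments out any uncommented fstab line whose filesystem type is swap; it is shown here against a sample file (the /tmp path and its contents are made up for the demo) so you can review the result before applying the same expression to the real /etc/fstab:

```shell
# Create a sample fstab to experiment on (contents are hypothetical).
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/cl-root /    xfs  defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
EOF

# Disable swap now (requires root, so shown commented out in this sketch):
# swapoff -a

# ...and persistently: comment out every active swap entry.
sed -ri 's@^([^#].*[[:space:]]swap[[:space:]].*)@#\1@' /tmp/fstab.sample
cat /tmp/fstab.sample
```

Once the output looks right, run the same sed command with /etc/fstab as the target.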
6. Enable the IPVS kernel modules
Create the module-loading script /etc/sysconfig/modules/ipvs.modules to set the kernel modules to be loaded automatically. The file content is as follows:
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for mod in $(ls $ipvs_mods_dir | grep -o "^[^.]*"); do
    /sbin/modinfo -F filename $mod &> /dev/null
    if [ $? -eq 0 ]; then
        /sbin/modprobe $mod
    fi
done
Make the file executable and load the modules into the running system by hand:
]# chmod +x /etc/sysconfig/modules/ipvs.modules
]# bash /etc/sysconfig/modules/ipvs.modules
7. Configure bridge-nf-call-iptables
Set the contents of /proc/sys/net/bridge/bridge-nf-call-iptables and /proc/sys/net/bridge/bridge-nf-call-ip6tables to 1 by creating the file /etc/sysctl.d/k8s.conf with the following content:
cat <<EOF > /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the configuration
]# sysctl -p /etc/sysctl.d/k8s.conf
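A quick way to confirm the settings took effect is to read the values back from /proc. The two bridge keys only exist after the br_netfilter module is loaded, so ip_forward is the simplest smoke test:

```shell
# Should print 1 once the sysctl configuration above has been applied.
cat /proc/sys/net/ipv4/ip_forward
```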
II. Installing the Software
All hosts need the same steps.
1. Install Docker
To install Docker, refer to this article: Docker安装部署方法总结【二】
2. Configure Docker
1. Install and enable the Docker container runtime on each node, and configure a registry mirror (accelerator) service:
- The Alibaba Cloud registry mirror is recommended;
- The cgroup driver must be set to systemd;
]# mkdir -p /etc/docker
]# tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors": ["https://ubylzbxo.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn","https://registry.docker-cn.com"]
}
EOF
2. Since version 1.13, Docker automatically sets the default policy of the iptables FORWARD chain to DROP, which can break the packet forwarding that a Kubernetes cluster relies on. Therefore, after the docker service starts, reset the FORWARD chain's default policy to ACCEPT. To do so, edit /usr/lib/systemd/system/docker.service and add the following line right after the "ExecStart=/usr/bin/dockerd" line:
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
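Note that files under /usr/lib/systemd/system are overwritten on package upgrades; a systemd drop-in achieves the same result and survives upgrades. A sketch (the drop-in file name is arbitrary):

```ini
# /etc/systemd/system/docker.service.d/10-forward-accept.conf
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
```

Either way, a daemon-reload and a Docker restart are required afterwards.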
3. Apply the configuration
After changing the Docker configuration file, run daemon-reload and restart Docker for it to take effect; you can verify with the docker info command:
[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl restart docker
3. Install kubeadm, kubectl, and kubelet
Because Kubernetes development is led by Google, its program images (e.g. on gcr.io) cannot be reached from mainland China and have to be fetched through domestic mirrors such as those of Alibaba or Tsinghua University. See the Alibaba Cloud Kubernetes mirror page, follow its instructions to configure the repository, then install.
References:
1. For Debian/Ubuntu
apt-get update && apt-get install -y apt-transport-https curl
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl   # installs the latest version by default
root@master2:/etc/docker# apt-get install -y kubectl=1.18.1-00 kubeadm=1.18.1-00 kubelet=1.18.1-00   # install a specific version
2. For CentOS / RHEL / Fedora
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the related packages:
]# yum install -y kubelet kubeadm kubectl   # installs the latest version by default
]# yum install -y kubectl-1.18.1 kubeadm-1.18.1 kubelet-1.18.1   # install specific versions
Check which versions are available:
yum list kubelet kubeadm kubectl --showduplicates | sort -r
Note: since upstream does not expose a sync mechanism, the GPG check on the index may fail; in that case install with yum install -y --nogpgcheck kubelet kubeadm kubectl
4. Enable start on boot on all nodes
~]# systemctl enable docker kubelet
(Optional) Restart script
As a script: vi restart.sh
#!/bin/bash
ser=("etcd" "docker" "kube-apiserver" "kube-controller-manager" "kube-scheduler" "kubelet" "kube-proxy")
for i in ${ser[@]}; do
    systemctl restart $i
    systemctl enable $i
    systemctl status $i
done
III. Initializing the Control Plane
1. View the initialization command's options with --help
[root@master1 lib]# kubeadm init --help
Run this command in order to set up the Kubernetes control plane

The "init" command executes the following phases:
```
preflight                     Run pre-flight checks
kubelet-start                 Write kubelet settings and (re)start the kubelet
certs                         Certificate generation
  /ca                           Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                    Generate the certificate for serving the Kubernetes API
  /apiserver-kubelet-client     Generate the certificate for the API server to connect to kubelet
  /front-proxy-ca               Generate the self-signed CA to provision identities for front proxy
  /front-proxy-client           Generate the certificate for the front proxy client
  /etcd-ca                      Generate the self-signed CA to provision identities for etcd
  /etcd-server                  Generate the certificate for serving etcd
  /etcd-peer                    Generate the certificate for etcd nodes to communicate with each other
  /etcd-healthcheck-client      Generate the certificate for liveness probes to healthcheck etcd
  /apiserver-etcd-client        Generate the certificate the apiserver uses to access etcd
  /sa                           Generate a private key for signing service account tokens along with its public key
kubeconfig                    Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin                        Generate a kubeconfig file for the admin to use and for kubeadm itself
  /kubelet                      Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  /controller-manager           Generate a kubeconfig file for the controller manager to use
  /scheduler                    Generate a kubeconfig file for the scheduler to use
control-plane                 Generate all static Pod manifest files necessary to establish the control plane
  /apiserver                    Generates the kube-apiserver static Pod manifest
  /controller-manager           Generates the kube-controller-manager static Pod manifest
  /scheduler                    Generates the kube-scheduler static Pod manifest
etcd                          Generate static Pod manifest file for local etcd
  /local                        Generate the static Pod manifest file for a local, single-node local etcd instance
upload-config                 Upload the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm                      Upload the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet                      Upload the kubelet component config to a ConfigMap
upload-certs                  Upload certificates to kubeadm-certs
mark-control-plane            Mark a node as a control-plane
bootstrap-token               Generates bootstrap tokens used to join a node to a cluster
kubelet-finalize              Updates settings relevant to the kubelet after TLS bootstrap
  /experimental-cert-rotation   Enable kubelet client certificate rotation
addon                         Install required addons for passing Conformance tests
  /coredns                      Install the CoreDNS addon to a Kubernetes cluster
  /kube-proxy                   Install the kube-proxy addon to a Kubernetes cluster
```

Usage:
  kubeadm init [flags]
  kubeadm init [command]

Available Commands:
  phase       Use this command to invoke single phase of the init workflow

Flags:
      --apiserver-advertise-address string   The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
      --apiserver-bind-port int32            Port for the API Server to bind to. (default 6443)
      --apiserver-cert-extra-sans strings    Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.
      --cert-dir string                      The path where to save and store the certificates. (default "/etc/kubernetes/pki")
      --certificate-key string               Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
      --config string                        Path to a kubeadm configuration file.
      --control-plane-endpoint string        Specify a stable IP address or DNS name for the control plane.
      --cri-socket string                    Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
      --dry-run                              Don't apply any changes; just output what would be done.
  -k, --experimental-kustomize string        The path where kustomize patches for static pod manifests are stored.
      --feature-gates string                 A set of key=value pairs that describe feature gates for various features. Options are: IPv6DualStack=true|false (ALPHA - default=false) PublicKeysECDSA=true|false (ALPHA - default=false)
  -h, --help                                 help for init
      --ignore-preflight-errors strings      A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
      --image-repository string              Choose a container registry to pull control plane images from (default "k8s.gcr.io")
      --kubernetes-version string            Choose a specific Kubernetes version for the control plane. (default "stable-1")
      --node-name string                     Specify the node name.
      --pod-network-cidr string              Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
      --service-cidr string                  Use alternative range of IP address for service VIPs. (default "10.96.0.0/12")
      --service-dns-domain string            Use alternative domain for services, e.g. "myorg.internal". (default "cluster.local")
      --skip-certificate-key-print           Don't print the key used to encrypt the control-plane certificates.
      --skip-phases strings                  List of phases to be skipped
      --skip-token-print                     Skip printing of the default bootstrap token generated by 'kubeadm init'.
      --token string                         The token to use for establishing bidirectional trust between nodes and control-plane nodes. The format is [a-z0-9]{6}\.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef
      --token-ttl duration                   The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire (default 24h0m0s)
      --upload-certs                         Upload control-plane certificates to the kubeadm-certs Secret.

Global Flags:
      --add-dir-header           If true, adds the file directory to the header
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm init [command] --help" for more information about a command.
2. Run the initialization
Option 1:
The initialization command (adjust it to your own environment using the option explanations below):
[root@master1 ~]# kubeadm init \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.19.2 \
    --control-plane-endpoint 192.168.239.130:6443 \
    --apiserver-advertise-address 192.168.239.130 \
    --pod-network-cidr 10.244.0.0/16 \
    --token-ttl 0
What the options mean (Chinese documentation: https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/):
- --image-repository: the image registry to pull from;
- --kubernetes-version: the version of the Kubernetes components, which should match the kubelet's version;
- --control-plane-endpoint: a stable endpoint (IP address or DNS name) for the control plane; it becomes the API Server address in the kubeconfig files used by cluster administrators and cluster components. It can be omitted for a single-control-plane deployment;
- --pod-network-cidr: the Pod network's address range in CIDR notation; the flannel network plugin defaults to 10.244.0.0/16, and the Calico plugin defaults to 192.168.0.0/16;
- --service-cidr: the Service network's address range in CIDR notation, 10.96.0.0/12 by default;
- --apiserver-advertise-address: the IP address the apiserver advertises to the other components, normally the master node's address on the cluster-internal network; 0.0.0.0 means all available addresses on the node;
- --token-ttl: the lifetime of the shared token, 24 hours by default; 0 means it never expires. To keep a token leak (through insecure storage, for example) from endangering cluster security, setting an expiry is recommended;
Option 2:
kubeadm can also load its configuration from a file, which exposes richer deployment options. The following example matches the command-line settings above, except that it explicitly sets the kube-proxy mode to ipvs and lets you change the registry used for the system images by editing the imageRepository value.
For the complete kubeadm init reference, see the official documentation: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
Create the configuration file kubeadm-config.yaml with content like the following (for reference only):
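The original example file is not reproduced here, so below is a minimal sketch consistent with the settings described above. It assumes the kubeadm v1beta2 config API and reuses the registry, version, endpoint, and CIDR values from the command-line variant; adapt them to your environment:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.2
imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: "192.168.239.130:6443"
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
```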
Official reference example: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
Initialize kubeadm from the configuration file:
]# kubeadm init --config kubeadm-config.yaml --upload-certs
1. If initialization reports errors, you can add parameters to /etc/sysconfig/kubelet to ignore them.
[root@master1 package]# more /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=
For example, setting KUBELET_EXTRA_ARGS="--fail-swap-on=false" ignores the swap error, i.e. the kubelet no longer treats enabled swap as fatal. You can also add the option --ignore-preflight-errors=Swap when running the initialization.
2. If you see an error like the following:
May 14 13:44:55 master1 kubelet: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 13:44:55 master1 kubelet: F0514 13:44:55.571137 16841 server.go:199] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
This is because initialization has not finished yet and the relevant configuration file has not been generated. It can be ignored; it resolves itself once initialization succeeds.
3. To start initialization over:
kubeadm reset
4. A successful initialization prints output like the following (covering the preflight environment checks, the generation of certificates, private keys, configuration files, and Pod manifest files, addon deployment, and so on):
W0514 16:26:38.198975    3142 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-api.ilinux.io] and IPs [10.96.0.1 192.168.222.150]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.222.150 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.222.150 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0514 16:28:24.490184    3142 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0514 16:28:24.494827    3142 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.512491 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ezl1f1.ha4fc5werojbu359
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-api.ilinux.io:6443 --token ezl1f1.ha4fc5werojbu359 \
    --discovery-token-ca-cert-hash sha256:c84d70dd5e0ce8f9305f09955c524a8aac9bb189445b58d026dba78d9981ed43 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-api.ilinux.io:6443 --token ezl1f1.ha4fc5werojbu359 \
    --discovery-token-ca-cert-hash sha256:c84d70dd5e0ce8f9305f09955c524a8aac9bb189445b58d026dba78d9981ed43
3. Steps after successful initialization
Simply run the commands the installation output tells you to run:
[root@master1 opt]# mkdir -p $HOME/.kube
[root@master1 opt]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 opt]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@master1 opt]# ll -h $HOME/.kube/config
-rw------- 1 root root 5.4K May 14 16:58 /root/.kube/config
Test the kubectl client command and check the current state of the cluster components:
]# kubectl get componentstatus
4. Join the worker nodes to the cluster
On each node, run the join command generated during cluster initialization to add the node to the cluster (the token is valid for 24 hours; if it has expired, regenerate the join command with: ~]# kubeadm token create --print-join-command):
[root@node1 system]# kubeadm join k8s-api.ilinux.io:6443 --token ezl1f1.ha4fc5werojbu359 \
> --discovery-token-ca-cert-hash sha256:c84d70dd5e0ce8f9305f09955c524a8aac9bb189445b58d026dba78d9981ed43
W0514 17:42:23.395760    4416 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
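The sha256:... value passed as --discovery-token-ca-cert-hash is simply a SHA-256 digest of the cluster CA's public key, so it can be recomputed at any time from /etc/kubernetes/pki/ca.crt. The sketch below shows the derivation against a throwaway self-signed certificate (the /tmp paths and the CN are made up for the demo); on a real control plane, point the pipeline at the cluster's ca.crt instead:

```shell
# Generate a throwaway CA certificate to demonstrate on (demo only).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null

# Extract the public key, DER-encode it, and hash it -- the same value
# kubeadm prints after "sha256:" in the join command.
hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print $NF}')
echo "sha256:$hash"
```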
5. Join additional master (control plane) nodes to the cluster
Newer versions of kubeadm already generate the master join command automatically when initialization completes, making the steps below unnecessary. For older versions (below 1.18.1), use the following steps to join a master node to the cluster.
The control plane nodes need to share the certificates and private keys of the Kubernetes CA, the etcd CA, the front proxy CA, and so on. We can either distribute the relevant files by hand from the first control plane node to the masters being added, or use a dedicated kubeadm init phase to do it:
1. On the first control plane node, obtain the certificate key:
[root@master1 package]# kubeadm init phase upload-certs --upload-certs
W0623 19:52:39.211426   99823 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
095fea35c70a8610430646d9c2a9876f62d03545c054394ed3ac15cd6a5f16c3
2. On the host to be added as a master node, run the join command (note: base it on the command printed after cluster initialization):
kubeadm join k8s-api.ilinux.io:6443 --token ezl1f1.ha4fc5werojbu359 \
    --discovery-token-ca-cert-hash sha256:c84d70dd5e0ce8f9305f09955c524a8aac9bb189445b58d026dba78d9981ed43 \
    --control-plane --certificate-key 095fea35c70a8610430646d9c2a9876f62d03545c054394ed3ac15cd6a5f16c3
IV. Installing and Configuring a Network Plugin
Flannel
1. Flannel's project on GitHub
Address: https://github.com/coreos/flannel
2. Install and deploy flannel
You can download and apply flannel directly, as follows:
Important: to make later network planning in Kubernetes more convenient, it is recommended to add the "DirectRouting": true option to the kube-flannel.yml configuration file, as shown below:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan",
      "DirectRouting": true
    }
  }
[root@master1 opt]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
(Because of network restrictions in mainland China, kube-flannel.yml and the flannel image may not be directly downloadable; I have put together a kube-flannel.yml configuration file for reference if you need it.)
3. Verify flannel
The routing table below shows flannel's VXLAN backend with DirectRouting in effect:
]# ip route show
default via 192.168.239.254 dev ens33 proto static metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 192.168.239.131 dev ens33
10.244.2.0/24 via 192.168.239.132 dev ens33
10.244.3.0/24 via 192.168.239.133 dev ens33
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.239.0/24 dev ens33 proto kernel scope link src 192.168.239.130 metric 100
Check the node status; STATUS Ready as shown below means the cluster is healthy. At this point Kubernetes has been installed and deployed successfully!
]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 17h v1.19.2
node01 Ready master 17h v1.19.2
node02 Ready <none> 17h v1.19.2
node03 Ready <none> 17h v1.19.2
IPVS
Using IPVS for traffic between Pods and Services improves forwarding efficiency. Edit kube-proxy's ConfigMap and change mode to "ipvs", as follows:
~]# kubectl edit configmap kube-proxy -n kube-system
iptables:
masqueradeAll: false
masqueradeBit: null
minSyncPeriod: 0s
syncPeriod: 0s
ipvs:
excludeCIDRs: null
minSyncPeriod: 0s
scheduler: ""
strictARP: false
syncPeriod: 0s
tcpFinTimeout: 0s
tcpTimeout: 0s
udpTimeout: 0s
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: "ipvs"