Preparing the environment
Use Vagrant to create three virtual machines. Their basic information is as follows:
hostname | ip | OS |
---|---|---|
kube-master | 192.168.33.10 | centos/7 |
kube-node-1 | 192.168.33.11 | centos/7 |
kube-node-2 | 192.168.33.12 | centos/7 |
The Vagrantfile looks like this:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://vagrantcloud.com/search.
config.vm.box = "centos/7"
config.vm.hostname = "kube-master"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# NOTE: This will enable public access to the opened port
config.vm.network "forwarded_port", guest: 8001, host: 8001
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine and only allow access
# via 127.0.0.1 to disable public access
# config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
# Create a private network, which allows host-only access to the machine
# using a specific IP.
config.vm.network "private_network", ip: "192.168.33.10"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network "public_network"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
config.vm.provider "virtualbox" do |vb|
# Display the VirtualBox GUI when booting the machine
vb.gui = false
# Customize the amount of memory on the VM:
vb.memory = "2048"
vb.cpus = 2
end
#
# View the documentation for the provider you are using for more
# information on available options.
# Enable provisioning with a shell script. Additional provisioners such as
# Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
# documentation for more information about their specific syntax and use.
# config.vm.provision "shell", inline: <<-SHELL
# apt-get update
# apt-get install -y apache2
# SHELL
end
The above is only the configuration for kube-master, whose cpus must be at least 2. Adjust the configuration of the other machines as needed, e.g. hostname, vb.memory and the private_network IP.
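If you prefer to drive all three machines from a single Vagrantfile instead of keeping one copy per machine, a minimal multi-machine sketch could look like the following (hostnames and IPs taken from the table above; the memory and cpu values are just the ones used here):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  # hostname => private_network IP, as in the table above
  nodes = {
    "kube-master" => "192.168.33.10",
    "kube-node-1" => "192.168.33.11",
    "kube-node-2" => "192.168.33.12",
  }

  nodes.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: ip
      node.vm.provider "virtualbox" do |vb|
        vb.memory = "2048"
        vb.cpus = 2   # the master needs at least 2 CPUs
      end
    end
  end
end
```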
Installation
Operations required on every node
- Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
- Disable swap. Strictly speaking you can leave it on, but then kubeadm and related tools will complain; the error can be suppressed with extra command-line flags, but it is simpler to just turn swap off.
To disable it temporarily, run:
swapoff -a
To disable it permanently, edit /etc/fstab and comment out the swap line.
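A one-liner along these lines can do it (a sketch; double-check /etc/fstab afterwards):

```sh
# comment out any fstab entry whose type is swap
sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```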
- Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl enable docker --now
- Networking enhancements (bridge netfilter settings)
Create the file /etc/sysctl.d/k8s.conf and add the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Run the following commands to make the change take effect:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
- Install kubeadm, kubelet and kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
Note: because the cluster is built with Vagrant, whose default interface is a shared NAT interface, you also need to edit /etc/sysconfig/kubelet on each node and set KUBELET_EXTRA_ARGS="--node-ip=<node_ip>" to that node's private_network IP; otherwise kubelet registers the node with the NAT address and subsequent errors follow.
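For instance, on kube-master it could be set like this (a sketch that simply overwrites the file; each node uses its own IP from the table):

```sh
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--node-ip=192.168.33.10"
EOF
systemctl restart kubelet
```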
- Prepare the images used during kubeadm init
Because k8s.gcr.io is blocked, the images cannot be pulled directly; pull them from another registry (registry.cn-hangzhou.aliyuncs.com) first and then tag them with the names kubeadm expects (a scripted sketch follows the image list below). These are the images I deploy:
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE
k8s.gcr.io/kube-proxy   v1.17.0   7d54289267dc   7 days ago   116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   v1.17.0   7d54289267dc   7 days ago   116MB
k8s.gcr.io/kube-controller-manager   v1.17.0   5eb3b7486872   7 days ago   161MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.17.0   5eb3b7486872   7 days ago   161MB
k8s.gcr.io/kube-apiserver   v1.17.0   0cae8d5cc64c   7 days ago   171MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver   v1.17.0   0cae8d5cc64c   7 days ago   171MB
k8s.gcr.io/kube-scheduler   v1.17.0   78c190f736b1   7 days ago   94.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler   v1.17.0   78c190f736b1   7 days ago   94.4MB
busybox   latest   b534869c81f0   11 days ago   1.22MB
kubernetesui/dashboard   v2.0.0-beta6   84cd817d07fb   4 weeks ago   91.7MB
k8s.gcr.io/coredns   1.6.5   70f311871ae1   5 weeks ago   41.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns   1.6.5   70f311871ae1   5 weeks ago   41.6MB
k8s.gcr.io/etcd   3.4.3-0   303ce5db0e90   7 weeks ago   288MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd   3.4.3-0   303ce5db0e90   7 weeks ago   288MB
kubernetesui/metrics-scraper   v1.0.1   709901356c11   5 months ago   40.1MB
quay.io/coreos/flannel   v0.11.0-amd64   ff281650a721   10 months ago   52.6MB
k8s.gcr.io/pause   3.1   da86e6ba6ca1   24 months ago   742kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause   3.1   da86e6ba6ca1   24 months ago   742kB
Images required on the master node: kube-proxy, kube-controller-manager, kube-apiserver, kube-scheduler, coredns, etcd, pause.
Images required on the worker nodes: kube-proxy, pause.
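A scripted version of the pull-and-retag step might look like this (a sketch; the mirror path and versions are the ones listed above, adjust them to whatever `kubeadm config images list` reports):

```sh
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
# versions as listed above; workers only need kube-proxy and pause
for img in kube-apiserver:v1.17.0 kube-controller-manager:v1.17.0 \
           kube-scheduler:v1.17.0 kube-proxy:v1.17.0 \
           etcd:3.4.3-0 coredns:1.6.5 pause:3.1; do
  docker pull ${MIRROR}/${img}
  docker tag  ${MIRROR}/${img} k8s.gcr.io/${img}
done
```

Note that kubeadm init below is also given --image-repository registry.aliyuncs.com/google_containers, so kubeadm itself pulls from a mirror anyway; the re-tagging mainly matters if you want the images to carry their canonical k8s.gcr.io names.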
Master node operations
- Run kubeadm init
kubeadm init --apiserver-advertise-address=192.168.33.10 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
See the official documentation for the full set of options; here I only set apiserver-advertise-address, pod-network-cidr, and image-repository (pointing at the Aliyun mirror).
[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm-cp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.501735 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-X.Y" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeadm-cp" as an annotation
[mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: <token>
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Remember to run the following commands afterwards:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Install a pod network add-on; I chose flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
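Not part of the original output, but a quick way to confirm that flannel and CoreDNS come up and the master turns Ready:

```sh
kubectl get pods -n kube-system
kubectl get nodes
```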
Worker node operations
- On the master node, run kubeadm token list to obtain the token
- Obtain the CA certificate hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
- Run the kubeadm join command
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
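With the values used in this setup (the master's advertise address from kubeadm init and the default API server port 6443), the command would look roughly like this, with the token and hash taken from the two previous steps:

```sh
kubeadm join 192.168.33.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```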
Installing the dashboard and a dashboard-admin account
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
kubectl proxy --address=0.0.0.0
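With the proxy running and the Vagrant port forward for 8001 in place, the dashboard installed from recommended.yaml should be reachable from the host at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ (assuming the default kubernetes-dashboard namespace and service names).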
Create a cluster administrator account and bind it to the cluster-admin role:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
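Save the manifest to a file (any name will do; dashboard-admin.yaml here is just a placeholder) and apply it:

```sh
kubectl apply -f dashboard-admin.yaml
```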
Get the token and use it to log in to the dashboard.
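One common way to print that token (a sketch; the secret name is auto-generated, hence the grep):

```sh
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')
```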