[Original] Kubernetes Study Notes: Installing Kubernetes with kubeadm


Set the node hostnames and time zone, install dependency packages, and disable swap, the firewall, and so on.

Set a permanent hostname on each node, then log in again:

Edit /etc/hosts on every machine and add the hostname-to-IP mappings:

Install the dependency packages on every machine:

Disable the firewall on every machine:

If a swap partition is enabled, kubelet will fail to start (this can be overridden by setting --fail-swap-on to false), so disable swap on every machine:

To keep the swap partition from being mounted automatically at boot, comment out the corresponding entry in /etc/fstab:

Check that swap was disabled successfully:

Disable SELinux; otherwise K8s may report Permission denied when mounting directories later:

When dnsmasq is enabled on a Linux system (e.g., in a GUI environment), it sets the system DNS server to 127.0.0.1, which prevents Docker containers from resolving domain names, so disable it:

Unless otherwise noted, all operations in this document are performed on the m1 node, which then distributes files to and runs commands on the other nodes remotely.

Set up passwordless root login from m1 to all nodes:

With that in place, batch operations across multiple nodes can be done like this:

The centos-extras repository must be enabled. It is enabled by default, but if it has been disabled, re-enable it, for example as shown below.
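A likely way to re-enable it, assuming the standard CentOS repo id "extras" and yum-config-manager from yum-utils:

sudo yum-config-manager --enable extras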

cri-dockerd provides a shim for Docker, so that Docker can be controlled through the Kubernetes Container Runtime Interface (CRI).

From Kubernetes 1.24 onward, using Docker requires installing cri-dockerd as well.

cri-dockerd is built from source with Go. First, download the appropriate Go release for your system from the official site:
https://go.dev/doc/install

Remove any previous Go installation, then extract the downloaded archive into /usr/local, creating a fresh tree in /usr/local/go:

Configure the environment variables.

Add the following to /etc/profile or $HOME/.profile:

Then apply it:

Finally, run:

to confirm the installation succeeded.

Compile cri-dockerd:

1. Append --network-plugin=cni; this tells the runtime to use the Kubernetes CNI network interface.

2. Override the sandbox (pause) image. Normally the k8s.gcr.io/pause:3.8 image cannot be pulled from inside China; it can be replaced with the domestic kubebiz/pause:3.8. This image is the foundation of every Pod, so either import it manually or switch to a domestic mirror by setting the following option to override the default sandbox image.

Edit:

Append the parameters from steps 1 and 2 after ExecStart, for example:

Confirm the installation succeeded.

Generally, hardware devices have unique addresses, but some virtual machines may have duplicate values. Kubernetes uses these values to uniquely identify the nodes in the cluster; if they are not unique on every node, installation may fail.
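The kubeadm documentation suggests comparing the MAC addresses and the product_uuid across nodes; a quick way to inspect both:

ip link                                    # network interface MAC addresses
sudo cat /sys/class/dmi/id/product_uuid    # machine product_uuid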

If there is more than one network adapter and the Kubernetes components are not reachable over the default route, it is advisable to add IP routes in advance so that the cluster connects through the right adapter.

Make sure the br_netfilter module is loaded. This can be checked by running lsmod | grep br_netfilter. To load it explicitly, run sudo modprobe br_netfilter.

For iptables on the Linux nodes to see bridged traffic correctly, make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration. For example:

Install the following packages on every machine:

Make sure the versions of these three components (kubelet, kubeadm, kubectl) match; a mismatch can lead to unexpected errors and problems.

Note: since the official site does not offer a sync channel, the GPG check on the repo index may fail; in that case, install with yum install -y --nogpgcheck kubelet kubeadm kubectl.

Of the three IP/CIDR settings passed to kubeadm init, only the first (the node IP) is required; the other two can be omitted, although specifying them is recommended and is common practice.

A note on naming: the control plane is what used to be called the master node; Kubernetes renamed it, and control-plane is the current term.

For users in China, switch to the Alibaba Cloud mirror:

Output:

An error may occur here, with the following message:

Cause: kubelet and cri-dockerd are not integrated, so kubeadm finds multiple CRI endpoints on the host.

Solution: append --cri-socket unix:///var/run/cri-dockerd.sock to the command.

Run the following commands to set up the kubectl configuration for your user; they are also part of the kubeadm init output:

Or, if you are the root user, you can run:

Note: on a single-node cluster, remove the taint as shown below; otherwise the network add-on may fail because there are no worker nodes to schedule on.

By default, for security reasons, the cluster does not schedule Pods on the control-plane (master) node. To allow Pods there, run:

This removes the node-role.kubernetes.io/control-plane taint (formerly node-role.kubernetes.io/master) from any node that carries it, including the control-plane nodes, which means the scheduler can place Pods anywhere.

Take the following information from the kubeadm init output of the previous step, then run it on each node to be joined.

If you do not have the --token value, you can get one by running the following command on the control-plane node:

By default, tokens expire after 24 hours. To join a node after the current token has expired, create a new token by running the following on the control-plane node:

If you do not have the value for --discovery-token-ca-cert-hash, you can obtain it by running the following command chain on the control-plane node:

Once all the steps above are complete, run:

and confirm that every component is in the Running state.

Then run:

and confirm that every node is Ready.

Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage cluster resources. It also gives an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on).

It can be deployed with the following command:

However, this URL is often unreachable from inside China; the kubebiz mirror can be used instead:

You can also create recommended.yaml yourself and then deploy from it.

To protect cluster data, Dashboard deploys with a minimal RBAC configuration by default. Dashboard currently only supports logging in with a Bearer token. Follow the guide below to create a user to log in with:

Create a new user with a Kubernetes ServiceAccount, grant that user admin privileges, and log in to Dashboard with the token bound to that user.

Important: make sure you know what you are doing before proceeding. Granting admin privileges to the Dashboard ServiceAccount can be a security risk.

First, create a ServiceAccount named admin-user in the kubernetes-dashboard namespace.
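Following the Dashboard documentation's creating-a-sample-user guide, the manifest looks like this:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard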

In most cases, after the cluster is deployed with kops, kubeadm, or another popular tool, the ClusterRole cluster-admin already exists in the cluster. It can be used directly, so only a ClusterRoleBinding for the ServiceAccount needs to be created. If it does not exist, create the role first and grant the required privileges manually.
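A ClusterRoleBinding that binds admin-user to the existing cluster-admin ClusterRole, again following the Dashboard documentation:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard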

Copy the two manifests above (the ServiceAccount and the ClusterRoleBinding) into a single new manifest file named dashboard-adminuser.yaml, and create them with kubectl apply -f dashboard-adminuser.yaml.

Now find the token that can be used to log in. Run the following command:
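On Kubernetes 1.24 and later, kubectl can mint a short-lived token for the ServiceAccount directly:

kubectl -n kubernetes-dashboard create token admin-user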

You can now log in with that token.

To remove the admin ServiceAccount and ClusterRoleBinding:
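Assuming the names used above, the cleanup commands are:

kubectl -n kubernetes-dashboard delete serviceaccount admin-user
kubectl delete clusterrolebinding admin-user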

kubectl can be used to enable access to Dashboard, with the following command:
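Per the Dashboard documentation, this is the local API server proxy:

kubectl proxy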

Then open the following URL in a browser on the same machine:
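With the proxy running, Dashboard is served at the standard proxy path:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/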

Note: it must be http.

Alternatively, listen on all IP addresses and forward local port 8080 to HTTPS port 443:
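A minimal sketch using kubectl port-forward (the Service name matches the recommended.yaml deployed above):

kubectl -n kubernetes-dashboard port-forward --address 0.0.0.0 service/kubernetes-dashboard 8080:443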

Leave the process running; Dashboard is then reachable at port 8080 of the node's IP (note: it must be https):
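For example, with the m1 address used throughout this guide:

https://10.10.30.201:8080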

Or create a Service of type NodePort:
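A minimal sketch of such a Service (the Service name is illustrative; the selector and ports match the Dashboard deployed above, and nodePort 30443 matches the next step):

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-nodeport
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443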

Once created, Dashboard is reachable at port 30443 of the node's IP (note: it must be https):
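For example:

https://10.10.30.201:30443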

All nodes need the NFS utilities installed.
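On CentOS this is the nfs-utils package:

sudo yum install -y nfs-utils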

The shared directories were configured when the NFS server was set up; you can check which directories it exports from any node with showmount:
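For example, against the NFS server configured earlier (10.10.30.211, aliased as nfs in /etc/hosts):

showmount -e nfs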

Kubernetes does not ship an in-tree NFS provisioner, so an external provisioner is needed, together with a StorageClass.

The official default:

Domestic mirror:

Create nfs-all.yaml:
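As a sketch, assuming the manifest is based on the widely used nfs-subdir-external-provisioner, the two pieces that matter are the StorageClass and the provisioner Deployment carrying the NFS server address and path; the ServiceAccount and RBAC objects the Deployment references are also required but omitted here for brevity, and the object names and image tag are illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage            # the name referenced by storageClassName later
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner   # RBAC omitted in this sketch
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.3.243.101     # default NFS server address, change below
            - name: NFS_PATH
              value: /data/nfs        # default export path, change below
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.3.243.101
            path: /data/nfs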

Change the defaults 10.3.243.101 and /data/nfs to your own NFS address and export path; here I changed them to 10.10.30.211 and /k8s.

Apply the YAML:
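kubectl apply -f nfs-all.yaml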

Or the official one:

Output:

If it is not Running, get the detailed error with logs:
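For example (the pod name is whatever kubectl get pods shows for the provisioner):

kubectl get pods
kubectl logs <nfs-provisioner-pod-name>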

Get the StorageClass name (storageClassName):
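kubectl get storageclass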

This returns:

The StorageClass name is nfs-storage.

References:

Kubernetes official documentation
Kubernetes (k8s) Chinese tutorial

hostnamectl set-hostname m1   # master node
hostnamectl set-hostname n1   # worker node
hostnamectl set-hostname n2   # worker node
vi /etc/hosts
10.10.30.201 m1
10.10.30.202 n1
10.10.30.203 n2
10.10.30.211 nfs   # after this mapping, the name 'nfs' can be used as the NFS server address
# set the system time zone
sudo timedatectl set-timezone Asia/Shanghai
 
# write the current UTC time to the hardware clock
sudo timedatectl set-local-rtc 0
 
# restart services that depend on the system time
sudo systemctl restart rsyslog
sudo systemctl restart crond
sudo yum install -y epel-release
sudo yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo iptables -F && sudo iptables -X && sudo iptables -F -t nat && sudo iptables -X -t nat
sudo iptables -P FORWARD ACCEPT
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
free -m
sudo setenforce 0
grep SELINUX /etc/selinux/config
SELINUX=disabled
sudo service dnsmasq stop
sudo systemctl disable dnsmasq
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF
 
 
sudo cp kubernetes.conf  /etc/sysctl.d/kubernetes.conf
sudo sysctl -p /etc/sysctl.d/kubernetes.conf
sudo mount -t cgroup -o cpu,cpuacct none /sys/fs/cgroup/cpu,cpuacct
sudo modprobe br_netfilter
sudo modprobe ip_vs
 
ssh-keygen -t rsa
ssh-copy-id root@m1
ssh-copy-id root@n1
ssh-copy-id root@n2
export NODE_IPS=(m1 m2 m3 n1 n2)
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp /k8s/cri/containerd-1.6.8-linux-amd64.tar.gz root@${node_ip}:/k8s/cri
    ssh root@${node_ip} "tar Cxzvf /usr/local /k8s/cri/containerd-1.6.8-linux-amd64.tar.gz"
  done
sudo yum install -y yum-utils
 
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
sudo yum -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl daemon-reload
sudo systemctl start docker
sudo systemctl status docker
 
 
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.19.4.linux-amd64.tar.gz
 
export PATH=$PATH:/usr/local/go/bin
source /etc/profile
go version
sudo yum -y install git
git clone https://github.com/Mirantis/cri-dockerd.git
cd cri-dockerd
mkdir bin
go build -o bin/cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
cp -a packaging/systemd/* /etc/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
 
 
vim /etc/systemd/system/cri-docker.service
ExecStart=/usr/local/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=kubebiz/pause:3.8
systemctl daemon-reload
systemctl enable cri-docker.service
systemctl enable --now cri-docker.socket
systemctl start cri-docker
systemctl status cri-docker
 
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
 
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
 
sudo sysctl --system
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
 
setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
 
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
 
## Alternatively, a specific version can be installed:
## yum install kubectl-1.21.3-0.x86_64 kubeadm-1.21.3-0.x86_64 kubelet-1.21.3-0.x86_64
kubeadm init --apiserver-advertise-address=10.10.30.201 \
             --pod-network-cidr=192.168.0.0/16 \
             --service-cidr=10.96.0.0/12 \
             --image-repository registry.aliyuncs.com/google_containers \
             --cri-socket unix:///var/run/cri-dockerd.sock    # if kubeadm complains that no CRI is specified, this flag selects one
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
    [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m1] and IPs [10.96.0.1 10.10.30.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m1] and IPs [10.10.30.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m1] and IPs [10.10.30.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.502216 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node m1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node m1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 18g6j5.k9a5tja9qn5ko7yw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Alternatively, if you are the root user, you can run:
 
  export KUBECONFIG=/etc/kubernetes/admin.conf
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 10.10.30.201:6443 --token 18g6j5.k9a5tja9qn5ko7yw \
    --discovery-token-ca-cert-hash sha256:f7d9d6d7fb0f79600caee4adb2bb4ebecd543f71b533f22aadd4c88638417a63
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
 
 
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
 
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubeadm join 10.10.30.201:6443 --token 18g6j5.k9a5tja9qn5ko7yw \
    --discovery-token-ca-cert-hash sha256:f7d9d6d7fb0f79600caee4adb2bb4ebecd543f71b533f22aadd4c88638417a63
kubeadm token list
kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
kubectl get pods -A
 
kubectl get nodes
https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
kubectl apply -f https://www.kubebiz.com/raw/KubeBiz/Kubernetes%20Dashboard/v2.5.0/recommended.yaml
apiVersion: v1
kind: Namespace # create the namespace
metadata:
  name: kubernetes-dashboard
 
---
 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
 
---
 
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
 
---
 
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
 
---
 
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
 
---
 
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
 
---
 
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
 
---
 
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
 
---
 
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
 
---
 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
 
---
 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
 
---
 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
 
---
 
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
 
---
 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
