# Installing Kubernetes with kubeadm on Ubuntu

2022-09-01
[TOC]
## Master initialization
### Environment initialization
```bash
root@instance-0tow586x:~# apt install -y lrzsz vim

# Disable swap
swapoff -a
rm -f /swap.img
vim /etc/fstab    # comment out the /swap.img line

# Enable IP forwarding
vim /etc/sysctl.conf
net.ipv4.ip_forward=1
# Apply and check the setting
sysctl -p

# Change the hostname
root@instance-0tow586x:~# echo "bd-ks-M1" > /etc/hostname
root@instance-0tow586x:~# hostname `cat /etc/hostname`
root@instance-0tow586x:~# hostname
bd-ks-M1
```
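If you prefer to script these steps rather than edit files in vim, a minimal non-interactive sketch (assuming the swap entry in /etc/fstab is the default /swap.img line, as above):

```bash
# Disable swap now and on reboot (comments out the /swap.img line in /etc/fstab)
swapoff -a
sed -ri 's@^(/swap\.img.*)@#\1@' /etc/fstab

# Enable IP forwarding persistently
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p

# Set the hostname (hostnamectl also rewrites /etc/hostname)
hostnamectl set-hostname bd-ks-M1
```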
### Install Docker
Install the dependencies, add Docker's apt source (Aliyun mirror), and install docker-ce:

```bash
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common

# Install the GPG key (Aliyun mirror)
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

# Add the apt source
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

# Update and install docker-ce
sudo apt-get -y update
apt install -y docker-ce

# Install docker-compose
apt install -y docker-compose

# Enable Docker on boot
systemctl enable docker
systemctl start docker

# Change the Docker cgroup driver
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker
```
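Before moving on, it is worth confirming that the daemon actually picked up the new cgroup driver:

```bash
# Should print: systemd
docker info --format '{{.CgroupDriver}}'
```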
### Install Kubernetes components

The apt-source URLs and the heredoc were cut off in the node section below uses the same steps; with the Aliyun mirror they look like this:

```bash
# Add the GPG key (Aliyun mirror)
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

# Add the apt source
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update

# Install the pinned version
apt-get install -y kubelet=1.22.0-00 kubeadm=1.22.0-00 kubectl=1.22.0-00

# Enable kubelet on boot
systemctl enable kubelet && systemctl start kubelet
```
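Optionally, hold the packages so a routine apt upgrade cannot pull the node out of version sync with the cluster:

```bash
# Prevent unattended upgrades of the Kubernetes packages
apt-mark hold kubelet kubeadm kubectl
```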
### Pull the required images
```bash
# List the required images
root@kubernetes-master:~# kubeadm config images list --kubernetes-version=v1.22.0
k8s.gcr.io/kube-apiserver:v1.22.0
k8s.gcr.io/kube-controller-manager:v1.22.0
k8s.gcr.io/kube-scheduler:v1.22.0
k8s.gcr.io/kube-proxy:v1.22.0
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

# Pull the images manually from the Aliyun mirror
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0

# Re-tag the images with the names kubeadm expects
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.0 k8s.gcr.io/kube-apiserver:v1.22.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.0 k8s.gcr.io/kube-controller-manager:v1.22.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.0 k8s.gcr.io/kube-scheduler:v1.22.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.0 k8s.gcr.io/kube-proxy:v1.22.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5 k8s.gcr.io/pause:3.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0 k8s.gcr.io/etcd:3.5.0-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4
```
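The pull-and-retag boilerplate can also be scripted. A minimal sketch, assuming the same Aliyun mirror and v1.22.0 image list as above; coredns is special-cased because its k8s.gcr.io path is nested:

```bash
#!/usr/bin/env bash
# Pull each image from the Aliyun mirror and retag it with the name kubeadm expects.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.22.0 kube-controller-manager:v1.22.0 \
           kube-scheduler:v1.22.0 kube-proxy:v1.22.0 pause:3.5 etcd:3.5.0-0; do
  docker pull "$MIRROR/$img"
  docker tag  "$MIRROR/$img" "k8s.gcr.io/$img"
done

# coredns lives under a nested path in k8s.gcr.io
docker pull "$MIRROR/coredns:1.8.4"
docker tag  "$MIRROR/coredns:1.8.4" k8s.gcr.io/coredns/coredns:v1.8.4
```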
### Initialize the master node
Before running the init, make sure the node's Docker cgroup driver is set to systemd, swap is disabled, and the kubelet service is running.
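A quick sketch of those checks (together with the cgroup-driver check shown in the Docker section):

```bash
# swapon prints nothing when swap is fully disabled
swapon --show

# kubelet should be enabled; until 'kubeadm init' writes its config it may
# restart in a loop, which is expected at this stage
systemctl is-enabled kubelet
systemctl status kubelet --no-pager
```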
```bash
root@kubernetes-master:~# kubeadm init --apiserver-advertise-address=192.168.64.4 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.22.0 \
    --service-cidr=10.1.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.22.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes-master kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.64.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.64.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.64.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.503473 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ogasa1.0jzmf6lwclbsb0fq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.64.4:6443 --token ogasa1.0jzmf6lwclbsb0fq \
	--discovery-token-ca-cert-hash sha256:ea147bf21b1f8ad881863b5e0eb2cf9ccec4a2015605486d2a6cf5ce999f6207
```

Once "Your Kubernetes control-plane has initialized successfully!" appears, save the join command printed at the end to a file. Before adding any nodes, check that the cluster components are healthy:

```bash
kubectl get componentstatus
```
If kubectl reports "The connection to the server localhost:8080 was refused - did you specify the right host or port?", point it at the admin kubeconfig:
```bash
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```
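After copying the kubeconfig, kubectl should be able to reach the API server; a quick sanity check:

```bash
kubectl cluster-info
kubectl get pods -n kube-system
```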
### Add a network plugin
Install one of the network plugins (flannel or calico):

```bash
# flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Check the plugin's pods
kubectl get pods -n kube-system -l app=flannel

# Check whether the nodes are Ready
kubectl get nodes

# Troubleshoot a failing pod
kubectl describe pod calico-node-zp8bb -n kube-system

# Label a node's role
#kubectl label node node01 node-role.kubernetes.io/node=node
```
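If you want the shell to block until every node reports Ready (handy in provisioning scripts; a convenience addition, not part of the original steps):

```bash
# Wait up to 5 minutes for all nodes to become Ready
kubectl wait --for=condition=Ready node --all --timeout=300s
```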
## Node initialization

Run the following steps on each worker node.
### Environment initialization
```bash
root@instance-0tow586x:~# apt install -y lrzsz vim

# Disable swap
swapoff -a
rm -f /swap.img
vim /etc/fstab    # comment out the /swap.img line

# Enable IP forwarding
vim /etc/sysctl.conf
net.ipv4.ip_forward=1
# Apply and check the setting
sysctl -p

# Change the hostname
root@instance-90v8moam:~# echo "bd-ks-S1" > /etc/hostname
root@instance-90v8moam:~# hostname `cat /etc/hostname`
root@instance-90v8moam:~# hostname
bd-ks-S1
```
### Install Docker

```bash
# Install dependencies
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common

# Install the GPG key (Aliyun mirror)
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

# Add the apt source
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

# Update and install docker-ce
apt-get -y update
apt install -y docker-ce

# Install docker-compose
apt install -y docker-compose

# Change the cgroup driver
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# Enable Docker on boot
systemctl enable docker
systemctl start docker
```
### Install Kubernetes components

```bash
# Add the GPG key (Aliyun mirror)
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

# Add the apt source
vim /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main

apt-get update

# List the installable versions
apt-cache madison kubelet

# Install the pinned version
apt-get install -y kubelet=1.22.0-00 kubeadm=1.22.0-00 kubectl=1.22.0-00

# Enable kubelet on boot
sudo systemctl enable kubelet && sudo systemctl start kubelet

# Export the kubeconfig path
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
```
### Join the node to the cluster
```bash
# On the master, generate a join command
root@kubernetes-master:~# kubeadm token create --print-join-command
kubeadm join 192.168.64.4:6443 --token j357yd.jzfderm7144bjf9r --discovery-token-ca-cert-hash sha256:ea147bf21b1f8ad881863b5e0eb2cf9ccec4a2015605486d2a6cf5ce999f6207

# On the node, run the printed command
kubeadm join 192.168.64.4:6443 --token j357yd.jzfderm7144bjf9r --discovery-token-ca-cert-hash sha256:ea147bf21b1f8ad881863b5e0eb2cf9ccec4a2015605486d2a6cf5ce999f6207
```
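Bootstrap tokens expire after 24 hours by default, which is why a fresh join command is generated here instead of reusing the one printed by kubeadm init. On the master you can inspect and manage them:

```bash
# List existing bootstrap tokens and their expiry
kubeadm token list

# Create a join command with a non-expiring token (fine for a lab,
# not recommended for production)
kubeadm token create --ttl 0 --print-join-command
```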
Handling the error:

```
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
```
```bash
kubeadm reset
rm -rf /etc/cni/net.d
rm -rf $HOME/.kube/config
rm -rf /etc/kubernetes/
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' | sudo tee /etc/docker/daemon.json
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
kubeadm reset
```
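The error means kubeadm could not reach the kubelet's health endpoint on port 10248, which is almost always the cgroup-driver mismatch fixed above. Before re-running kubeadm join, verify that the kubelet came back healthy:

```bash
systemctl status kubelet --no-pager
# The endpoint kubeadm probes; it should return "ok"
curl -sSL http://localhost:10248/healthz
```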
Once the join completes, check the nodes on the master:
```bash
root@kubernetes-master:~# kubectl get nodes
NAME                STATUS   ROLES                  AGE   VERSION
kubernetes-master   Ready    control-plane,master   81m   v1.22.0
kubernetes-node1    Ready    <none>
```
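The worker shows <none> under ROLES. If you want a role displayed there, apply a role label as mentioned in the network-plugin section (the node name kubernetes-node1 is taken from the output above):

```bash
kubectl label node kubernetes-node1 node-role.kubernetes.io/node=node
```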