Kubernetes Deployment


Docker

Install the required packages

yum install yum-utils device-mapper-persistent-data lvm2

Configure the yum repository

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Update yum packages to the latest versions

yum update

List all Docker versions available in the repositories and pick a specific version to install

yum list docker-ce --showduplicates | sort -r

Install Docker

yum install docker-ce-19.03.1-3.el7.x86_64

Create the directory /etc/docker

mkdir /etc/docker

Configure the daemon

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "live-restore": true
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

Restart Docker and enable it at boot

systemctl daemon-reload
systemctl restart docker
systemctl enable docker

Verify the installation

docker version
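
Beyond docker version, a throwaway container is a quick sanity check that the daemon can actually pull and run images, and docker info confirms the daemon.json settings took effect (a minimal check, not part of the original steps):

docker run --rm hello-world
docker info | grep -E 'Cgroup Driver|Storage Driver'   # expect systemd and overlay2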

Kubernetes (k8s)

Kubernetes is a container-based cluster-management solution. It provides containerized applications with a complete set of features: deployment and operation, resource scheduling, service discovery, dynamic scaling, and more.

A working k8s cluster needs at least one master and one node.

We will set up one master node and three worker nodes.

We have three servers; their IPs and roles are planned as follows:

192.168.50.200 master node (mac:00:0c:29:72:ef:e1;product_uuid:2F4E4D56-B7E2-5B8F-CA10-7124D772EFE1)

192.168.50.201 node (mac:00:0c:29:cb:a3:98;product_uuid:01534D56-19B4-B527-937C-96249CCBA398)

192.168.50.202 node (mac:00:0c:29:b8:0d:85;product_uuid:6B014D56-53E4-36BE-D346-0E3826B80D85)

192.168.50.200 acts both as the master node and as a worker node.

All three servers run CentOS 7.6.1810.

Edit the local hosts file:

cat << EOF >>  /etc/hosts
192.168.50.200    jinyue.centos7.master
192.168.50.201    jinyue.centos7.node1
192.168.50.202    jinyue.centos7.node2
EOF
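
To confirm the new entries resolve on each machine, getent (which consults /etc/hosts) gives a quick check, using the hostnames defined above:

for h in jinyue.centos7.master jinyue.centos7.node1 jinyue.centos7.node2; do
    getent hosts $h
done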

kubeadm

Installing kubeadm

kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and its cluster-configuration practices are adjusted over time, so experimenting with kubeadm is a good way to learn the official best practices for cluster configuration.

Hardware requirements

2 GB or more of RAM per machine 
2 CPUs or more

Required ports

Control-plane node(s)

Protocol Direction Port Range Purpose Used By
TCP Inbound 6443 Kubernetes API server All
TCP Inbound 2379-2380 etcd server client API kube-apiserver, etcd
TCP Inbound 10250 Kubelet API Self, Control plane
TCP Inbound 10251 kube-scheduler Self
TCP Inbound 10252 kube-controller-manager Self

Worker node(s)

Protocol Direction Port Range Purpose Used By
TCP Inbound 10250 Kubelet API Self, Control plane
TCP Inbound 30000-32767 NodePort Services All

Open the required ports with firewalld:

firewall-cmd --permanent --zone=public --add-port=6443/tcp
firewall-cmd --permanent --zone=public --add-port=2379-2380/tcp
firewall-cmd --permanent --zone=public --add-port=10250-10252/tcp
firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
systemctl restart firewalld.service

Disable swap

swapoff -a

Verify
    free -m

Edit /etc/fstab and comment out the swap auto-mount entry
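
The fstab edit can also be done non-interactively; a minimal sketch, assuming the standard fstab layout where the swap line is not already commented:

sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
free -m    # after swapoff -a, the Swap row should show 0 total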

1 Install kubeadm

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
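
On a minimal CentOS 7 install the br_netfilter kernel module may not be loaded yet, in which case the two bridge sysctls above do not exist; loading it first avoids that (a precaution, not in the original steps):

modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables    # should print 1 after sysctl --system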

Get the latest version of kubeadm

yum update

List the required images

kubeadm config images list
    k8s.gcr.io/kube-apiserver:v1.15.2
    k8s.gcr.io/kube-controller-manager:v1.15.2
    k8s.gcr.io/kube-scheduler:v1.15.2
    k8s.gcr.io/kube-proxy:v1.15.2
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.10
    k8s.gcr.io/coredns:1.3.1

The images for the Kubernetes components are hosted on Google Container Registry (GCR), so servers inside China cannot pull them directly.

The workaround is to use a node outside China that can reach gcr.io/google-containers, pull the gcr.io images there, and push them to a registry you can reach, such as Docker Hub or Alibaba Cloud.

The Alibaba Cloud image registry requires registration and login; the console address is:

https://cr.console.aliyun.com/cn-hangzhou/repositories

Log in to the Alibaba Cloud Docker Registry

docker login --username=midas_li@163.com registry.cn-beijing.aliyuncs.com

On the server outside China, create the following script

vim push.sh
    #!/bin/bash
    set -o errexit
    set -o nounset
    set -o pipefail

    KUBE_VERSION=v1.15.2
    KUBE_PAUSE_VERSION=3.1
    ETCD_VERSION=3.3.10
    DNS_VERSION=1.3.1

    GCR_URL=gcr.io/google-containers
    ALIYUN_URL=registry.cn-beijing.aliyuncs.com/midas

    images=(kube-proxy:${KUBE_VERSION}
    kube-scheduler:${KUBE_VERSION}
    kube-controller-manager:${KUBE_VERSION}
    kube-apiserver:${KUBE_VERSION}
    pause:${KUBE_PAUSE_VERSION}
    etcd:${ETCD_VERSION}
    coredns:${DNS_VERSION})

    for imageName in "${images[@]}"
    do
          docker pull $GCR_URL/$imageName
          docker tag $GCR_URL/$imageName $ALIYUN_URL/$imageName
          docker push $ALIYUN_URL/$imageName
          docker rmi $ALIYUN_URL/$imageName
    done

Run push.sh

    sh push.sh

Pull the images from Alibaba Cloud

docker pull registry.cn-beijing.aliyuncs.com/midas/kube-apiserver:v1.15.2
docker pull registry.cn-beijing.aliyuncs.com/midas/kube-controller-manager:v1.15.2
docker pull registry.cn-beijing.aliyuncs.com/midas/kube-scheduler:v1.15.2
docker pull registry.cn-beijing.aliyuncs.com/midas/kube-proxy:v1.15.2
docker pull registry.cn-beijing.aliyuncs.com/midas/pause:3.1
docker pull registry.cn-beijing.aliyuncs.com/midas/etcd:3.3.10
docker pull registry.cn-beijing.aliyuncs.com/midas/coredns:1.3.1

docker tag registry.cn-beijing.aliyuncs.com/midas/kube-apiserver:v1.15.2 k8s.gcr.io/kube-apiserver:v1.15.2
docker tag registry.cn-beijing.aliyuncs.com/midas/kube-controller-manager:v1.15.2 k8s.gcr.io/kube-controller-manager:v1.15.2
docker tag registry.cn-beijing.aliyuncs.com/midas/kube-scheduler:v1.15.2 k8s.gcr.io/kube-scheduler:v1.15.2
docker tag registry.cn-beijing.aliyuncs.com/midas/kube-proxy:v1.15.2 k8s.gcr.io/kube-proxy:v1.15.2
docker tag registry.cn-beijing.aliyuncs.com/midas/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-beijing.aliyuncs.com/midas/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag registry.cn-beijing.aliyuncs.com/midas/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
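
The fourteen pull/tag commands above can be collapsed into a loop that mirrors push.sh (a sketch using the same image list and the same Alibaba Cloud repository path):

ALIYUN_URL=registry.cn-beijing.aliyuncs.com/midas
images=(kube-apiserver:v1.15.2 kube-controller-manager:v1.15.2
        kube-scheduler:v1.15.2 kube-proxy:v1.15.2
        pause:3.1 etcd:3.3.10 coredns:1.3.1)
for imageName in "${images[@]}"
do
    docker pull $ALIYUN_URL/$imageName                 # fetch from the mirror
    docker tag $ALIYUN_URL/$imageName k8s.gcr.io/$imageName   # retag under the name kubeadm expects
done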
  • Run everything above on all nodes

  • Run the steps below on the master node only

2 Initialize the control-plane node

Install the master with kubeadm

kubeadm init --pod-network-cidr=172.168.0.0/16

Set up access for a regular user
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

For the root user
    export KUBECONFIG=/etc/kubernetes/admin.conf

Command for worker nodes to join the cluster:
    kubeadm join 192.168.50.200:6443 --token vy9s2b.edavlfaihmudizmr \
--discovery-token-ca-cert-hash sha256:4d37f8eb5cd48a039b55ea4530474a45f8631d255c5fcddbe28558ca7bbb1c21
    kubeadm join 10.1.8.40:6443 --token x1gwj9.m0v8y3p37xjjodqf \
--discovery-token-ca-cert-hash sha256:c6d91c81d52730a8a50d7e3ff474ab640cf8e80773f0dfec2dec29b29e0edb74
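
The bootstrap token embedded in the join command expires (24 hours by default), so a saved command will eventually stop working; a fresh join command can be printed on the master at any time:

kubeadm token create --print-join-command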

Re-initialize

  • kubeadm reset

Install the network plugin (Calico, currently v3.8)

Calico official site

Calico GitHub

cat <<EOF > /etc/NetworkManager/conf.d/calico.conf
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF

systemctl restart NetworkManager.service

Open firewall port 179 (the BGP communication port)

firewall-cmd --permanent --zone=public --add-port=179/tcp   # permanently add the port
firewall-cmd --permanent --zone=public --list-ports         # list the opened ports
systemctl restart firewalld.service                         # restart the service so the change takes effect

Install Calico

Method 1 (tested)

    mkdir /usr/local/src/calico
    cd /usr/local/src/calico
    wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
    vim calico.yaml
            - name: CALICO_IPV4POOL_CIDR
              value: "172.168.0.0/16"
    kubectl apply -f calico.yaml
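
The manifest edit can be scripted as well; a sketch, assuming the Calico v3.8 manifest still ships with its default pool of 192.168.0.0/16:

sed -i 's#192.168.0.0/16#172.168.0.0/16#g' calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml    # confirm the new value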

Method 2 (untested)

    mkdir /usr/local/src/calico
    cd /usr/local/src/calico
    curl -LO https://docs.projectcalico.org/v3.8/manifests/calico-etcd.yaml

    vim calico-etcd.yaml

    Line 30
    Modify etcd_endpoints
    etcd_endpoints: "https://192.168.50.200:2379"

    Line 33
    In the kind: ConfigMap section, modify etcd_ca, etcd_key, and etcd_cert
    etcd_ca: "/calico-secrets/etcd-ca"
    etcd_cert: "/calico-secrets/etcd-cert"
    etcd_key: "/calico-secrets/etcd-key"

    Line 305
    - name: CALICO_IPV4POOL_IPIP
      value: "Never"

    Line 317
    - name: CALICO_IPV4POOL_CIDR
      value: "172.168.0.0/16"

    Generate the certificate files and capture their contents
    cat /calico-secrets/etcd-ca | base64 -w 0
    cat /calico-secrets/etcd-cert | base64 -w 0
    cat /calico-secrets/etcd-key | base64 -w 0

    Under kind: Secret, in the name: calico-etcd-secrets section, set the corresponding values
    etcd-key: LS0tLS1CRUdJTiB...VZBVEUgS0VZLS0tLS0=
    etcd-cert: LS0tLS1...ElGSUNBVEUtLS0tLQ==
    etcd-ca: LS0tLS1CRUdJTiBD...JRklDQVRFLS0tLS0=

    kubectl apply -f calico-etcd.yaml

Verify the network plugin

watch kubectl get pods --all-namespaces

kubectl get pod --all-namespaces -o wide

kubectl get nodes -o wide

Check the cluster status

kubectl get cs

Allow the master node to run pods

kubectl taint nodes --all node-role.kubernetes.io/master-
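
To confirm the taint change took effect, each node's Taints field can be inspected (a quick check):

kubectl describe nodes | grep -i taints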

Prevent the master from running pods

kubectl taint nodes master node-role.kubernetes.io/master=true:NoSchedule

On the master node, list the nodes in the cluster

kubectl get nodes

Remove a node from the cluster

On the master node, run:

kubectl drain node1 --delete-local-data --force --ignore-daemonsets
kubectl delete node node1

On node1, run:

kubeadm reset

View the kubelet logs

 journalctl -f -u kubelet

Useful tips

1. Check the kubelet's running status

[root@master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2019-08-07 09:29:38 CST; 8min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 879 (kubelet)
    Tasks: 22
   Memory: 114.4M
   CGroup: /system.slice/kubelet.service
           └─879 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup...

Aug 07 09:36:47 master kubelet[879]: E0807 09:36:47.211162     879 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get ...
Aug 07 09:36:57 master kubelet[879]: E0807 09:36:57.224231     879 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get ...

2. Failed to get system container stats for "/system.slice/docker.service"

Solution:

vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
    Add the line
    Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
systemctl daemon-reload

3. Get a TTY in one of a pod's containers via bash, which is equivalent to logging in to the container
kubectl exec -it <pod-name> -c <container-name> -n <namespace> -- bash
For example:
kubectl exec -it redis-master-cln81 -- bash

Helm

Helm is a package-management tool for Kubernetes applications, developed by Deis, used mainly to manage Charts. It is somewhat like APT on Ubuntu or YUM on CentOS.

Helm components and terminology

Helm

Helm is a command-line client tool. It is used mainly to create, package, and publish Kubernetes application Charts, and to create and manage local and remote Chart repositories.

Tiller

Tiller is Helm's server side, deployed inside the Kubernetes cluster. Tiller receives requests from Helm, generates Kubernetes deployment files from the Chart (Helm calls these a Release), and submits them to Kubernetes to create the application. Tiller also provides upgrade, deletion, rollback, and other operations on Releases.

Chart

A Helm software package, in TAR format. Analogous to APT's DEB packages or YUM's RPM packages, it contains a set of YAML files that define the related Kubernetes resources.

Repository

Helm's software repository. A Repository is essentially a web server that stores a collection of Chart packages for users to download, and serves a manifest of the Repository's Charts for querying. Helm can manage multiple different Repositories at the same time.

Release

A Chart deployed in a Kubernetes cluster with helm install is called a Release.
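
A minimal Helm v2 round trip that ties these terms together (a sketch; stable/mysql and the release name mydb are illustrative):

helm search mysql                        # find a Chart in a Repository
helm install stable/mysql --name mydb   # deploy the Chart, creating a Release
helm list                                # list Releases
helm delete --purge mydb                 # remove the Release and its history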

Install Helm

cd /usr/local/src
wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
tar -zxvf helm-v2.14.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm

Initialize

kubectl create serviceaccount --namespace kube-system tiller

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

helm init --upgrade --service-account tiller  --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3  --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts 

Verify

kubectl get pods --namespace kube-system

helm version

Delete Tiller

kubectl delete deployment tiller-deploy --namespace kube-system

Troubleshooting

First, search for tiller

docker search tiller

Look for the row: Mirror of https://gcr.io/kubernetes-helm/t

Diagnose

kubectl get pod --all-namespaces

kubectl describe pod -n kube-system tiller-deploy-7664898d54-szmbg

To use the image found in the search, edit the deploy and change the image address:

kubectl edit deploy tiller-deploy -n kube-system

  spec:
    automountServiceAccountToken: true
    containers:
    - env:
      - name: TILLER_NAMESPACE
        value: kube-system
      - name: TILLER_HISTORY_MAX
        value: "0"
      image: sapcc/tiller:v2.14.3
      imagePullPolicy: IfNotPresent

Replace image: gcr.io/kubernetes-helm/tiller:v2.14.3 with image: sapcc/tiller:v2.14.3

Install the Kubernetes Dashboard

Via Helm

helm install stable/kubernetes-dashboard --name kubernetes-dashboard --namespace=kube-system

Delete the installed package

helm delete --purge kubernetes-dashboard

List installed packages

helm list


kubernetes-dashboard 1.8.3

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.3/src/deploy/recommended/kubernetes-dashboard.yaml

Installing directly with k8s

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

First, generate a certificate

openssl genrsa -out dashboard.key 2048
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.50.81'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
openssl x509 -in dashboard.crt -text -noout
kubectl -n kube-system create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt
kubectl -n kube-system get secret | grep dashboard

Then run the following command

kubectl create -f kubernetes-dashboard.yaml

Delete the dashboard

kubectl delete -f kubernetes-dashboard.yaml

Problem 1: ImagePullBackOff

kubectl -n kube-system describe pod kubernetes-dashboard-7d75c474bb-kzvtk

The describe output shows node1 is missing the image; run the following on node1

docker pull registry.cn-beijing.aliyuncs.com/midas/kubernetes-dashboard-amd64:v1.10.1
docker tag registry.cn-beijing.aliyuncs.com/midas/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

If the pod's status is Running, the dashboard has been deployed successfully

kubectl get pod --namespace=kube-system -o wide

This shows the kubernetes-dashboard service is running on node1, so the access address is 192.168.50.81

Modify the service to access the dashboard via NodePort

kubectl -n kube-system get svc

As shown, the kubernetes-dashboard service is internal to the cluster and cannot be reached from outside. For convenient access we expose the dashboard's port 443 as a NodePort

kubectl -n kube-system edit svc kubernetes-dashboard

Find the type field and change ClusterIP to NodePort; set nodePort: 30001
spec:
  clusterIP: 10.110.6.32
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30001
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: kubernetes-dashboard
    release: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort

The dashboard can now be reached at: https://192.168.50.81:30001
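
The same change can be made non-interactively with kubectl patch (a sketch; the strategic-merge patch matches the existing ports entry by its port key):

kubectl -n kube-system patch svc kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"nodePort":30001}]}}'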

Log in with a token

kubectl -n kube-system describe secret $(kubectl get secret -n kube-system | grep kubernetes-dashboard-token | awk '{print $1}') | grep token:

Troubleshooting:
If login fails with a permissions error, it is because the official yaml only defines a Role in the kube-system namespace, while the dashboard needs to operate on the whole cluster. Manually create an RBAC binding that grants the kubernetes-dashboard account cluster-admin, then log out and log back in;


vim rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

kubectl apply -f rbac.yaml

Install the Metrics Server add-on

Git repository

Since v1.8, resource usage metrics can be obtained through the Metrics API. The component providing it is Metrics Server, which replaces the earlier heapster; heapster has been gradually deprecated since 1.11.

Run this script on the server outside China

docker pull k8s.gcr.io/metrics-server-amd64:v0.3.3
docker tag k8s.gcr.io/metrics-server-amd64:v0.3.3 registry.cn-beijing.aliyuncs.com/midas/metrics-server-amd64:v0.3.3
docker push registry.cn-beijing.aliyuncs.com/midas/metrics-server-amd64:v0.3.3
docker rmi registry.cn-beijing.aliyuncs.com/midas/metrics-server-amd64:v0.3.3

docker pull k8s.gcr.io/addon-resizer:1.8.5
docker tag k8s.gcr.io/addon-resizer:1.8.5 registry.cn-beijing.aliyuncs.com/midas/addon-resizer:1.8.5
docker push registry.cn-beijing.aliyuncs.com/midas/addon-resizer:1.8.5
docker rmi registry.cn-beijing.aliyuncs.com/midas/addon-resizer:1.8.5

Pull the images from Alibaba Cloud

docker pull registry.cn-beijing.aliyuncs.com/midas/metrics-server-amd64:v0.3.3
docker tag registry.cn-beijing.aliyuncs.com/midas/metrics-server-amd64:v0.3.3 k8s.gcr.io/metrics-server-amd64:v0.3.3

docker pull registry.cn-beijing.aliyuncs.com/midas/addon-resizer:1.8.5
docker tag registry.cn-beijing.aliyuncs.com/midas/addon-resizer:1.8.5 k8s.gcr.io/addon-resizer:1.8.5

wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.15/cluster/addons/metrics-server/auth-delegator.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.15/cluster/addons/metrics-server/auth-reader.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.15/cluster/addons/metrics-server/metrics-apiservice.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.15/cluster/addons/metrics-server/metrics-server-deployment.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.15/cluster/addons/metrics-server/metrics-server-service.yaml
wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.15/cluster/addons/metrics-server/resource-reader.yaml

vim metrics-server-deployment.yaml
    ......
    - name: metrics-server
      image: k8s.gcr.io/metrics-server-amd64:v0.3.3
      command:
      - /metrics-server
      - --metric-resolution=30s
      # These are needed for GKE, which doesn't support secure communication yet.
      # Remove these lines for non-GKE clusters, and when GKE supports token-based auth.
      #- --kubelet-port=10255
      #- --deprecated-kubelet-completely-insecure=true
      - --kubelet-insecure-tls    # do not verify the kubelet's TLS certificate
      - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
    ......
    command:
      - /pod_nanny
      - --config-dir=/etc/config
      - --cpu=40m
      - --extra-cpu=0.5m
      - --memory=40Mi
      - --extra-memory=4Mi
      - --threshold=5
      - --deployment=metrics-server-v0.3.3
      - --container=metrics-server
      - --poll-period=300000
      - --estimator=exponential
      # Specifies the smallest cluster (defined in number of nodes)
      # resources will be scaled to.
      - --minClusterSize=2


vim resource-reader.yaml
    ......
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - nodes
      - nodes/stats   # add this line, otherwise stats from other nodes cannot be fetched
      - namespaces
    ......

kubectl apply -f .

Installation complete


Tip:

kubectl edit deploy -n kube-system metrics-server

    imagePullPolicy: Never # use the local image; never pull
    args:
     - --kubelet-insecure-tls
     - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname


Remove what was installed

kubectl delete -f . 

Troubleshooting:

kubectl get pods --all-namespaces
kubectl logs -c metrics-server -n kube-system metrics-server-v0.3.3-59bdc846fc-tm4jz
kubectl logs -c metrics-server-nanny -n kube-system metrics-server-v0.3.3-59bdc846fc-tm4jz

Test

kubectl top nodes
kubectl top pods -n kube-system

Miscellaneous

Get a token

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep tiller | awk '{print $1}')

Authentication

Original article: Creating sample user

Interacting with a cluster always involves authentication. kubeconfig (i.e., certificate-based) and token are the two simplest and most common authentication methods; below I use kubeconfig.

A kubeconfig file organizes information about clusters, users, namespaces, and authentication mechanisms. The kubectl command-line tool uses kubeconfig files to find the information it needs to select a cluster and communicate with that cluster's API server.

By default kubectl uses the file named config in the $HOME/.kube directory; a different config file can be specified with the KUBECONFIG environment variable or the --kubeconfig flag

View the current kubeconfig

kubectl config view
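
Both override mechanisms just mentioned, shown concretely (a sketch; the alternate file path is illustrative):

export KUBECONFIG=$HOME/.kube/config-cluster2               # per-shell override
kubectl get nodes

kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes   # per-command override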

Create an admin account

cd /usr/local/src
cat <<EOF >  /usr/local/src/admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

Apply it

kubectl apply -f admin-user.yaml

Get the token

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Certificates

Etcd and Kubernetes communicate entirely over TLS, so TLS certificates must be generated first. The certificate tool used here is cfssl

The generated CA certificate and key files are as follows:

ca-key.pem
ca.pem
kubernetes-key.pem
kubernetes.pem
kube-proxy.pem
kube-proxy-key.pem
admin.pem
admin-key.pem

The components use the certificates as follows:

etcd: uses ca.pem, kubernetes-key.pem, kubernetes.pem (renamed at the end to etcd-ca, etcd-key, etcd-cert)
kube-apiserver: uses ca.pem, kubernetes-key.pem, kubernetes.pem
kubelet: uses ca.pem
kube-proxy: uses ca.pem, kube-proxy-key.pem, kube-proxy.pem
kubectl: uses ca.pem, admin-key.pem, admin.pem
kube-controller and kube-scheduler currently must be deployed on the same machine as kube-apiserver and communicate over the insecure port, so they do not need certificates.
kube-controller and kube-scheduler can also use ca.pem, kubernetes-key.pem, kubernetes.pem

Installing the CFSSL tools

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

Create the CA certificate


The following two commands can generate templates
cfssl print-defaults config > ca-config.json  # generate a config template
cfssl print-defaults csr > ca-csr.json        # generate a csr template

Here we write the final files directly
cat > ca-config.json <<EOF
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Liaoning",
            "ST": "Shenyang",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Generate the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Create the kubernetes certificate

cat > kubernetes-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "192.168.50.200",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local",
      "127.0.0.1"
    ],
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
        "C": "CN",
        "ST": "Shenyang",
        "L": "Liaoning",
        "O": "k8s",
        "OU": "System"
    }
    ]
}
EOF

This certificate includes all the IPs of the etcd cluster and of the kubernetes master, so they can all share the same key

Generate the Kubernetes certificate and key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

Copy and rename the files for etcd to use

cp ca.pem  etcd-ca
cp kubernetes.pem etcd-cert
cp kubernetes-key.pem etcd-key
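
To double-check that the hosts list made it into the signed certificate as SANs, the certificate can be inspected with openssl (a quick verification):

openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'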

Setting up a private Harbor registry

Official tutorial

Create certificates

openssl genrsa -out ca.key 4096

openssl req -x509 -new -nodes -sha512 -days 3650 \
    -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=10.1.8.40" \
    -key ca.key \
    -out ca.crt

openssl genrsa -out harbor.key 4096

openssl req -sha512 -new \
   -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=10.1.8.40" \
   -key harbor.key \
   -out harbor.csr

cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth 
subjectAltName = @alt_names

[alt_names]
DNS.1=10.1.8.40
EOF    

openssl x509 -req -sha512 -days 3650 \
    -extfile v3.ext \
    -CA ca.crt -CAkey ca.key -CAcreateserial \
    -in harbor.csr \
    -out harbor.crt

mkdir /etc/harbor/ssl

cp harbor.crt /etc/harbor/ssl/harbor.crt
cp harbor.key /etc/harbor/ssl/harbor.key

On each docker client that needs to log in, edit /lib/systemd/system/docker.service: in the ExecStart line, add

--insecure-registry=10.1.8.40

Then restart Docker

systemctl daemon-reload
systemctl restart docker

On the harbor server, edit /etc/docker/daemon.json

{
"insecure-registries": ["10.1.8.40"]
}

After the change, run the same commands

systemctl daemon-reload
systemctl restart docker

Edit the installation config file
vim harbor.yml

hostname: 10.1.8.40

https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /etc/harbor/ssl/harbor.crt
  private_key: /etc/harbor/ssl/harbor.key

Install
./install.sh

Commands

# docker pull registry.cn-beijing.aliyuncs.com/midas/kubernetes-dashboard-amd64:v1.10.1
# docker tag registry.cn-beijing.aliyuncs.com/midas/kubernetes-dashboard-amd64:v1.10.1 10.1.8.40/library/kubernetes-dashboard-amd64:v1.10.1
# docker push 10.1.8.40/library/kubernetes-dashboard-amd64:v1.10.1
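
Pushing requires an authenticated session first (a sketch; the username and password are the admin credentials configured at install time):

# docker login 10.1.8.40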
