Cloud Computing Container Orchestration Technology — Courseware, Unit 3: Setting up the Kubernetes Dashboard
Kubernetes HA Cluster - Base Environment

Contents
01 Node environment preparation
02 Case implementation

Learning objectives
[Knowledge]

● Master the key points of configuring the cluster base environment.

[Skills]

● Be able to perform basic troubleshooting of problems that arise during setup.

1 Node environment preparation

1 Initialize the lab environment - node plan

Role           IP address   Hostname   Installed components
Control node   0            Master-1   apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx
Control node   0            Master-2   apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx
Worker node    0            Node-1     kubelet, kube-proxy, docker, calico, coredns

1 Initialize the lab environment - software versions

Software     Version
Linux OS     CentOS Linux release 7.7.1908 (Core)
Docker       docker-ce-20.10.6
Kubernetes   1.20.6
Kubeadm      kubeadm-1.20.6-0.x86_64
etcd         3.3.10
calico       v3.18.0

1 Initialize the lab environment - hardware

Recommended configuration: 4 vCPU + 4 G memory + 50 G disk
Minimum configuration:     2 vCPU + 2 G memory + 50 G disk
Network: NAT

1 Initialize the lab environment - local k8s network plan

podSubnet (pod network): /16
serviceSubnet (service network): /16

2 Case implementation

2 Configure a static IP
Following the deployment plan, set the IP address on all three nodes; master1 is shown as an example.
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
...
ONBOOT=yes
IPADDR=192.168.116.10
PREFIX=24
GATEWAY=192.168.116.2
DNS1=14

After editing the configuration file, restart the network service for the change to take effect:
[root@localhost ~]# systemctl restart network

2 Configure hostnames
On master1:
hostnamectl set-hostname master1 && bash
On master2:
hostnamectl set-hostname master2 && bash
On node1:
hostnamectl set-hostname node1 && bash

2 Configure the hosts file
The three nodes reach each other by hostname. On every machine, add the following three lines to /etc/hosts:
0 master1
0 master2
0 node1

2 Configure passwordless SSH between hosts
Generate a key pair, then copy the locally generated public key to each remote host:
[root@master1 ~]# ssh-keygen
[root@master1 ~]# ssh-copy-id master1
[root@master1 ~]# ssh-copy-id master2
[root@master1 ~]# ssh-copy-id node1

Repeat the same steps on master2:
[root@master2 ~]# ssh-keygen
[root@master2 ~]# ssh-copy-id master1
[root@master2 ~]# ssh-copy-id master2
[root@master2 ~]# ssh-copy-id node1

And on node1:
[root@node1 ~]# ssh-keygen
[root@node1 ~]# ssh-copy-id master1
[root@node1 ~]# ssh-copy-id master2
[root@node1 ~]# ssh-copy-id node1

2 Disable the swap partition
Same configuration on all three nodes; master1 is shown as an example.
(1) Disable temporarily:
[root@master1 ~]# swapoff -a
(2) Disable permanently by commenting out the swap entry in /etc/fstab:
[root@master1 ~]# vi /etc/fstab
#/etc/fstab
#Created by anaconda on Mon Jul 24 11:15:14 2023
#Accessible filesystems, by reference, are maintained under '/dev/disk'
#See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=ab12c361-6848-4035-bc98-bf8a242efa2a /     xfs defaults 0 0
UUID=628bfe49-1b47-40c7-9182-45383cfe840b /boot xfs defaults 0 0
#UUID=5bfdd074-35c2-40ac-a57f-b41de7d81dd5 swap swap defaults 0 0
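The permanent step above can also be scripted instead of edited in vi. A minimal sketch of the comment-out step, run against a temporary copy with hypothetical sample entries so the real /etc/fstab is untouched:

```shell
# Work on a temporary copy, not the real /etc/fstab.
fstab_copy=$(mktemp)
cat > "$fstab_copy" <<'EOF'
UUID=ab12c361 /     xfs  defaults 0 0
UUID=628bfe49 /boot xfs  defaults 0 0
UUID=5bfdd074 swap  swap defaults 0 0
EOF

# Prefix every still-active line that mounts a swap filesystem with '#'.
sed -i '/^[^#].*[[:space:]]swap[[:space:]]/s/^/#/' "$fstab_copy"

cat "$fstab_copy"
```

Disabling swap permanently matters because kubelet refuses to start by default while swap is enabled.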

2 Adjust kernel parameters
Run the same configuration on all three nodes; master1 is shown as an example.
[root@master1 ~]# modprobe br_netfilter
[root@master1 ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@master1 ~]# cat > /etc/sysctl.d/Kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@master1 ~]# sysctl -p /etc/sysctl.d/Kubernetes.conf

2 Disable the firewalld firewall
[root@master1 ~]# systemctl stop firewalld ; systemctl disable firewalld
[root@master2 ~]# systemctl stop firewalld ; systemctl disable firewalld
[root@node1 ~]# systemctl stop firewalld ; systemctl disable firewalld

2 Disable SELinux
Same configuration on all three nodes; master1 is shown as an example.
[root@master1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
After changing the SELinux configuration file, reboot the machine for the change to take effect permanently.
[root@master1 ~]# getenforce
Disabled    # "Disabled" means SELinux is now off

2 Configure time synchronization
Install the chrony service on all nodes:
[root@master1 ~]# yum install -y chrony
[root@master2 ~]# yum install -y chrony
[root@node1 ~]# yum install -y chrony
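The chrony configuration step that follows comments out the default `server` lines with `sed 's/^server/#&'`. In sed, `&` stands for the whole matched text, so the substitution prefixes each line instead of replacing it. A minimal sketch on a temporary file, using sample lines modeled on the CentOS defaults (hypothetical content, not your real /etc/chrony.conf):

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
EOF

# '&' re-inserts the match ("server"), turning "server ..." into
# "#server ..." while keeping the rest of the line intact.
sed -i 's/^server/#&/' "$conf"

cat "$conf"
```

Lines that do not start with `server` (such as `driftfile`) are left untouched.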

2 Configure time synchronization - master1
On master1, edit /etc/chrony.conf: comment out the default NTP servers, point at an upstream public NTP server, and allow the other nodes to synchronize from this host.
[root@master1 ~]# sed -i 's/^server/#&/' /etc/chrony.conf
[root@master1 ~]# cat >> /etc/chrony.conf <<EOF
local stratum 10
server master iburst
allow all
EOF

Restart chronyd, enable it at boot, and turn on network time synchronization:
[root@master1 ~]# systemctl enable chronyd && systemctl restart chronyd
[root@master1 ~]# timedatectl set-ntp true

2 Configure time synchronization - master2 and node1
On master2 and node1, edit /etc/chrony.conf to use the internal master1 node as the upstream NTP server, then restart the service and enable it at boot; node1 is shown as an example.
[root@node1 ~]# sed -i 's/^server/#&/' /etc/chrony.conf
[root@node1 ~]# echo server 0 iburst >> /etc/chrony.conf
[root@node1 ~]# systemctl enable chronyd && systemctl restart chronyd

2 Enable IPVS
Configure the IPVS kernel modules; all three nodes need this.
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- br_netfilter
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && \
bash /etc/sysconfig/modules/ipvs.modules && \
lsmod | grep -E "ip_vs|nf_conntrack_ipv4"
The loaded modules are listed in the output (shown on the slide).

2 Install base packages
Install the base software packages on all three nodes; master1 is shown as an example.
[root@master1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet

2 Install iptables
If you prefer iptables over firewalld, install it on master1, master2 and node1:
# yum install iptables-services -y
Disable iptables:
# service iptables stop && systemctl disable iptables
Flush the firewall rules:
# iptables -F
# iptables -X
# iptables -Z
# /usr/sbin/iptables-save

Thank You!

Kubernetes HA Cluster - Installing Docker

Contents
01 Environment preparation
02 Installing the Docker service

Learning objectives
[Knowledge]

● Master yum repository configuration.
[Skills]

● Be able to install the Docker service.

1 Environment preparation

1 Configure yum repositories
Configure the Aliyun CentOS yum repository on all three nodes; Master1 is shown as an example.
[root@master1 ~]# mv /etc/yum.repos.d/* /home
[root@master1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo /repo/Centos-7.repo

Configure the Aliyun Docker yum repository on all three nodes; Master1 is shown as an example.
[root@master1 ~]# yum-config-manager --add-repo /docker-ce/linux/centos/docker-ce.repo
[root@master1 ~]# sed -i 's++/docker-ce+' /etc/yum.repos.d/docker-ce.repo

Configure the Aliyun Kubernetes yum repository on all three nodes; Master1 is shown as an example.
[root@master1 ~]# cat <<EOF > /etc/yum.repos.d/Kubernetes.repo
[Kubernetes]
name=Kubernetes
baseurl=/Kubernetes/yum/repos/Kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=/Kubernetes/yum/doc/yum-key.gpg /Kubernetes/yum/doc/rpm-package-key.gpg
EOF

2 Install the Docker service
Deploy the Docker service on every node; Master1 is shown as an example.
[root@master1 ~]# yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io -y
[root@master1 ~]# systemctl start docker && systemctl enable docker.service

2 Configure the registry mirror and cgroup driver
[root@master1 ~]# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [""]
}
EOF
[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl restart docker
[root@master1 ~]# systemctl status docker
master1 is shown as an example; repeat the same steps on master2 and node1.

Thank You!

Kubernetes HA Cluster - High Availability Configuration

Contents
01 Package preparation
02 High availability configuration

Learning objectives
[Knowledge]
● Understand how to use the kubeadm tool.

● Master high-availability configuration of a Kubernetes cluster.
[Skills]

● Be able to use keepalived + nginx to make the k8s apiserver highly available.

● Be able to perform basic troubleshooting of problems that arise during setup.

1 Package preparation

1 Install the packages
On all three nodes, install the packages needed to initialize Kubernetes, then start the kubelet service and enable it at boot; master1 is shown as an example:
[root@master1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@master1 ~]# systemctl enable kubelet
[root@master1 ~]# systemctl start kubelet

Check the kubelet service status (output shown on the slide):
[root@master1 ~]# systemctl status kubelet

2 High availability configuration

2 Configure the epel repository
apiserver high availability is implemented with keepalived + nginx. Upload epel.repo to /etc/yum.repos.d on master1, then copy it to the remote hosts master2 and node1:
[root@master1 ~]# scp /etc/yum.repos.d/epel.repo master2:/etc/yum.repos.d/
[root@master1 ~]# scp /etc/yum.repos.d/epel.repo node1:/etc/yum.repos.d/

2 Install nginx and keepalived
Install nginx as an active/standby pair on master1 and master2:
[root@master1 ~]# yum install nginx keepalived -y
[root@master2 ~]# yum install nginx keepalived -y

2 Edit the nginx configuration file (master1)
[root@master1 ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the apiserver components on the two masters
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 0:6443;   # Master1 APISERVER IP:PORT
        server 0:6443;   # Master2 APISERVER IP:PORT
    }

    server {
        listen 16443;    # nginx shares the host with the master, so it cannot listen on 6443 or it would conflict
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80 default_server;
        server_name _;
        location / {
        }
    }
}

2 Add the nginx stream module
On the master2 node:
[root@master2 keepalived]# yum install nginx-mod-stream -y

2 Edit the nginx configuration file (master2)
[root@master2 ~]# vim /etc/nginx/nginx.conf
...(identical to the master1 nginx.conf above)...

2 Primary keepalived configuration (master1)
[root@master1 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33          # change to the actual NIC name
    virtual_router_id 51     # VRRP route ID; unique per instance
    priority 100             # priority; set 90 on the backup
    advert_int 1             # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        99/24

        # virtual IP
    }
    track_script {
        check_nginx
    }
}

2 Primary keepalived configuration - health check script
Create the nginx health-check script referenced in the configuration above:
[root@master1 ~]# vim /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
if [ "$count" -eq 0 ]; then
    systemctl stop keepalived
fi
[root@master1 ~]# chmod +x /etc/keepalived/check_nginx.sh

2 Backup keepalived configuration (master2)
[root@master2 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51     # VRRP route ID; unique per instance
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        99/24
    }
    track_script {
        check_nginx
    }
}

2 Backup keepalived configuration - health check script
Create the same nginx health-check script on master2:
[root@master2 ~]# vim /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
if [ "$count" -eq 0 ]; then
    systemctl stop keepalived
fi
[root@master2 ~]# chmod +x /etc/keepalived/check_nginx.sh

2 Start the services
master1 and master2 are configured the same way; master1 is shown as an example.
[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl start nginx
[root@master1 ~]# systemctl start keepalived
[root@master1 ~]# systemctl enable nginx keepalived
[root@master1 ~]# systemctl status keepalived

2 Test that the VIP is bound
Check the VIP on master1 with ip addr:
[root@master1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state ...
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:91:f9:12 brd ff:ff:ff:ff:ff:ff
    inet 192.168.116.10/24 brd 55 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 99/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::5dc1:f326:2132:7365/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

2 Test keepalived failover
Stop nginx on master1; the VIP drifts to master2:
[root@master1 ~]# service nginx stop
[root@master2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state ...
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:79:9e:36 brd ff:ff:ff:ff:ff:ff
    inet 0/24 brd 55 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 99/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::b6ef:8646:1cfc:3e0c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

# Start nginx and keepalived on master1 again and the VIP drifts back:
[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl start nginx
[root@master1 ~]# systemctl start keepalived

Thank You!

Kubernetes HA Cluster Deployment

Contents
01 Master node initialization
02 Adding a master node
03 Adding a node

Learning objectives
[Knowledge]
● Master Kubernetes cluster initialization.

● Master adding a master node to a Kubernetes cluster.

● Master adding a worker node to a Kubernetes cluster.
[Skills]

● Be able to build a Kubernetes cluster independently.

● Be able to perform basic troubleshooting of problems that arise during setup.

1 Master node initialization

1 Create the kubeadm configuration
Create the kubeadm-config.yaml file on master1:
[root@master1 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
controlPlaneEndpoint: 99:16443
imageRepository: /google_containers
apiServer:
  certSANs:
  - 0
  - 0
  - 0
  - 99
networking:
  podSubnet: /16
  serviceSubnet: /16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

1 Load the offline images
# Upload the offline image bundle needed to initialize the cluster to master1, master2 and node1, and load it by hand:
[root@master1 ~]# docker load -i k8simage-1-20-6.tar.gz
[root@master2 ~]# docker load -i k8simage-1-20-6.tar.gz
[root@node1 ~]# docker load -i k8simage-1-20-6.tar.gz

1 Initialize the cluster
[root@master1 ~]# kubeadm init --config kubeadm-config.yaml
Output like the slide's screenshot means the installation has completed.

1 Configure the kubectl tool
[root@master1 ~]# mkdir -p $HOME/.kube
[root@master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES                  AGE   VERSION
master1   NotReady   control-plane,master   60s   v1.20.6

2 Adding a master node

2 Copy the certificates
Create the certificate directories on master2:
[root@master2 ~]# cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/

# Copy the master1 certificates to master2:
[root@master1 ~]# scp /etc/kubernetes/pki/ca.crt master2:/etc/kubernetes/pki/
[root@master1 ~]# scp /etc/kubernetes/pki/ca.key master2:/etc/kubernetes/pki/
[root@master1 ~]# scp /etc/kubernetes/pki/sa.key master2:/etc/kubernetes/pki/
[root@master1 ~]# scp /etc/kubernetes/pki/sa.pub master2:/etc/kubernetes/pki/
[root@master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt master2:/etc/kubernetes/pki/
[root@master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.key master2:/etc/kubernetes/pki/
[root@master1 ~]# scp /etc/kubernetes/pki/etcd/ca.crt master2:/etc/kubernetes/pki/etcd/
[root@master1 ~]# scp /etc/kubernetes/pki/etcd/ca.key master2:/etc/kubernetes/pki/etcd/

2 Add the master node
Print the join command on master1:
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 99:16443 --token 74utu6.e9lj5503t3gi1zob --discovery-token-ca-cert-hash sha256:5ad501fa00cb041581731cb67be67b4edf172f521ad3ec1296af131725998c03 \
--control-plane
...
Copy the command and run it on master2; the output is shown on the slide.

2 Configure the kubectl tool
[root@master2 ~]# mkdir -p $HOME/.kube
[root@master2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

2 Check the cluster
Check the cluster status on master1:
[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES                  AGE     VERSION
master1   NotReady   control-plane,master   7m45s   v1.20.6
master2   NotReady   control-plane,master   91s     v1.20.6

3 Adding a node

3 Add the node
Print the join command on master1:
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 99:16443 --token ul88gb.mcdxogu43a17ti1r --discovery-token-ca-cert-hash sha256:5ad501fa00cb041581731cb67be67b4edf172f521ad3ec1296af131725998c03
...
Copy the command and run it on node1; the output is shown on the slide.

3 Check the cluster
# Check the cluster nodes on master1:
[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES                  AGE     VERSION
master1   NotReady   control-plane,master   10m     v1.20.6
master2   NotReady   control-plane,master   3m50s   v1.20.6
node1     NotReady   <none>                 14s     v1.20.6

Change node1's ROLES to worker:
[root@master1 ~]# kubectl label node node1 node-role.kubernetes.io/worker=worker

With no network plugin installed yet, the coredns-* pods stay in the Pending state:
[root@master1 ~]# kubectl get pods -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-chw6p   0/1     Pending   0          10m
coredns-7f89b7bc75-kqp4k   0/1     Pending   0          10m
...

Thank You!

Kubernetes HA Cluster - Usage and Testing

Contents
01 Installing the network plugin
02 Testing the cluster

Learning objectives
[Knowledge]

● Master the use of Kubernetes network plugins.

● Basic use of the cluster.
[Skills]

● Be able to perform basic troubleshooting of problems that arise during setup.

1 Installing the network plugin

1 Install calico
Upload calico.yaml to master1 and install the calico network plugin from the yaml file. Download the calico manifest yourself from: /manifests/calico.yaml
[root@master1 ~]# kubectl apply -f calico.yaml
[root@master ~]# kubectl get pods -n kube-system

1 Verify the plugin
Output like the following shows that the network plugin is working normally:
[root@master1 ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6949477b58-n6smq   1/1     Running   0          28s
calico-node-2m9k6                          1/1     Running   0          29s
calico-node-l9pjt                          1/1     Running   0          29s
calico-node-s6ksv                          1/1     Running   0          29s
coredns-7f89b7bc75-chw6p                   1/1     Running   0          15m
coredns-7f89b7bc75-kqp4k                   1/1     Running   0          15m
...

2 Testing the cluster

2 Test cluster networking
Upload busybox-1-28.tar.gz to the node1 node and load it by hand:
[root@node1 ~]# docker load -i busybox-1-28.tar.gz
[root@master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
/ # ping
PING (8): 56 data bytes
64 bytes from 8: seq=0 ttl=127 time=39.3 ms

2 Test cluster health
# Upload tomcat.tar.gz to node1 and load it by hand; upload the tomcat manifests to master1
[root@node1 ~]# docker load -i tomcat.tar.gz
[root@master1 ~]# kubectl apply -f tomcat.yaml
[root@master1 ~]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
demo-pod   1/1     Running   0          10s
[root@master1 ~]# kubectl apply -f tomcat-service.yaml
[root@master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP                <none>        443/TCP          67m
tomcat       NodePort    48           <none>        8080:30080/TCP   15s

Visiting ip:30080 on any cluster node in a browser reaches the tomcat page.

2 Test the coredns component
[root@master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server:    0
Address 1: 0 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default.svc.cluster.local
Address 1: kubernetes.default.svc.cluster.local

/ # nslookup tomcat.default.svc.cluster.local
Server:    0
Address 1: 0 kube-dns.kube-system.svc.cluster.local
Name:      tomcat.default.svc.cluster.local
Address 1: 48 tomcat.default.svc.cluster.local
48 is the clusterIP of our coreDNS, which shows coreDNS is configured correctly. Internal Service names are resolved through coreDNS. 0 is the service IP of the tomcat service we created.

Thank You!

Kubernetes - Installing the Dashboard

Contents
01 Kubernetes resource management methods
02 Installing the dashboard

Learning objectives
[Knowledge]
● Master the four ways of managing Kubernetes resources.

● Master installing the dashboard.
[Skills]

● Be able to install the Kubernetes dashboard.

1 Kubernetes resource management methods

1 Overview
There are usually four ways to manage the resources in a Kubernetes cluster: the command line, YAML, the API, and the dashboard.

1 Command line
kubectl: manages Kubernetes resources from the command line. kubectl is convenient and fast for managing cluster resources, but its functionality is relatively limited, some operations are not supported, and it has a learning curve.

1 YAML files
YAML: a language designed for configuration files. YAML resource definitions are fully featured and can define all Kubernetes objects and resources, but the bar is higher: professional skills are required and troubleshooting is harder.

1 API access
Kubernetes API access: SDKs are provided for many programming languages so that applications can integrate easily. The API approach fits all major languages, such as Java, Go, Python and C, but the bar is high and it suits developers.

1 Dashboard
Kubernetes-dashboard: provides a graphical management interface and, together with metric-server, can monitor nodes and pods. The web UI is simple, convenient and suits a broad audience, but its functionality is relatively basic.

2 Installing the dashboard

2 What the dashboard shows
The dashboard displays both the resource state of the Kubernetes cluster and all error messages.

2 Upload the images
Upload the images needed by Kubernetes-dashboard to the worker node node1 and load them by hand:
[root@node1 ~]# docker load -i dashboard_2_0_0.tar.gz
[root@node1 ~]# docker load -i metrics-scrapter-1-0-1.tar.gz

2 Install the dashboard
Install the dashboard component by running the following on the master1 node:
[root@master1 ~]# kubectl apply -f Kubernetes-dashboard.yaml
Check the dashboard status:
[root@master1 ~]# kubectl get pods -n Kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7445d59dfd-vxgcr   1/1     Running   0          3m25s
Kubernetes-dashboard-54f5b6dc4b-l6s7q        1/1     Running   0          3m25s

2 Check the dashboard front-end service
[root@master1 ~]# kubectl get svc -n Kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   25           <none>        8000/TCP
Kubernetes-dashboard        ClusterIP   9            <none>        443/TCP

2 Change the service type
Change the service type to NodePort:
[root@master1 ~]# kubectl edit svc Kubernetes-dashboard -n Kubernetes-dashboard
Change type: ClusterIP to type: NodePort, then save and exit.
[root@master1 ~]# kubectl get svc -n Kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   25           <none>        8000/TCP        4m2s
Kubernetes-dashboard        NodePort    9            <none>        443:32509/TCP   4m2s

2 Access the dashboard
Visit ip:32509 on any cluster node to reach the Kubernetes dashboard.

2 Access the dashboard with a token
Access the dashboard with a token: create an administrator token that can view any namespace and manage all resource objects.
[root@master1 ~]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=Kubernetes-dashboard:Kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created
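The dashboard Service above was assigned NodePort 32509, and the tomcat Service in the testing unit used 30080. Both must fall inside the kube-apiserver's default --service-node-port-range of 30000-32767; a quick shell check of that constraint:

```shell
# Default NodePort range of the kube-apiserver.
range_lo=30000
range_hi=32767

# Check the two NodePorts used in this unit against the range.
for port in 32509 30080; do
    if [ "$port" -ge "$range_lo" ] && [ "$port" -le "$range_hi" ]; then
        echo "$port: in range"
    else
        echo "$port: outside the NodePort range"
    fi
done
```

A port outside this range is rejected when the Service is created, unless the apiserver was started with a different --service-node-port-range.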
