Kubernetes -- 2 -- lab notes -- my version

@(Kubernetes)

Table of Contents


This article walks through building a Kubernetes platform on CentOS 7.

Environment

master:192.168.142.8
node01:192.168.142.16
node02:192.168.142.18

Plan

master: deploy etcd, kube-apiserver, kube-controller-manager, and kube-scheduler (4 components).

node01: deploy docker, kubelet, and kube-proxy (3 components).

node02: deploy docker, kubelet, and kube-proxy (3 components).

Preparation

hosts

Modify /etc/hosts on each of the three hosts:

192.168.142.8 master
192.168.142.16 node01
192.168.142.18 node02
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.142.8    master
192.168.142.16    node01
192.168.142.18    node02

firewall


Set the firewalld default zone to trusted on all three hosts:

#firewall-cmd --set-default-zone=trusted
[root@master ~]# firewall-cmd --set-default-zone=trusted 
success
[root@master ~]# firewall-cmd --list-all
trusted (default, active)
  interfaces: eno16777728
  sources: 
  services: 
  ports: 
  masquerade: no
  forward-ports: 
  icmp-blocks: 
  rich rules:

selinux


SELinux must be disabled (or at least permissive) on all three hosts:

[root@master ~]# getenforce 
Permissive
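Permissive already keeps SELinux from blocking anything at runtime; to make the setting survive a reboot, a minimal sketch (assuming the stock /etc/selinux/config layout):

#setenforce 0
#sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
#getenforce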

install docker


Install Docker on node01 and node02.

node01: container subnet 172.17.1.0/24

node02: container subnet 172.17.2.0/24

#yum -y install docker
#systemctl enable docker
#systemctl start docker

Check the Docker version:

[root@master ~]# docker version 
Client:
 Version:         1.10.3
 API version:     1.22
 Package version: docker-common-1.10.3-44.el7.centos.x86_64
 Go version:      go1.4.2
 Git commit:      9419b24-unsupported
 Built:           Fri Jun 24 12:09:49 2016
 OS/Arch:         linux/amd64

Server:
 Version:         1.10.3
 API version:     1.22
 Package version: docker-common-1.10.3-44.el7.centos.x86_64
 Go version:      go1.4.2
 Git commit:      9419b24-unsupported
 Built:           Fri Jun 24 12:09:49 2016
 OS/Arch:         linux/amd64

I. Enable container-to-container connectivity between node01 and node02

Install net-tools and bridge-utils

#yum install net-tools bridge-utils
[root@master ~]# rpm -qa | grep net-tools
net-tools-2.0-0.17.20131004git.el7.x86_64
[root@master ~]# rpm -qa | grep bridge-utils
bridge-utils-1.5-9.el7.x86_64

Enable IP forwarding

#vi /etc/sysctl.conf    # add: net.ipv4.ip_forward = 1
#sysctl -p
[root@master ~]# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1

Create the kbr0 bridge on node01 and node02 and give it a fixed IP

On node01:

[root@node01 ~]# systemctl stop docker
[root@node01 ~]# brctl addbr kbr0
[root@node01 ~]# ip link set dev docker0 down
[root@node01 ~]# ip link del dev docker0
[root@node01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-kbr0
DEVICE=kbr0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.17.1.1
NETMASK=255.255.255.0
GATEWAY=172.17.1.0
USERCTL=no
TYPE=Bridge
IPV6INIT=no
[root@node01 ~]# cat /etc/sysconfig/network-scripts/route-eno16777736
172.17.2.0/24 via 192.168.142.18 dev eno16777736

Note: the part of route-eno16777736 after route- is node01's NIC device name (check with ifconfig).

On node02:

[root@node02 ~]# systemctl stop docker 
[root@node02 ~]# brctl addbr kbr0
[root@node02 ~]# ip link set dev docker0 down
[root@node02 ~]# ip link del dev docker0
[root@node02 ~]# vim /etc/sysconfig/network-scripts/ifcfg-kbr0
DEVICE=kbr0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.17.1.1
NETMASK=255.255.255.0
GATEWAY=172.17.1.0
USERCTL=no
TYPE=Bridge
IPV6INIT=no
[root@node02 ~]# cat /etc/sysconfig/network-scripts/route-eno16777728
172.17.1.0/24 via 192.168.142.16 dev eno16777728

Modify the Docker config file to add the -b kbr0 option

vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled -b=kbr0'
[root@node01 ~]# cat /etc/sysconfig/docker | grep OPTIONS
OPTIONS='--selinux-enabled --log-driver=journald -b kbr0'
[root@node02 ~]# sed -i "s/OPTIONS='--selinux-enabled --log-driver=journald'/OPTIONS='--selinux-enabled --log-driver=journald -b kbr0'/g" /etc/sysconfig/docker
[root@node02 ~]# cat /etc/sysconfig/docker | grep OPTIONS
OPTIONS='--selinux-enabled --log-driver=journald -b kbr0'
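Short of the full reboot done below, restarting the network and docker services would normally apply the same changes; a sketch:

#systemctl restart network docker
#brctl show kbr0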

Reboot

After the reboot, check the interface status and details:

network

[root@node01 ~]# systemctl status network
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network)
   Active: active (exited) since Wed 2016-08-24 20:22:39 CST; 1min 22s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 1602 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)

Aug 24 20:22:38 node01 systemd[1]: Starting LSB: Bring up/down networking...
Aug 24 20:22:38 node01 network[1602]: Bringing up loopback interface:  Could not load file '/etc/sysconfig/net...g-lo'
Aug 24 20:22:38 node01 network[1602]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Aug 24 20:22:39 node01 network[1602]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Aug 24 20:22:39 node01 network[1602]: Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Aug 24 20:22:39 node01 network[1602]: [  OK  ]
Aug 24 20:22:39 node01 network[1602]: Bringing up interface eno16777736:  [  OK  ]
Aug 24 20:22:39 node01 network[1602]: Bringing up interface kbr0:  [  OK  ]
Aug 24 20:22:39 node01 systemd[1]: Started LSB: Bring up/down networking.
Hint: Some lines were ellipsized, use -l to show in full.

kbr0

[root@node01 ~]# ifconfig 
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.142.16  netmask 255.255.255.0  broadcast 192.168.142.255
        inet6 fe80::20c:29ff:fe68:c48d  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:68:c4:8d  txqueuelen 1000  (Ethernet)
        RX packets 254  bytes 23592 (23.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 166  bytes 25125 (24.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

kbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.1.1  netmask 255.255.255.0  broadcast 172.17.1.255
        inet6 fe80::c04f:82ff:fe8a:7c6c  prefixlen 64  scopeid 0x20<link>
        ether c2:4f:82:8a:7c:6c  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 132 (132.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Inspect the kbr0 bridge and the routing table

[root@node01 ~]# brctl show
bridge name    bridge id        STP enabled    interfaces
kbr0        8000.000000000000    no        
[root@node01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.142.2   0.0.0.0         UG    100    0        0 eno16777736
172.17.1.0      0.0.0.0         255.255.255.0   U     425    0        0 kbr0
172.17.2.0      192.168.142.18  255.255.255.0   UG    100    0        0 eno16777736
192.168.142.0   0.0.0.0         255.255.255.0   U     100    0        0 eno16777736
[root@node02 ~]# brctl show
bridge name    bridge id        STP enabled    interfaces
kbr0        8000.000000000000    no        
[root@node02 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.142.2   0.0.0.0         UG    100    0        0 eno16777728
172.17.1.0      192.168.142.16  255.255.255.0   UG    100    0        0 eno16777728
172.17.1.0      0.0.0.0         255.255.255.0   U     425    0        0 kbr0
192.168.142.0   0.0.0.0         255.255.255.0   U     100    0        0 eno16777728

Check that the Docker service is running

[root@node01 ~]# systemctl status docker | grep Active
   Active: active (running) since Wed 2016-08-24 20:22:59 CST; 5min ago
[root@node02 ~]# systemctl status docker | grep Active
   Active: active (running) since Wed 2016-08-24 20:22:48 CST; 5min ago

Verify connectivity between containers on the two hosts

Run a container on the node01 host:

[root@node01 ~]# docker images 
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
docker.io/jasonperma/centos6_http_ssh   latest              2460b63219d7        3 weeks ago         291.9 MB
centos                                  http                b097bfe56b56        4 weeks ago         291.7 MB
docker.io/centos                        latest              50dae1ee8677        5 weeks ago         196.7 MB
docker.io/centos                        centos6             cf2c3ece5e41        7 weeks ago         194.6 MB
[root@node01 ~]# docker run -it  docker.io/centos:latest 
[root@603cd22b65e6 /]# yum -y install net-tools
[root@603cd22b65e6 /]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.1.2  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::42:acff:fe11:102  prefixlen 64  scopeid 0x20<link>
        ether 02:42:ac:11:01:02  txqueuelen 0  (Ethernet)
        RX packets 2418  bytes 13729004 (13.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2090  bytes 117412 (114.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Run a container on the node02 host:

[root@node02 ~]# docker run -it docker.io/centos:latest 
[root@c02187ee8ece /]# yum -y install net-tools
...// output omitted
[root@c02187ee8ece /]# ifconfig            
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.2.2  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::42:acff:fe11:202  prefixlen 64  scopeid 0x20<link>
        ether 02:42:ac:11:02:02  txqueuelen 0  (Ethernet)
        RX packets 3867  bytes 13807475 (13.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2950  bytes 162908 (159.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Ping between the two containers:

[root@c02187ee8ece /]# ping 172.17.1.2
PING 172.17.1.2 (172.17.1.2) 56(84) bytes of data.
64 bytes from 172.17.1.2: icmp_seq=1 ttl=62 time=0.525 ms
64 bytes from 172.17.1.2: icmp_seq=2 ttl=62 time=0.517 ms
^C
--- 172.17.1.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.517/0.521/0.525/0.004 ms
[root@603cd22b65e6 /]# ping 172.17.2.2
PING 172.17.2.2 (172.17.2.2) 56(84) bytes of data.
64 bytes from 172.17.2.2: icmp_seq=1 ttl=62 time=0.918 ms
64 bytes from 172.17.2.2: icmp_seq=2 ttl=62 time=0.803 ms
^C
--- 172.17.2.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1005ms
rtt min/avg/max/mdev = 0.803/0.860/0.918/0.064 ms
Note: the `node02` `kbr0` config file above contained an error in the `IP address`; it was changed to the following, and `node02` was rebooted:
[root@node02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-kbr0 
DEVICE=kbr0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.17.2.1
NETMASK=255.255.255.0
GATEWAY=172.17.2.0
USERCTL=no
TYPE=Bridge
IPV6INIT=no

Every time a new host is added, the route config file on every existing host must be updated; with many hosts this quickly becomes painful (a sketch of the bookkeeping follows below).

Open-source dynamic-routing software, such as Quagga and Zebra, exists to meet exactly this need.

For cross-host container networking, the Kubernetes website also offers several alternatives, such as L2 networking, Flannel, and Open vSwitch.
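To make that bookkeeping concrete, here is a hypothetical sketch of what adding a third node would require on node01 alone; node03, 192.168.142.20, and 172.17.3.0/24 are made-up values, and eno16777736 is node01's NIC:

#echo '172.17.3.0/24 via 192.168.142.20 dev eno16777736' >> /etc/sysconfig/network-scripts/route-eno16777736
#ip route add 172.17.3.0/24 via 192.168.142.20

The same entry would have to be added on the master and every other node, once per new subnet.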

II. Deploy components on master (192.168.142.8)

Deploy etcd

Install etcd

[root@master ~]# yum list | grep etcd
etcd.x86_64                                2.3.7-2.el7                 @extras
[root@master ~]# yum -y install etcd

Locate the etcd config file

[root@master ~]# rpm -qc etcd
/etc/etcd/etcd.conf

Modify the etcd config file

[root@master ~]# vim /etc/etcd/etcd.conf 
[root@master ~]# cat /etc/etcd/etcd.conf | grep -v ^# | grep -v ^$
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://192.168.142.8:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.142.8:2379"

etcd configuration options explained

| Option | Purpose |
| --- | --- |
| ETCD_NAME=default | etcd node name; defaults to default, and this name is used again later |
| ETCD_DATA_DIR="/var/lib/etcd/default.etcd" | directory where etcd stores its data |
| ETCD_LISTEN_CLIENT_URLS="http://192.168.142.8:2379" | URL and port for client traffic |
| ETCD_ADVERTISE_CLIENT_URLS="http://192.168.142.8:2379" | URL advertised to clients |

Start the etcd service

[root@master ~]# systemctl enable etcd.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@master ~]# systemctl start etcd.service 
[root@master ~]# systemctl status etcd.service 
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-08-24 21:00:30 CST; 12s ago
 Main PID: 5958 (etcd)
   CGroup: /system.slice/etcd.service
           └─5958 /usr/bin/etcd --name=default --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://19...

Check the listening ports

[root@master ~]# netstat -ntpl | grep etcd
tcp        0      0 192.168.142.8:2379      0.0.0.0:*               LISTEN      5958/etcd           
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      5958/etcd           
tcp        0      0 127.0.0.1:7001          0.0.0.0:*               LISTEN      5958/etcd
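As a quick sanity check that etcd answers on the client URL, a sketch using etcdctl (it ships with the etcd package; the -C flag points it at the client URL, per etcd 2.x usage):

#etcdctl -C http://192.168.142.8:2379 cluster-health
#etcdctl -C http://192.168.142.8:2379 set /test ok
#etcdctl -C http://192.168.142.8:2379 get /test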

III. Deploy the k8s master components

apiserver+controller-manager+scheduler

[root@master ~]# yum list | grep kube
cockpit-kubernetes.x86_64                  0.114-2.el7.centos          extras   
kubernetes.x86_64                          1.2.0-0.13.gitec7364b.el7   extras   
kubernetes-client.x86_64                   1.2.0-0.13.gitec7364b.el7   extras   
kubernetes-master.x86_64                   1.2.0-0.13.gitec7364b.el7   extras   
kubernetes-node.x86_64                     1.2.0-0.13.gitec7364b.el7   extras   
kubernetes-unit-test.x86_64                1.2.0-0.13.gitec7364b.el7   extras
[root@master ~]# yum -y install kubernetes-master
[root@master ~]# rpm -qa | grep kube
kubernetes-master-1.2.0-0.13.gitec7364b.el7.x86_64
kubernetes-client-1.2.0-0.13.gitec7364b.el7.x86_64

Inspect the master Kubernetes config files

[root@master ~]# cd /etc/kubernetes/
[root@master kubernetes]# ls -l
total 16
-rw-r--r--. 1 root root 767 Aug  4 20:59 apiserver
-rw-r--r--. 1 root root 655 Aug  4 20:59 config
-rw-r--r--. 1 root root 189 Aug  4 20:59 controller-manager
-rw-r--r--. 1 root root 111 Aug  4 20:59 scheduler

Modify the master Kubernetes config file

/etc/kubernetes/config
[root@master kubernetes]# vim /etc/kubernetes/config 
[root@master kubernetes]# grep -v ^# /etc/kubernetes/config | grep -v ^$
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.142.8:8080"

Kubernetes config options explained

| Option | Purpose |
| --- | --- |
| KUBE_LOGTOSTDERR | log destination |
| KUBE_LOG_LEVEL | log verbosity |
| KUBE_ALLOW_PRIV | whether privileged containers may run |
| KUBE_MASTER | master address, so that the replication controller, scheduler, and kubelet can find the apiserver |

Modify the master apiserver config file

/etc/kubernetes/apiserver

Note: the ServiceAccount entry included in the default KUBE_ADMISSION_CONTROL must be removed here, otherwise the API server fails to start. A one-liner for this follows.
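A sketch of the removal, assuming the stock default admission-control line that lists ServiceAccount:

#sed -i 's/ServiceAccount,//' /etc/kubernetes/apiserver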

[root@master ~]# vim /etc/kubernetes/apiserver 
[root@master ~]# grep -v ^# /etc/kubernetes/apiserver | grep -v ^$
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.142.8:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS="--secure-port=0"

apiserver options explained

| Option | Purpose |
| --- | --- |
| KUBE_API_ADDRESS | bind address; 127.0.0.1 listens on localhost only, 0.0.0.0 listens on all interfaces (used here) |
| KUBE_API_PORT="--port=8080" | apiserver listen port; default 8080, unchanged |
| KUBELET_PORT="--kubelet-port=10250" | kubelet port; default 10250, unchanged |
| KUBE_ETCD_SERVERS | address of the etcd node(s) |
| KUBE_SERVICE_ADDRESSES | IP range that Services will be allocated from |
| KUBE_API_ARGS="--secure-port=0" | HTTPS is required by default; "--secure-port=0" disables the secure port |

The master controller-manager config file (left at its defaults)

/etc/kubernetes/controller-manager
[root@master ~]# grep -v ^# /etc/kubernetes/controller-manager | grep -v ^$
KUBE_CONTROLLER_MANAGER_ARGS=""

Start the Kubernetes master services

[root@master ~]# systemctl enable kube-apiserver.service kube-controller-manager.service kube-scheduler.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master ~]# systemctl start kube-apiserver.service kube-controller-manager.service kube-scheduler.service 
[root@master ~]# systemctl status kube-apiserver.service kube-controller-manager.service kube-scheduler.service

Check the kube service ports

[root@master ~]# netstat -ntpl | grep kube
tcp6       0      0 :::10251                :::*                    LISTEN      6509/kube-scheduler 
tcp6       0      0 :::10252                :::*                    LISTEN      6507/kube-controlle 
tcp6       0      0 :::8080                 :::*                    LISTEN      6504/kube-apiserver
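Since the apiserver listens insecurely on 8080, it can be probed directly over HTTP; a quick sketch (/healthz and /version are standard apiserver endpoints):

#curl http://192.168.142.8:8080/healthz
#curl http://192.168.142.8:8080/version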

IV. Deploy the node hosts

node01

node01 hosts file

[root@node01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.142.8    master
192.168.142.16    node01
192.168.142.18    node02

Install the k8s node components

[root@node01 ~]# yum list | grep kube
cockpit-kubernetes.x86_64                  0.114-2.el7.centos          extras   
kubernetes.x86_64                          1.2.0-0.13.gitec7364b.el7   extras   
kubernetes-client.x86_64                   1.2.0-0.13.gitec7364b.el7   extras   
kubernetes-master.x86_64                   1.2.0-0.13.gitec7364b.el7   extras   
kubernetes-node.x86_64                     1.2.0-0.13.gitec7364b.el7   extras   
kubernetes-unit-test.x86_64                1.2.0-0.13.gitec7364b.el7   extras   
[root@node01 ~]# yum -y install kubernetes-node

Inspect node01's Kubernetes config files

[root@node01 ~]# cd /etc/kubernetes/
[root@node01 kubernetes]# ls -l
total 12
-rw-r--r-- 1 root root 655 Aug  4 20:59 config
-rw-r--r-- 1 root root 615 Aug  4 20:59 kubelet
-rw-r--r-- 1 root root 103 Aug  4 20:59 proxy

Modify node01's Kubernetes config file

/etc/kubernetes/config
[root@node01 kubernetes]# vim /etc/kubernetes/config 
[root@node01 kubernetes]# grep -v "^#" /etc/kubernetes/config | grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.142.8:8080"

Modify node01's kubelet config file

/etc/kubernetes/kubelet
[root@node01 kubernetes]# vim /etc/kubernetes/kubelet 
[root@node01 kubernetes]# grep -v "^#" /etc/kubernetes/kubelet | grep -v "^$"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.142.16"
KUBELET_API_SERVER="--api-servers=http://192.168.142.8:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

kubelet options explained

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest": k8s starts a base (pod infrastructure) container for every pod it creates, so the node must be able to reach the network that hosts this image.

Note: on node02, KUBELET_HOSTNAME="--hostname-override=192.168.142.18"; each node only needs KUBELET_HOSTNAME changed to its own hostname or IP.
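Because every pod depends on this pause image, pre-pulling it on each node is a cheap way to confirm the node can actually reach the image source; a sketch:

#docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
#docker images | grep pod-infrastructure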

Start the kubelet, kube-proxy, and docker services

[root@node01 kubernetes]# systemctl enable kubelet.service kube-proxy.service docker
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node01 kubernetes]# systemctl start kubelet.service kube-proxy.service docker
[root@node01 kubernetes]# systemctl status kubelet.service kube-proxy.service docker

Check the kube ports on node01

[root@node01 kubernetes]# netstat -nptl | grep kube
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      30661/kubelet       
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      30664/kube-proxy    
tcp6       0      0 :::10250                :::*                    LISTEN      30661/kubelet       
tcp6       0      0 :::10255                :::*                    LISTEN      30661/kubelet       
tcp6       0      0 :::4194                 :::*                    LISTEN      30661/kubelet

node02


node02 hosts


[root@node02 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.142.8    master
192.168.142.16    node01
192.168.142.18    node02

node02 kubernetes-node


[root@node02 ~]# yum -y install kubernetes-node
[root@node02 ~]# rpm -qa | grep kubernetes
kubernetes-node-1.2.0-0.13.gitec7364b.el7.x86_64
kubernetes-client-1.2.0-0.13.gitec7364b.el7.x86_64

node02 Kubernetes config file

[root@node02 ~]# grep -v "^#" /etc/kubernetes/config | grep -v "^$"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.142.8:8080"

node02 kubelet config file

[root@node02 ~]# grep -v "^#" /etc/kubernetes/kubelet | grep -v "^$"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.142.18"
KUBELET_API_SERVER="--api-servers=http://192.168.142.8:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

Start kubelet, kube-proxy, and docker on node02

[root@node02 ~]# systemctl enable kubelet.service kube-proxy.service docker
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node02 ~]# systemctl start kubelet.service kube-proxy.service docker
[root@node02 ~]# systemctl status kubelet.service kube-proxy.service docker

Check the kube ports on node02

[root@node02 ~]# netstat -nptl | grep kube
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      3606/kubelet        
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      3613/kube-proxy     
tcp6       0      0 :::10250                :::*                    LISTEN      3606/kubelet        
tcp6       0      0 :::10255                :::*                    LISTEN      3606/kubelet        
tcp6       0      0 :::4194                 :::*                    LISTEN      3606/kubelet

Debugging

Check the node status

[root@master ~]# kubectl get nodes
NAME             STATUS    AGE
192.168.142.16   Ready     14m
192.168.142.18   Ready     2m
[root@master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"ec7364b6e3b155e78086018aa644057edbe196e5", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"ec7364b6e3b155e78086018aa644057edbe196e5", GitTreeState:"clean"}
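If a node ever shows NotReady here, kubectl describe surfaces the conditions the kubelet reports; a sketch:

#kubectl describe node 192.168.142.16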

The first time Kubernetes manages containers there may be a noticeable wait, because the images have to be downloaded first. Without a local docker registry, every node must be able to reach the internet; so we build a private registry to serve the required images instead.

V. Private registry

In this lab environment the Kubernetes master also serves as the registry.

Pull the registry image for the local private registry and inspect it.

Enable IP forwarding on the master and apply the change (the original screenshots are missing; the steps are the same as in section I: add net.ipv4.ip_forward = 1 to /etc/sysctl.conf, then run sysctl -p).

Pull the registry image

#docker pull registry
#docker images

[root@master ~]# docker pull registry
[root@master ~]# docker images 
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
docker.io/jasonperma/centos6_http_ssh   latest              2460b63219d7        3 weeks ago         291.9 MB
docker.io/haproxy                       latest              f6bd6638cdfe        3 weeks ago         139.1 MB
docker.io/registry                      latest              c6c14b3960bd        3 weeks ago         33.28 MB
centos                                  http                b097bfe56b56        4 weeks ago         291.7 MB
centos7                                 latest              50dae1ee8677        5 weeks ago         196.7 MB
docker.io/centos                        latest              50dae1ee8677        5 weeks ago         196.7 MB
docker.io/centos                        centos6             cf2c3ece5e41        7 weeks ago         194.6 MB
docker.io/eeacms/haproxy                latest              ecfe1805af05        8 weeks ago         195.5 MB
Note: a `404 Page not found` error occurred here; the `registry` image provided by the instructor was used from this point on.

Run a container from the private-registry image

By default the registry stores its data under /tmp/registry inside the container, so if the container is removed, the images stored in it are lost as well; /tmp/registry is only a temporary path whose contents eventually disappear.

So we use the -v option to mount a persistent local directory over the container's /tmp/registry; the data then exists in both places.

[root@master ~]# mkdir -pv /opt/data/registry
mkdir: created directory ‘/opt/data’
mkdir: created directory ‘/opt/data/registry’
[root@master ~]# docker run -d -p 5000:5000 --name registry --restart=always -v /opt/data/registry/:/tmp/registry docker.io/registry:latest 
aaaf3eb9a5af935f4351d8e3f4b5d95d02c3cfec66e22a77296b84c88a23d134

Query the private registry locally

[root@master ~]# curl 127.0.0.1:5000/v1/search
{"num_results": 0, "query": "", "results": []}[root@master ~]#

The private registry is empty; no images have been pushed to it yet.

Test with an image pulled from Docker Hub:

[root@master ~]# docker images 
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
docker.io/jasonperma/centos6_http_ssh   latest              2460b63219d7        3 weeks ago         291.9 MB
docker.io/haproxy                       latest              f6bd6638cdfe        3 weeks ago         139.1 MB
registry                                2                   c6c14b3960bd        3 weeks ago         33.28 MB
centos                                  http                b097bfe56b56        4 weeks ago         291.7 MB
centos7                                 latest              50dae1ee8677        5 weeks ago         196.7 MB
docker.io/centos                        latest              50dae1ee8677        5 weeks ago         196.7 MB
docker.io/centos                        centos6             cf2c3ece5e41        7 weeks ago         194.6 MB
docker.io/eeacms/haproxy                latest              ecfe1805af05        8 weeks ago         195.5 MB
docker.io/registry                      latest              bca04f698ba8        7 months ago        422.8 MB

Tag the base image

[root@master ~]# docker tag docker.io/centos:latest 192.168.142.8:5000/centos:centos7
[root@master ~]# docker images 
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
docker.io/jasonperma/centos6_http_ssh   latest              2460b63219d7        3 weeks ago         291.9 MB
docker.io/haproxy                       latest              f6bd6638cdfe        3 weeks ago         139.1 MB
registry                                2                   c6c14b3960bd        3 weeks ago         33.28 MB
centos                                  http                b097bfe56b56        4 weeks ago         291.7 MB
192.168.142.8:5000/centos               centos7             50dae1ee8677        5 weeks ago         196.7 MB
centos7                                 latest              50dae1ee8677        5 weeks ago         196.7 MB
docker.io/centos                        latest              50dae1ee8677        5 weeks ago         196.7 MB
docker.io/centos                        centos6             cf2c3ece5e41        7 weeks ago         194.6 MB
docker.io/eeacms/haproxy                latest              ecfe1805af05        8 weeks ago         195.5 MB
docker.io/registry                      latest              bca04f698ba8        7 months ago        422.8 MB

Modify the Docker config file to point at the private registry URL

#vi /etc/sysconfig/docker

Note: without this change, pushes fail with an error like the following (example quoted from another environment, 192.168.230.3):

2016/08/10 11:01:17 Error: Invalid registry endpoint
https://192.168.230.3:5000/v1/: Get
dial tcp 192.168.230.3:5000: connection refused. If this private
registry supports only HTTP or HTTPS with an unknown CA certificate,
please add `--insecure-registry 192.168.230.3:5000` to the daemon's
arguments. In the case of HTTPS, if you have access to the registry's CA
certificate, no need for the flag; simply place the CA certificate at
/etc/docker/certs.d/192.168.230.3:5000/ca.crt

Non-HTTPS private registry

Since Docker 1.3.x, communication with a docker registry defaults to HTTPS, but the registry built here only serves HTTP, so talking to it produces the error above. To work around this, start the Docker daemon with HTTP allowed for this registry: edit /etc/sysconfig/docker and add --insecure-registry 192.168.142.8:5000, as shown below.

[root@master ~]# grep -v "^#" /etc/sysconfig/docker | grep -v "^$"
OPTIONS='--selinux-enabled --log-driver=journald --insecure-registry 192.168.142.8:5000'
DOCKER_CERT_PATH=/etc/docker

Restart the Docker service and check its status

[root@master ~]# systemctl restart docker
[root@master ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-08-25 08:22:53 CST; 4s ago
     Docs: http://docs.docker.com
 Main PID: 5264 (sh)
   Memory: 37.7M
   CGroup: /system.slice/docker.service
           ├─5264 /bin/sh -c /usr/bin/docker-current daemon            --exec-opt native.cgroupdriver=systemd      ...
           ├─5265 /usr/bin/docker-current daemon --exec-opt native.cgroupdriver=systemd --selinux-enabled --log-dri...
           ├─5266 /usr/bin/forward-journald -tag docker
           └─5346 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 5000 -container-ip 172.17.0.2 -container-port...

Push the image to the local private registry

[root@master ~]# docker push 192.168.142.8:5000/centos:centos7 
The push refers to a repository [192.168.142.8:5000/centos]
unable to ping registry endpoint https://192.168.142.8:5000/v0/
v2 ping attempt failed with error: Get https://192.168.142.8:5000/v2/: EOF
 v1 ping attempt failed with error: Get https://192.168.142.8:5000/v1/_ping: EOF

Presumably because --insecure-registry had not yet been applied.

After updating /etc/sysconfig/docker and restarting Docker:

[root@master ~]# docker push 192.168.142.8:5000/centos:centos7 
The push refers to a repository [192.168.142.8:5000/centos]
0fe55794a0f7: Image successfully pushed 
Pushing tag for rev [50dae1ee8677] on {http://192.168.142.8:5000/v1/repositories/centos/tags/centos7}

Verify after the push

The image has now been pushed into the private registry:

[root@master ~]# tree /opt/data/registry/
/opt/data/registry/
├── images
│   └── 50dae1ee86770fdc303c2cac03d3f7f62cd78ba6a8ad27b303447680853152f5
│       ├── ancestry
│       ├── _checksum
│       ├── json
│       └── layer
└── repositories
    └── library
        └── centos
            ├── _index_images
            ├── tag_centos7
            └── tagcentos7_json

5 directories, 7 files
[root@master ~]# curl 192.168.142.8:5000/v1/search
{"num_results": 1, "query": "", "results": [{"description": "", "name": "library/centos"}]}[root@master ~]#

Note: the private Docker registry is now complete. As built, it requires no authentication; nginx plus HTTPS could be added for authentication and encryption.

Configure the node hosts as registry clients

Modify the Docker config file to point at the private registry URL (original screenshot missing; the resulting OPTIONS line is shown below).

Restart the Docker service

[root@node01 ~]# grep -v "^$" /etc/sysconfig/docker | grep -v "^#"
OPTIONS='--selinux-enabled --log-driver=journald -b kbr0 --insecure-registry 192.168.142.8:5000'
DOCKER_CERT_PATH=/etc/docker
[root@node01 ~]# systemctl restart docker
[root@node01 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-08-25 08:29:58 CST; 6s ago

Test: pull the image that was just pushed

[root@node01 ~]# docker pull 192.168.142.8:5000/centos:centos7
Trying to pull repository 192.168.142.8:5000/centos ... 
Pulling repository 192.168.142.8:5000/centos
50dae1ee8677: Pull complete 
Status: Downloaded newer image for 192.168.142.8:5000/centos:centos7 
192.168.142.8:5000/centos: this image was pulled from a legacy registry.  Important: This registry version will not be supported in future versions of docker.
[root@node01 ~]# docker images 
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
docker.io/jasonperma/centos6_http_ssh   latest              2460b63219d7        3 weeks ago         291.9 MB
centos                                  http                b097bfe56b56        4 weeks ago         291.7 MB
192.168.142.8:5000/centos               centos7             964dfdcb6afa        5 weeks ago         196.7 MB
docker.io/centos                        latest              50dae1ee8677        5 weeks ago         196.7 MB
docker.io/centos                        centos6             cf2c3ece5e41        7 weeks ago         194.6 MB
[root@node01 ~]# docker rmi $(docker images -aq)
Untagged: docker.io/jasonperma/centos6_http_ssh:latest
Deleted: sha256:2460b63219d7f6196b344bd2d8156b21bd0472f4232f87227143429890d9ea4e
Deleted: sha256:11f5161e2a4bd98c2bc3d17a3d563b1021cadc52bb4d91d7514bdc8e2d871b7b
Deleted: sha256:9d5d52905668c2996b6f9661caa28e2550f672fceb4d7f6c3930d9af74ecc44b
Deleted: sha256:bed636e2f90808121869c3776ffac052806573f1377c1a371c08d226fe138321
Deleted: sha256:7fd04fe6aee105bbef32e182a0d73e53f58138e6b6ac98a0d193ffe2bb8643d4
Deleted: sha256:b0a136382d9a360e06d9e485f9797ee3bc42992b29d36d8bcd90eee637a5f9dc
Deleted: sha256:09a2301ce7f8bc598e3ea0bd09b9d3542243cf4df3be0c1234b47c2f7e55206a
Deleted: sha256:816a6c36cf786fe20a06192b3d20f74316a4a8dde2746873eddfde358ca4c204
Deleted: sha256:05eeda1cea0e371eec885f5b1e6f226120561f0bd3bf9b33f3598c44949d78ed
Deleted: sha256:a20152cef11953832def36a6a71f2ef977508cb216176c56fe2fb4fb4f0e7be7
Deleted: sha256:30b1b148e8d0469973e1c1ed341da1ffef2aeb7d9a1064ccb427f114e69af48d
Deleted: sha256:262500c07f0e17532b24b58484fbc70546a92606f27b59d957b21bda417a97e3
Deleted: sha256:b9386edf291a0ba2977e5ebd0b88a211cf22b4b3c090a8f59fee2e96079504a5
Deleted: sha256:e0f44ae697bb87a9b9b86c5b6057c12b28bfd86337ba5e1ec8d054be59da306c
Untagged: centos:http
...// output omitted

node02 is configured exactly like node01. With the private registry finished, continue with the Kubernetes setup.

Make sure the master can reach every node over SSH.

Configure SSH key-pair authentication, working on the master (the original screenshot is missing; a sketch follows).
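A minimal sketch of the usual key-pair setup, assuming root login is permitted on the nodes:

#ssh-keygen -t rsa
#ssh-copy-id root@node01
#ssh-copy-id root@node02

Accept the defaults (and an empty passphrase) at the ssh-keygen prompts; ssh-copy-id appends the public key to each node's authorized_keys.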

Verify the SSH login

[root@master ~]# ssh node01
Last login: Thu Aug 25 07:40:34 2016 from 192.168.142.1
[root@node01 ~]# exit
logout
Connection to node01 closed.
[root@master ~]# ssh node02
Last login: Wed Aug 24 22:16:25 2016 from 192.168.142.1
[root@node02 ~]# exit
logout
Connection to node02 closed.

VI. Deploy a web application

We deploy a simple static-content web application according to the following plan (the original diagram is missing):

First, a replication controller starts an apache pod with 2 replicas. Services are then placed in front of it: one reachable only from inside the cluster, and one reachable from nodes outside it. All commands below run on the master.

Prepare the apache image and push it to the registry

Prepare the images:

[root@master ~]# docker images | grep http
docker.io/jasonperma/centos6_http_ssh   latest              2460b63219d7        3 weeks ago         291.9 MB
centos                                  http                b097bfe56b56        4 weeks ago         291.7 MB

Tag the image:

[root@master ~]# docker tag centos:http 192.168.142.8:5000/centos:apache

Push the image:

[root@master ~]# docker push 192.168.142.8:5000/centos:apache
The push refers to a repository [192.168.142.8:5000/centos]
...// output omitted
f5235f71ea6c: Image successfully pushed 
c18d01583d9c: Image successfully pushed 
Pushing tag for rev [b097bfe56b56] on {http://192.168.142.8:5000/v1/repositories/centos/tags/apache}

Check the pushed image:

[root@master ~]# tree /opt/data/registry/repositories/
/opt/data/registry/repositories/
└── library
    └── centos
        ├── _index_images
        ├── tag_apache
        ├── tagapache_json
        ├── tag_centos7
        └── tagcentos7_json

2 directories, 5 files

Push the pod infrastructure image to the private registry:

[root@master ~]# docker tag registry.access.redhat.com/rhel7/pod-infrastructure:latest 192.168.142.8:5000/pod-infrastructure:latest
[root@master ~]# docker push 192.168.142.8:5000/pod-infrastructure:latest
The push refers to a repository [192.168.142.8:5000/pod-infrastructure]
fdd73c81c68e: Image successfully pushed 
afafa291bfcc: Image successfully pushed 
Pushing tag for rev [ee020ceeef01] on {http://192.168.142.8:5000/v1/repositories/pod-infrastructure/tags/latest}
[root@master ~]# tree /opt/data/registry/repositories/
/opt/data/registry/repositories/
└── library
    ├── centos
    │   ├── _index_images
    │   ├── tag_apache
    │   ├── tagapache_json
    │   ├── tag_centos7
    │   └── tagcentos7_json
    └── pod-infrastructure
        ├── _index_images
        ├── json
        ├── tag_latest
        └── taglatest_json

3 directories, 9 files

Update the kubelet config file on node01 and node02:

[root@node01 ~]# grep POD /etc/kubernetes/kubelet 
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.142.8:5000/pod-infrastructure:latest"
[root@node02 ~]# grep POD /etc/kubernetes/kubelet
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.142.8:5000/pod-infrastructure:latest"
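After changing KUBELET_POD_INFRA_CONTAINER, the kubelet on each node has to be restarted (or the host rebooted, as the note further below mentions) for the new image setting to take effect; a sketch, run on both nodes:

#systemctl restart kubelet.service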

Deploy the apache pod and replication controller

apache-rc.yaml defines a replication controller for the apache pod with 2 replicas, using the 192.168.142.8:5000/centos:apache image:

[root@master ~]# cat apache-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: apache-controller
spec:
  replicas: 2
  selector:
    name: apache
  template:
    metadata:
      labels:
        name: apache
    spec:
      containers:
        - name: apache
          image: 192.168.142.8:5000/centos:apache
          ports:
            - containerPort: 80

Create the apache pod replication controller

[root@master ~]# kubectl create -f apache-rc.yaml 
replicationcontroller "apache-controller" created

Because Kubernetes has to download the apache image first, the pods need some time after creation before they reach the Running state.

Check the pod status

[root@master ~]# kubectl get pod
NAME                      READY     STATUS    RESTARTS   AGE
apache-controller-vowyf   1/1       Running   0          1h
apache-controller-wmgk8   1/1       Running   0          1h
Note: at first the `kubelet` config did not point the `pod` infrastructure image at the private registry. The step of editing the `node01 node02 kubelet` config files and rebooting both VMs happened here but was not recorded in this document.

Check detailed pod status

[root@master ~]# kubectl get pod -o wide
NAME                      READY     STATUS    RESTARTS   AGE       NODE
apache-controller-vowyf   1/1       Running   0          1h        192.168.142.18
apache-controller-wmgk8   1/1       Running   0          1h        192.168.142.16

Deleting pods

kubectl delete pod podName

Because two replicas are configured, whenever a pod is deleted k8s quickly starts an identical replacement to keep the replica count at 2.

To delete the pods for good, you must delete the replication controller that created them.

List the replication controllers:

kubectl get rc

Delete a replication controller:

kubectl delete rc rcName

Deleting the rc deletes the pods it created along with it.
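Because the rc owns the pods, the replica count can also simply be resized instead of deleting anything; a sketch (kubectl scale is available in this 1.2 release):

#kubectl scale rc apache-controller --replicas=3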

Deploy an apache service reachable only inside the cluster

A service's type is either ClusterIP or NodePort; the default is ClusterIP, and a service of this type can only be reached from inside the cluster.

| Service Type | Description |
| --- | --- |
| ClusterIP | Uses a cluster-internal IP only. |
| NodePort | In addition to a cluster IP, exposes the Service on a port on each node of the cluster. |
| LoadBalancer | In addition to exposing the Service on a cluster-internal IP and a port on each node, requests a load balancer from the cloud provider that balances between the Service's Pods. |

The config file:

[root@master ~]# cat apache-clusterip-service.yaml
aipVersion: v1
kind: Service
metadata:
  name: apache-clusterip-service
spec:
  ports:
    - port: 8000
      targetPort: 80
      protocol: TCP
  selector:
    name: apache
[root@master ~]# kubectl create -f apache-clusterip-service.yaml 
error validating "apache-clusterip-service.yaml": error validating data: API version "" isn't supported, only supports API versions ["v1" "authorization.k8s.io/v1beta1" "autoscaling/v1" "batch/v1" "componentconfig/v1alpha1" "extensions/v1beta1" "metrics/v1alpha1"]; if you choose to ignore these errors, turn validation off with --validate=false

api was mistyped as aip; fix it:

[root@master ~]# cat apache-clusterip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: apache-clusterip-service
spec:
  ports:
    - port: 8000
      targetPort: 80
      protocol: TCP
  selector:
    name: apache

Create apache-clusterip-service

[root@master ~]# kubectl create -f apache-clusterip-service.yaml 
service "apache-clusterip-service" created

Inspect apache-clusterip-service

[root@master ~]# kubectl get service 
NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
apache-clusterip-service   10.254.184.198   <none>        8000/TCP   3m
kubernetes                 10.254.0.1       <none>        443/TCP    12h
[root@master ~]# kubectl get service -o wide
NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE       SELECTOR
apache-clusterip-service   10.254.184.198   <none>        8000/TCP   4m        name=apache
kubernetes                 10.254.0.1       <none>        443/TCP    12h       <none>

From the service status we get the ClusterIP of the service, 10.254.184.198, and its port, 8000.

Verify apache-clusterip-service

curl -s 10.254.184.198:8000

| Option | Purpose |
| --- | --- |
| -s, --silent | Silent mode. Don't output anything |

Note: run these on the node hosts.

[root@node01 ~]# curl -s 10.254.184.198:5000
[root@node01 ~]# curl 10.254.184.198:5000
curl: (7) Failed connect to 10.254.184.198:5000; Connection refused
[root@node01 ~]# curl 10.254.184.198:8000
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <head>
        <title>Apache HTTP Server Test Page powered by CentOS</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
...// output omitted
[root@node02 ~]# curl -s 10.254.184.198:8000
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <head>
        <title>Apache HTTP Server Test Page powered by CentOS</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <style type="text/css">
...// output omitted
[root@node02 ~]# curl 10.254.184.198:5000
curl: (7) Failed connect to 10.254.184.198:5000; Connection refused

At first only one of the two nodes could reach it at a time; in the end both could access it consistently.

[root@node01 ~]# curl 10.254.184.198:8000
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <head>
        <title>Apache HTTP Server Test Page powered by CentOS</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
...//省略
[root@node02 ~]# curl 10.254.184.198:8000
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <head>
        <title>Apache HTTP Server Test Page powered by CentOS</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
...//省略

VII. Deploy an externally reachable apache service

Create a Service of type NodePort; this type of Service can be reached from outside the cluster:

[root@master ~]# cat apache-nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: apache-nodeport-service
spec:
  ports:
    - port: 8001
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    name: apache

Create apache-nodeport-service

[root@master ~]# kubectl create -f apache-nodeport-service.yaml 
You have exposed your service on an external port on all nodes in your
cluster.  If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32713) to serve traffic.

See http://releases.k8s.io/release-1.2/docs/user-guide/services-firewalls.md for more details.
service "apache-nodeport-service" created

Inspect apache-nodeport-service

[root@master ~]# kubectl get service
NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
apache-clusterip-service   10.254.184.198   <none>        8000/TCP   14m
apache-nodeport-service    10.254.146.138   nodes         8001/TCP   39s
kubernetes                 10.254.0.1       <none>        443/TCP    12h
[root@master ~]# kubectl get service -o wide
NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE       SELECTOR
apache-clusterip-service   10.254.184.198   <none>        8000/TCP   14m       name=apache
apache-nodeport-service    10.254.146.138   nodes         8001/TCP   49s       name=apache
kubernetes                 10.254.0.1       <none>        443/TCP    12h       <none>
[root@master ~]# kubectl describe service apache-nodeport-service 
Name:            apache-nodeport-service
Namespace:        default
Labels:            <none>
Selector:        name=apache
Type:            NodePort
IP:                10.254.146.138
Port:            <unset>    8001/TCP
NodePort:        <unset>    32713/TCP
Endpoints:        172.17.1.2:80,172.17.2.2:80
Session Affinity:    None
No events.

NodePort: <unset> 32713/TCP shows that the node-level port of the service is 32713.

Verify that apache-nodeport-service is reachable

curl 192.168.142.16:32713 or curl 192.168.142.18:32713
[root@master ~]# curl 192.168.142.16:32713
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <head>
        <title>Apache HTTP Server Test Page powered by CentOS</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />

...// output omitted
[root@master ~]# curl 192.168.142.18:32713
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <head>
        <title>Apache HTTP Server Test Page powered by CentOS</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
...// output omitted
[root@node01 ~]# curl 192.168.142.16:32713
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <head>
        <title>Apache HTTP Server Test Page powered by CentOS</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
...// output omitted
[root@node01 ~]# curl 192.168.142.18:32713
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <head>
        <title>Apache HTTP Server Test Page powered by CentOS</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
...// output omitted
[root@node02 ~]# curl 192.168.142.16:32713
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <head>
        <title>Apache HTTP Server Test Page powered by CentOS</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
...// output omitted
[root@node02 ~]# curl 192.168.142.18:32713
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <head>
        <title>Apache HTTP Server Test Page powered by CentOS</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
...// output omitted
