2017-02-26

I have Container Linux by CoreOS alpha (1325.1.0) installed on my home PC, and hyperkube does not launch the manifests from /etc/kubernetes/manifests.

I had been playing with kubernetes for a few months, but now, after reinstalling Container Linux and trying to install kubernetes using my fork at https://github.com/kfirufk/coreos-kubernetes, I can't get kubernetes to install properly.

I'm using the hyperkube image v1.6.0-beta.0_coreos.0.

The problem is that hyperkube doesn't seem to even try to start the manifests from /etc/kubernetes/manifests. I configured the kubelet to run via rkt.

When I run journalctl -xef -u kubelet after restarting the kubelet, I get the following output:

Feb 26 20:17:33 coreos-2.tux-in.com kubelet-wrapper[3673]: + exec /usr/bin/rkt run --uuid-file-save=/var/run/kubelet-pod.uuid --volume dns,kind=host,source=/run/systemd/resolve/resolv.conf --mount volume=dns,target=/etc/resolv.conf --volume rkt,kind=host,source=/opt/bin/host-rkt --mount volume=rkt,target=/usr/bin/rkt --volume var-lib-rkt,kind=host,source=/var/lib/rkt --mount volume=var-lib-rkt,target=/var/lib/rkt --volume stage,kind=host,source=/tmp --mount volume=stage,target=/tmp --volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log --volume cni-bin,kind=host,source=/opt/cni/bin --mount volume=cni-bin,target=/opt/cni/bin --trust-keys-from-https --volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=false --volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true --volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true --volume var-lib-docker,kind=host,source=/var/lib/docker,readOnly=false --volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,readOnly=false --volume os-release,kind=host,source=/usr/lib/os-release,readOnly=true --volume run,kind=host,source=/run,readOnly=false --mount volume=etc-kubernetes,target=/etc/kubernetes --mount volume=etc-ssl-certs,target=/etc/ssl/certs --mount volume=usr-share-certs,target=/usr/share/ca-certificates --mount volume=var-lib-docker,target=/var/lib/docker --mount volume=var-lib-kubelet,target=/var/lib/kubelet --mount volume=os-release,target=/etc/os-release --mount volume=run,target=/run --stage1-from-dir=stage1-fly.aci quay.io/coreos/hyperkube:v1.6.0-beta.0_coreos.0 --exec=/kubelet -- --require-kubeconfig --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml --register-schedulable=true --cni-conf-dir=/etc/kubernetes/cni/net.d --network-plugin=kubenet --container-runtime=rkt --rkt-path=/usr/bin/rkt --allow-privileged=true --pod-manifest-path=/etc/kubernetes/manifests --hostname-override=192.168.1.2 --cluster_dns=10.3.0.10 
--cluster_domain=cluster.local 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: Flag --register-schedulable has been deprecated, will be removed in a future version 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.260305 3673 feature_gate.go:170] feature gates: map[] 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.332539 3673 manager.go:143] cAdvisor running in container: "/system.slice/kubelet.service" 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.355270 3673 fs.go:117] Filesystem partitions: map[/dev/mapper/usr:{mountpoint:/usr/lib/os-release major:254 minor:0 fsType:ext4 blockSize:0} /dev/sda9:{mountpoint:/var/lib/docker major:8 minor:9 fsType:ext4 blockSize:0} /dev/sdb1:{mountpoint:/var/lib/rkt major:8 minor:17 fsType:ext4 blockSize:0}] 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.359173 3673 manager.go:198] Machine: {NumCores:8 CpuFrequency:3060000 MemoryCapacity:4145344512 MachineID:b07a180a2c8547f7956e9a6f93a452a4 SystemUUID:00000000-0000-0000-0000-1C6F653E6F72 BootID:c03de69b-c9c8-4fb7-a3df-de4f70a74218 Filesystems:[{Device:/dev/mapper/usr Capacity:1031946240 Type:vfs Inodes:260096 HasInodes:true} {Device:/dev/sda9 Capacity:113819422720 Type:vfs Inodes:28536576 HasInodes:true} {Device:/dev/sdb1 Capacity:984373800960 Type:vfs Inodes:61054976 HasInodes:true} {Device:overlay Capacity:984373800960 Type:vfs Inodes:61054976 HasInodes:true}] DiskMap:map[254:0:{Name:dm-0 Major:254 Minor:0 Size:1065345024 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:120034123776 Scheduler:cfq} 8:16:{Name:sdb Major:8 Minor:16 Size:1000204886016 Scheduler:cfq} 8:32:{Name:sdc Major:8 Minor:32 Size:3000592982016 Scheduler:cfq} 8:48:{Name:sdd Major:8 Minor:48 Size:2000398934016 Scheduler:cfq} 8:64:{Name:sde Major:8 Minor:64 Size:1000204886016 Scheduler:cfq}] NetworkDevices:[{Name:enp3s0 MacAddress:1c:6f:65:3e:6f:72 Speed:1000 Mtu:1500} {Name:flannel.1 MacAddress:be:f8:31:12:15:f5 Speed:0 Mtu:1450}] Topology:[{Id:0 Memory:4145344512 Cores:[{Id:0 Threads:[0 4] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[1 5] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:2 Threads:[2 6] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:3 Threads:[3 7] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:8388608 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.359768 3673 manager.go:204] Version: {KernelVersion:4.9.9-coreos-r1 ContainerOsVersion:Container Linux by CoreOS 1325.1.0 (Ladybug) DockerVersion:1.13.1 CadvisorVersion: CadvisorRevision:} 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.362754 3673 kubelet.go:253] Adding manifest file: /etc/kubernetes/manifests 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.362800 3673 kubelet.go:263] Watching apiserver 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: W0226 20:17:41.366369 3673 kubelet_network.go:63] Hairpin mode set to "promiscuous-bridge" but container runtime is "rkt", ignoring 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.366427 3673 kubelet.go:494] Hairpin mode set to "none" 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.379778 3673 server.go:790] Started kubelet v1.6.0-beta.0+coreos.0 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.379803 3673 kubelet.go:1143] Image garbage collection failed: unable to find data for container/
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.379876 3673 server.go:125] Starting to listen on 0.0.0.0:10250 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.380252 3673 kubelet_node_status.go:238] Setting node annotation to enable volume controller attach/detach 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.381083 3673 kubelet.go:1631] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container/
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.381120 3673 kubelet.go:1639] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container/
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.381658 3673 server.go:288] Adding debug handlers to kubelet server. 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.382281 3673 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.382310 3673 status_manager.go:140] Starting to sync pod status with apiserver 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.382326 3673 kubelet.go:1711] Starting kubelet main sync loop. 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.382354 3673 kubelet.go:1722] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s] 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.382616 3673 volume_manager.go:248] Starting Kubelet Volume Manager 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.386643 3673 kubelet.go:2028] Container runtime status is nil 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.436430 3673 event.go:208] Unable to write event: 'Post https://coreos-2.tux-in.com:443/api/v1/namespaces/default/events: dial tcp 192.168.1.2:443: getsockopt: connection refused' (may retry after sleeping) 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.436547 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://coreos-2.tux-in.com:443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.1.2&resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.436547 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:380: Failed to list *v1.Service: Get https://coreos-2.tux-in.com:443/api/v1/services?resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.436557 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:388: Failed to list *v1.Node: Get https://coreos-2.tux-in.com:443/api/v1/nodes?fieldSelector=metadata.name%3D192.168.1.2&resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.482996 3673 kubelet_node_status.go:238] Setting node annotation to enable volume controller attach/detach 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.483717 3673 kubelet.go:1631] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container/
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.483774 3673 kubelet.go:1639] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container/
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.483907 3673 kubelet_node_status.go:78] Attempting to register node 192.168.1.2 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.556064 3673 kubelet_node_status.go:102] Unable to register node "192.168.1.2" with API server: Post https://coreos-2.tux-in.com:443/api/v1/nodes: dial tcp 192.168.1.2:443: getsockopt: connection refused 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.756398 3673 kubelet_node_status.go:238] Setting node annotation to enable volume controller attach/detach 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.757047 3673 kubelet.go:1631] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container/
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.757087 3673 kubelet.go:1639] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container/
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.757152 3673 kubelet_node_status.go:78] Attempting to register node 192.168.1.2 
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.833244 3673 kubelet_node_status.go:102] Unable to register node "192.168.1.2" with API server: Post https://coreos-2.tux-in.com:443/api/v1/nodes: dial tcp 192.168.1.2:443: getsockopt: connection refused 
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:42.233574 3673 kubelet_node_status.go:238] Setting node annotation to enable volume controller attach/detach 
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:42.234232 3673 kubelet.go:1631] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container/
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:42.234266 3673 kubelet.go:1639] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container/
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:42.234324 3673 kubelet_node_status.go:78] Attempting to register node 192.168.1.2 
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:42.306213 3673 kubelet_node_status.go:102] Unable to register node "192.168.1.2" with API server: Post https://coreos-2.tux-in.com:443/api/v1/nodes: dial tcp 192.168.1.2:443: getsockopt: connection refused 
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:42.512768 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:388: Failed to list *v1.Node: Get https://coreos-2.tux-in.com:443/api/v1/nodes?fieldSelector=metadata.name%3D192.168.1.2&resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused 
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:42.512810 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://coreos-2.tux-in.com:443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.1.2&resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused 
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:42.512905 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:380: Failed to list *v1.Service: Get https://coreos-2.tux-in.com:443/api/v1/services?resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused 
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:43.106559 3673 kubelet_node_status.go:238] Setting node annotation to enable volume controller attach/detach 
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:43.107210 3673 kubelet.go:1631] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container/
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:43.107244 3673 kubelet.go:1639] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container/
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:43.107304 3673 kubelet_node_status.go:78] Attempting to register node 192.168.1.2 
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:43.186848 3673 kubelet_node_status.go:102] Unable to register node "192.168.1.2" with API server: Post https://coreos-2.tux-in.com:443/api/v1/nodes: dial tcp 192.168.1.2:443: getsockopt: connection refused 
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:43.580259 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:380: Failed to list *v1.Service: Get https://coreos-2.tux-in.com:443/api/v1/services?resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused 
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:43.580286 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:388: Failed to list *v1.Node: Get https://coreos-2.tux-in.com:443/api/v1/nodes?fieldSelector=metadata.name%3D192.168.1.2&resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused 
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:43.580285 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://coreos-2.tux-in.com:443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.1.2&resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused 

My kubelet.service contents (I tried with both --network-plugin=kubenet and cni, no difference):

[Service] 
Environment=KUBELET_IMAGE_TAG=v1.6.0-beta.0_coreos.0 
Environment=KUBELET_IMAGE_URL=quay.io/coreos/hyperkube 
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \ 
    --volume dns,kind=host,source=/run/systemd/resolve/resolv.conf \ 
    --mount volume=dns,target=/etc/resolv.conf \ 
    --volume rkt,kind=host,source=/opt/bin/host-rkt \ 
    --mount volume=rkt,target=/usr/bin/rkt \ 
    --volume var-lib-rkt,kind=host,source=/var/lib/rkt \ 
    --mount volume=var-lib-rkt,target=/var/lib/rkt \ 
    --volume stage,kind=host,source=/tmp \ 
    --mount volume=stage,target=/tmp \ 
    --volume var-log,kind=host,source=/var/log \ 
    --mount volume=var-log,target=/var/log \ 
    --volume cni-bin,kind=host,source=/opt/cni/bin \
    --mount volume=cni-bin,target=/opt/cni/bin"
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests 
ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin 
ExecStartPre=/usr/bin/mkdir -p /var/log/containers 
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid 
ExecStart=/usr/lib/coreos/kubelet-wrapper \ 
    --require-kubeconfig \ 
    --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml \ 
    --register-schedulable=true \ 
    --cni-conf-dir=/etc/kubernetes/cni/net.d \ 
    --network-plugin=kubenet \ 
    --container-runtime=rkt \ 
    --rkt-path=/usr/bin/rkt \ 
    --allow-privileged=true \ 
    --pod-manifest-path=/etc/kubernetes/manifests \ 
    --hostname-override=192.168.1.2 \ 
    --cluster_dns=10.3.0.10 \ 
    --cluster_domain=cluster.local 
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid 
Restart=always 
RestartSec=10 

[Install] 
WantedBy=multi-user.target 

My /var/lib/coreos-install/user_data file:

#cloud-config

hostname: "coreos-2.tux-in.com"
write_files:
  - path: "/etc/ssh/sshd_config"
    permissions: 0600
    owner: root:root
    content: |
      # Use most defaults for sshd configuration.
      UsePrivilegeSeparation sandbox
      Subsystem sftp internal-sftp
      ClientAliveInterval 180
      UseDNS no
      UsePAM no
      PrintLastLog no # handled by PAM
      PrintMotd no # handled by PAM
      PasswordAuthentication no
  - path: "/etc/kubernetes/ssl/ca.pem"
    permissions: "0666"
    content: |
      XXXX
  - path: "/etc/kubernetes/ssl/apiserver.pem"
    permissions: "0666"
    content: |
      XXXX
  - path: "/etc/kubernetes/ssl/apiserver-key.pem"
    permissions: "0666"
    content: |
      XXXX
  - path: "/etc/ssl/etcd/ca.pem"
    permissions: "0666"
    owner: "etcd:etcd"
    content: |
      XXXX
  - path: "/etc/ssl/etcd/etcd1.pem"
    permissions: "0666"
    owner: "etcd:etcd"
    content: |
      XXXX
  - path: "/etc/ssl/etcd/etcd1-key.pem"
    permissions: "0666"
    owner: "etcd:etcd"
    content: |
      XXXX
ssh_authorized_keys:
  - "XXXX [email protected]"
users:
  - name: "ufk"
    passwd: "XXXX"
    groups:
      - "sudo"
    ssh-authorized-keys:
      - "ssh-rsa XXXX [email protected]"
coreos:
  etcd2:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3
    # specify the initial size of your cluster with ?size=X
    discovery: https://discovery.etcd.io/XXXX
    advertise-client-urls: https://coreos-2.tux-in.com:2379
    initial-advertise-peer-urls: https://coreos-2.tux-in.com:2380
    # listen on both the official ports and the legacy ports
    # legacy ports can be omitted if your application doesn't depend on them
    listen-client-urls: https://0.0.0.0:2379,http://127.0.0.1:4001
    listen-peer-urls: https://coreos-2.tux-in.com:2380
  locksmith:
    endpoint: "http://127.0.0.1:4001"
  update:
    reboot-strategy: etcd-lock
  units:
    - name: 00-enp3s0.network
      runtime: true
      content: |
        [Match]
        Name=enp3s0

        [Network]
        Address=192.168.1.2/16
        Gateway=192.168.1.1
        DNS=8.8.8.8
    - name: mnt-storage.mount
      enable: true
      command: start
      content: |
        [Mount]
        What=/dev/disk/by-uuid/e9df7e62-58da-4db2-8616-8947ac835e2c
        Where=/mnt/storage
        Type=btrfs
        Options=loop,discard
    - name: var-lib-rkt.mount
      enable: true
      command: start
      content: |
        [Mount]
        What=/dev/sdb1
        Where=/var/lib/rkt
        Type=ext4
    - name: etcd2.service
      command: start
      drop-ins:
        - name: 30-certs.conf
          content: |
            [Service]
            Restart=always
            Environment="ETCD_CERT_FILE=/etc/ssl/etcd/etcd1.pem"
            Environment="ETCD_KEY_FILE=/etc/ssl/etcd/etcd1-key.pem"
            Environment="ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.pem"
            Environment="ETCD_CLIENT_CERT_AUTH=true"
            Environment="ETCD_PEER_CERT_FILE=/etc/ssl/etcd/etcd1.pem"
            Environment="ETCD_PEER_KEY_FILE=/etc/ssl/etcd/etcd1-key.pem"
            Environment="ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.pem"
            Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"

welp.. I'm pretty lost. This is the first time this kind of problem has happened to me. Any information regarding this issue would be greatly appreciated.

In case it matters, these are the manifests in /etc/kubernetes/manifests that aren't being run. rkt list --full shows that no pods of any kind have been started besides the usual hyperkube.
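As a quick sanity check (a sketch not from the original post; the directory comes from the --pod-manifest-path flag in the unit file above, and MANIFEST_DIR is a hypothetical override for testing), one can confirm the manifest files are actually where the kubelet expects them:

```shell
# Check that the --pod-manifest-path directory contains the four
# static pod manifests the kubelet is supposed to launch.
MANIFEST_DIR="${MANIFEST_DIR:-/etc/kubernetes/manifests}"
for f in kube-apiserver kube-controller-manager kube-proxy kube-scheduler; do
  if [ -f "$MANIFEST_DIR/$f.yaml" ]; then
    echo "found: $f.yaml"
  else
    echo "missing: $f.yaml"
  fi
done
```

If all four show up here but no pods appear in rkt list, the kubelet is seeing the manifests and the failure is further down the chain (in its rkt runtime integration).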

kube-apiserver.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.6.0-beta.0_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=http://127.0.0.1:4001
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/24
    - --secure-port=443
    - --advertise-address=192.168.1.2
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --runtime-config=extensions/v1beta1/networkpolicies=true,batch/v2alpha1=true
    - --anonymous-auth=false
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        port: 8080
        path: /healthz
      initialDelaySeconds: 15
      timeoutSeconds: 15
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

kube-controller-manager.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: quay.io/coreos/hyperkube:v1.6.0-beta.0_coreos.0
    command:
    - /hyperkube
    - controller-manager
    - --master=http://127.0.0.1:8080
    - --leader-elect=true
    - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --root-ca-file=/etc/kubernetes/ssl/ca.pem
    resources:
      requests:
        cpu: 200m
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

kube-proxy.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
  annotations:
    rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.6.0-beta.0_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=http://127.0.0.1:8080
    - --cluster-cidr=10.2.0.0/16
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    - mountPath: /var/run/dbus
      name: dbus
      readOnly: false
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
  - hostPath:
      path: /var/run/dbus
    name: dbus

kube-scheduler.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: quay.io/coreos/hyperkube:v1.6.0-beta.0_coreos.0
    command:
    - /hyperkube
    - scheduler
    - --master=http://127.0.0.1:8080
    - --leader-elect=true
    resources:
      requests:
        cpu: 100m
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 15

If I were you, I would look into 'container runtime is down PLEG is not healthy'. –


@AntoineCotten - could it be that the rkt api-service isn't working properly? How can I verify that the rkt api-service functions correctly? I only found a Go program, which I couldn't compile properly in the coreos toolbox. – ufk


My experience with rkt is really basic, but as far as I can tell your kubelet starts and runs inside an rkt container itself, and that container doesn't have permission to interact with rkt. I would 1) try with k8s v1.5, and 2) try running the kubelet outside of the wrapper. –

Answer


Thanks to @AntoineCotten, the problem was easily resolved.

First, I downgraded hyperkube from v1.6.0-beta.0_coreos.0 to v1.5.3_coreos.0. I then noticed an error in the kubelet's log that made me realize there was a big typo in /opt/bin/host-rkt.

Instead of exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "$@", I had exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "\$@".

I escaped the $ when I pasted the command-line arguments, when I shouldn't have. So.. I'm not using 1.6.0-beta0 for now, and that's okay! I fixed the script, and now everything works again. Thanks!
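The quoting mistake is easy to reproduce outside of rkt. A minimal sketch with two toy wrapper scripts (hypothetical files, not the actual host-rkt): inside double quotes, \$ yields a literal dollar sign, so the broken wrapper passes the literal string $@ to the wrapped command instead of forwarding its arguments.

```shell
# wrapper-good forwards its arguments with "$@"; wrapper-bad reproduces
# the bug by escaping the dollar sign ("\$@"), so "$@" never expands.
cat > /tmp/wrapper-good.sh <<'EOF'
#!/bin/sh
printf '%s\n' "$@"
EOF
cat > /tmp/wrapper-bad.sh <<'EOF'
#!/bin/sh
printf '%s\n' "\$@"
EOF
chmod +x /tmp/wrapper-good.sh /tmp/wrapper-bad.sh

/tmp/wrapper-good.sh run --uuid-file-save=/tmp/x  # prints each argument on its own line
/tmp/wrapper-bad.sh run --uuid-file-save=/tmp/x   # prints the literal string: $@
```

This is why the kubelet's rkt invocations silently lost all of their arguments: rkt was being called with the single literal argument $@.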
