kubectl cluster-info gets 502 Bad Gateway errors

I deployed Kubernetes with juju deploy canonical-kubernetes. But when I run ./kubectl cluster-info as the Canonical Distribution of Kubernetes charm documentation says, I get the error below:

Error from server: an error on the server ("<html>\r\n<head><title>502
Bad Gateway</title></head>\r\n<body bgcolor=\"white\">\r\n<center>
<h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx/1.10.0
(Ubuntu)</center>\r\n</body>\r\n</html>") has prevented the request from succeeding
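
To confirm the 502 is coming from the load balancer itself rather than from kubectl, the kubeapi-load-balancer unit can be curled directly; its address, 10.181.160.42, appears in the juju status output below. The -k flag skips certificate verification, since the charms use self-signed easyrsa certificates.

$ curl -k https://10.181.160.42/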

The juju status output:

MODEL    CONTROLLER  CLOUD/REGION         VERSION
default  lxd-test    localhost/localhost  2.0-rc3

APP                    VERSION  STATUS       SCALE  CHARM                  STORE       REV  OS      NOTES
easyrsa                3.0.1    active           1  easyrsa                jujucharms    2  ubuntu  
elasticsearch                   active           2  elasticsearch          jujucharms   19  ubuntu  
etcd                   2.2.5    active           3  etcd                   jujucharms   13  ubuntu  
filebeat                        active           4  filebeat               jujucharms    5  ubuntu  
flannel                0.6.1    waiting          4  flannel                jujucharms    3  ubuntu  
kibana                          active           1  kibana                 jujucharms   15  ubuntu  
kubeapi-load-balancer  1.10.0   active           1  kubeapi-load-balancer  jujucharms    2  ubuntu  exposed
kubernetes-master      1.4.0    maintenance      1  kubernetes-master      jujucharms    3  ubuntu  
kubernetes-worker      1.4.0    waiting          3  kubernetes-worker      jujucharms    3  ubuntu  exposed
topbeat                         active           3  topbeat                jujucharms    5  ubuntu  

UNIT                      WORKLOAD     AGENT      MACHINE  PUBLIC-ADDRESS  PORTS            MESSAGE
easyrsa/0*                active       idle       0        10.181.160.79                    Certificate Authority connected.
elasticsearch/0*          active       idle       1        10.181.160.62   9200/tcp         Ready
elasticsearch/1           active       idle       2        10.181.160.72   9200/tcp         Ready
etcd/0*                   active       idle       3        10.181.160.41   2379/tcp         Healthy with 3 known peers. (leader)
etcd/1                    active       idle       4        10.181.160.135  2379/tcp         Healthy with 3 known peers.
etcd/2                    active       idle       5        10.181.160.204  2379/tcp         Healthy with 3 known peers.
kibana/0*                 active       idle       6        10.181.160.54   80/tcp,9200/tcp  ready
kubeapi-load-balancer/0*  active       idle       7        10.181.160.42   443/tcp          Loadbalancer ready.
kubernetes-master/0*      maintenance  idle       8        10.181.160.208                   Rendering authentication templates.
  filebeat/0              active       idle                10.181.160.208                   Filebeat ready.
  flannel/0*              waiting      idle                10.181.160.208                   Flannel is starting up.
kubernetes-worker/0*      waiting      idle       9        10.181.160.94                    Waiting for cluster-manager to initiate start.
  filebeat/1*             active       idle                10.181.160.94                    Filebeat ready.
  flannel/1               waiting      idle                10.181.160.94                    Flannel is starting up.
  topbeat/0               active       idle                10.181.160.94                    Topbeat ready.
kubernetes-worker/1       waiting      idle       10       10.181.160.95                    Waiting for cluster-manager to initiate start.
  filebeat/2              active       idle                10.181.160.95                    Filebeat ready.
  flannel/2               waiting      idle                10.181.160.95                    Flannel is starting up.
  topbeat/1*              active       executing           10.181.160.95                    (update-status) Topbeat ready.
kubernetes-worker/2       waiting      idle       11       10.181.160.148                   Waiting for cluster-manager to initiate start.
  filebeat/3              active       idle                10.181.160.148                   Filebeat ready.
  flannel/3               waiting      idle                10.181.160.148                   Flannel is starting up.
  topbeat/2               active       idle                10.181.160.148                   Topbeat ready.

MACHINE  STATE    DNS             INS-ID          SERIES  AZ
0        started  10.181.160.79   juju-23ce86-0   xenial  
1        started  10.181.160.62   juju-23ce86-1   trusty  
2        started  10.181.160.72   juju-23ce86-2   trusty  
3        started  10.181.160.41   juju-23ce86-3   xenial  
4        started  10.181.160.135  juju-23ce86-4   xenial  
5        started  10.181.160.204  juju-23ce86-5   xenial  
6        started  10.181.160.54   juju-23ce86-6   trusty  
7        started  10.181.160.42   juju-23ce86-7   xenial  
8        started  10.181.160.208  juju-23ce86-8   xenial  
9        started  10.181.160.94   juju-23ce86-9   xenial  
10       started  10.181.160.95   juju-23ce86-10  xenial  
11       started  10.181.160.148  juju-23ce86-11  xenial  

RELATION           PROVIDES               CONSUMES               TYPE
certificates       easyrsa                kubeapi-load-balancer  regular
certificates       easyrsa                kubernetes-master      regular
certificates       easyrsa                kubernetes-worker      regular
peer               elasticsearch          elasticsearch          peer
elasticsearch      elasticsearch          filebeat               regular
rest               elasticsearch          kibana                 regular
elasticsearch      elasticsearch          topbeat                regular
cluster            etcd                   etcd                   peer
etcd               etcd                   flannel                regular
etcd               etcd                   kubernetes-master      regular
juju-info          filebeat               kubernetes-master      regular
juju-info          filebeat               kubernetes-worker      regular
sdn-plugin         flannel                kubernetes-master      regular
sdn-plugin         flannel                kubernetes-worker      regular
loadbalancer       kubeapi-load-balancer  kubernetes-master      regular
kube-api-endpoint  kubeapi-load-balancer  kubernetes-worker      regular
beats-host         kubernetes-master      filebeat               subordinate
host               kubernetes-master      flannel                subordinate
kube-dns           kubernetes-master      kubernetes-worker      regular
beats-host         kubernetes-worker      filebeat               subordinate
host               kubernetes-worker      flannel                subordinate
beats-host         kubernetes-worker      topbeat                subordinate
asked 13 October 2016 at 05:04

1 answer

This seems to be because you are deploying Kubernetes on LXD. According to the README for the Canonical Distribution of Kubernetes:

kubernetes-master, kubernetes-worker, kubeapi-load-balancer, and etcd are not supported on LXD at this time.

This is a limitation between Docker and LXD, and one we hope to sort out soon. In the meantime, those components must be run on at least a VM.

You can do this by hand with LXD: deploy the rest of the components on LXD, then manually launch a few KVM instances on your machine for the unsupported ones.
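
A rough sketch of that mixed setup, assuming uvtool is used for the KVM guests and that Juju's manual provisioning can add them to this model; the guest name, IP placeholder, and machine number below are illustrative only:

# Create a KVM guest outside of Juju; uvtool is one option on Ubuntu
# (assumes a xenial image was synced with uvt-simplestreams-libvirt first).
$ uvt-kvm create kube-master0 release=xenial
# Register the running guest with the model via manual provisioning.
$ juju add-machine ssh:ubuntu@<kube-master0-ip>
# Place the charms that cannot run in LXD on that machine,
# assuming it was registered as machine 12:
$ juju deploy kubernetes-master --to 12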

I will try to put together a clean set of instructions for this and post them here.
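
In the meantime, you can watch just the applications affected by the LXD limitation, since juju status accepts application names as filters:

$ juju status kubernetes-master kubernetes-worker kubeapi-load-balancer etcd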

answered 23 November 2019 at 10:35
