I deployed Kubernetes v1.18.2 (CDK) using Juju (on bionic). coredns resolves via /etc/resolv.conf; see the configmap below:
Name: coredns
Namespace: kube-system
Labels: cdk-addons=true
Annotations:
Data
====
Corefile:
----
.:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
Events: <none>
There is a known issue about this at https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues concerning /etc/resolv.conf versus /run/systemd/resolve/resolv.conf.
I edited the coredns config to point it at /run/systemd/resolve/resolv.conf, but the settings keep getting reverted.
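For concreteness, the edit in question was to the forward line of the Corefile. A minimal local sketch of that substitution (the file name and the trimmed Corefile content here are illustrative; in the cluster the same change was made via kubectl edit on the coredns configmap):

```shell
# Illustrative only: rewrite the CoreDNS forward target in a local copy
# of the Corefile, the same change attempted via `kubectl edit`.
cat > Corefile <<'EOF'
.:53 {
    forward . /etc/resolv.conf
    cache 30
}
EOF

# Point the forward plugin at systemd-resolved's real upstream list.
sed -i 's|forward \. /etc/resolv\.conf|forward . /run/systemd/resolve/resolv.conf|' Corefile

grep 'forward' Corefile
```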
I also tried setting kubelet-extra-config to {resolvConf: /run/systemd/resolve/resolv.conf} and restarted the server; no change:
kubelet-extra-config:
  default: '{}'
  description: |
    Extra configuration to be passed to kubelet. Any values specified in this
    config will be merged into a KubeletConfiguration file that is passed to
    the kubelet service via the --config flag. This can be used to override
    values provided by the charm.
    Requires Kubernetes 1.10+.
    The value for this config must be a YAML mapping that can be safely
    merged with a KubeletConfiguration file. For example:
      {evictionHard: {memory.available: 200Mi}}
    For more information about KubeletConfiguration, see upstream docs:
    https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/
  source: user
  type: string
  value: '{resolvConf: /run/systemd/resolve/resolv.conf}'
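The value shown above was set through the charm config; a sketch of the Juju commands assumed here (the application name kubernetes-worker is an assumption — it may be kubernetes-master or another name in your model):

```shell
# Assumed application name; adjust to your Juju model.
juju config kubernetes-worker kubelet-extra-config='{resolvConf: /run/systemd/resolve/resolv.conf}'

# Confirm the value stuck; this is roughly where the output above came from.
juju config kubernetes-worker kubelet-extra-config
```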
However, I do see the change in the kubelet configuration when checking it as described at https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/:
...
"resolvConf": "/run/systemd/resolve/resolv.conf",
...
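The live kubelet configuration above was read from the node's configz endpoint. A sketch of extracting resolvConf from a saved response (the inline JSON is a trimmed, illustrative stand-in for the real payload, and <node-name> is a placeholder):

```shell
# The real response comes via the API server proxy, roughly:
#   kubectl proxy --port=8001 &
#   curl -s http://localhost:8001/api/v1/nodes/<node-name>/proxy/configz > configz.json
# Below, a trimmed sample stands in for the fetched file.
cat > configz.json <<'EOF'
{"kubeletconfig": {"resolvConf": "/run/systemd/resolve/resolv.conf"}}
EOF

# Pull out the effective resolvConf path.
python3 -c 'import json; print(json.load(open("configz.json"))["kubeletconfig"]["resolvConf"])'
```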
And this is the error in the coredns pod:
E0429 09:16:42.172959 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.152.183.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
The kubernetes service:
default kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 4h42m <none>
And here is the coredns deployment:
Name:                   coredns
Namespace:              kube-system
CreationTimestamp:      Wed, 29 Apr 2020 09:15:07 +0000
Labels:                 cdk-addons=true
                        cdk-restart-on-ca-change=true
                        k8s-app=kube-dns
                        kubernetes.io/name=CoreDNS
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=kube-dns
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=kube-dns
  Service Account:  coredns
  Containers:
   coredns:
    Image:       rocks.canonical.com:443/cdk/coredns/coredns-amd64:1.6.7
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:     100m
      memory:  70Mi
    Liveness:   http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
  Volumes:
   config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  Priority Class Name:  system-cluster-critical
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing False ProgressDeadlineExceeded
OldReplicaSets: <none>
NewReplicaSet: coredns-6b59b8bd9f (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set coredns-6b59b8bd9f to 1
Can anyone help, please?