Kubernetes Multi-Tenant
How do you make users coexist, and isolate them, on a single cluster?
https://www.timelab.io
Multi-Tenant?
● Control access to the Kubernetes API
  ○ Identify users (Authentication)
  ○ Control what they try to access (Authorization)
  ○ Control the content of the request (Admission Control)
● Pod Security Policies
● Isolate users from one another
  ○ Network: Network Policies
  ○ Docker images
Controlling Access to the Kubernetes API
Source: https://kubernetes.io/docs/admin/accessing-the-api/
Authentication
Source: https://kubernetes.io/docs/admin/accessing-the-api/
Authentication
● Kubernetes has no "user" resource
● Kubernetes does not provide an Identity Provider
● The API server supports OpenID Connect
● Deploy your own Identity Provider
  ○ CoreOS dex
  ○ Keycloak
  ○ CloudFoundry UAA
● OR externalize authentication
  ○ Google
  ○ Salesforce
  ○ Azure Active Directory
  ○ Auth0
External Authentication (OpenID Connect + Auth0): Configuring the API server
Add to the startup options:
--oidc-issuer-url=https://timelab.auth0.com/
--oidc-client-id=xxxxxxxxxxxxxxxxxxxxxxxx
--oidc-username-claim=email
External Authentication (OpenID Connect + Auth0): Configuring kubectl
● Retrieve a JWT token:
curl --request POST \
  --url 'https://timelab.auth0.com/oauth/token' \
  --header 'content-type: application/json' \
  --data "{\"grant_type\":\"password\", \"client_id\":\"xxxxx\", \"username\":\"olivier\", \"password\":\"$password\", \"connection\":\"Username-Password-Authentication\", \"scope\":\"openid\"}" \
  | jq -r '.id_token'
● Then pass it to kubectl:
kubectl config set-credentials olivier --token=$TOKEN
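The `set-credentials` command above stores the token in the kubeconfig file; the resulting `users` entry looks roughly like the sketch below (the token value is abbreviated, and a default `~/.kube/config` location is assumed):

```yaml
# ~/.kube/config (excerpt): entry written by `kubectl config set-credentials`
users:
- name: olivier
  user:
    token: eyJhbGciOiJSUzI1NiIs...   # the JWT id_token returned by Auth0
```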
Authorization
Source: https://kubernetes.io/docs/admin/accessing-the-api/
Authorization: Role-Based Access Control (RBAC)
● Enabled by default since Kubernetes v1.6
● Resources involved:
  ● Role
  ● ClusterRole
  ● RoleBinding
  ● ClusterRoleBinding
Cluster Role - Example
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: view-pods
rules:
- apiGroups:
  - ""                # core API group
  resources:
  - pods              # resource name
  verbs:              # list of permissions
  - get
  - list
  - watch
Role Binding - Example
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: inspector-can-view-pods
  namespace: default
subjects:             # who is the role assigned to?
- kind: ServiceAccount
  name: inspector
  namespace: default
roleRef:              # role to assign
  kind: ClusterRole
  name: view-pods
  apiGroup: rbac.authorization.k8s.io
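A ClusterRole is defined cluster-wide; when a permission should only ever exist in a single namespace, a namespaced Role can be used instead and referenced from the RoleBinding with `kind: Role`. A minimal sketch, reusing the same rules as the view-pods example:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: view-pods
  namespace: default   # a Role is scoped to one namespace
rules:
- apiGroups:
  - ""                 # core API group
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
```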
Admission Control
Source: https://kubernetes.io/docs/admin/accessing-the-api/
Admission Control
● Implemented by plugins (admission controllers)
  ○ added to the API server
  ○ configurable only at API server startup
● Dynamic Admission Control (alpha since v1.7)
● Intercepts requests made to the Kubernetes API
  ○ Can reject a request
  ○ Can modify a request
● Example: LimitRanger
  Limits or fixes the amount of resources used by pods
Admission Control - Limit Ranger - Example 1
kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        cpu: "500m"
      requests:
        cpu: "100m"

kind: LimitRange
apiVersion: v1
metadata:
  name: cpu-limit
spec:
  limits:
  - max:
      cpu: "800m"
    min:
      cpu: "200m"
    type: Container

$ kubectl create -f pod.yaml
Error from server (Forbidden): error when creating "pod.yaml": pods "nginx" is forbidden: minimum cpu usage per Container is 200m, but request is 100m.
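To satisfy the cpu-limit policy, the pod's request has to be raised to at least the 200m minimum. A sketch of the corrected manifest:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        cpu: "500m"    # still below the 800m max
      requests:
        cpu: "200m"    # raised to the LimitRange minimum
```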
Admission Control - Limit Ranger - Example 2
kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx

kind: LimitRange
apiVersion: v1
metadata:
  name: default-limits
spec:
  limits:
  - defaultRequest:
      cpu: 100m
      memory: 128M
    type: Container

$ kubectl get pod nginx -o jsonpath='{.spec.containers[*].resources.requests}'
map[cpu:100m memory:128M]
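LimitRange constrains individual containers; the related ResourceQuota admission controller caps a namespace's total consumption instead. A hypothetical sketch (the name and values below are illustrative, not from the original deck):

```yaml
kind: ResourceQuota
apiVersion: v1
metadata:
  name: team-quota
  namespace: default
spec:
  hard:
    requests.cpu: "4"        # total CPU requested by all pods in the namespace
    requests.memory: 8Gi     # total memory requested by all pods
    pods: "20"               # maximum number of pods
```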
Pod Security Policies
kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /home
      name: host-volume
  volumes:
  - name: host-volume
    hostPath:
      path: /home/kubernetes/bin
Pod Security Policies!
● Cluster-level resource
● Controls what a pod can do
● Managed by an admission controller
● Must be enabled in the API server:
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PodSecurityPolicy,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
Pod Security Policies
kind: PodSecurityPolicy
apiVersion: extensions/v1beta1
metadata:
  name: restricted
spec:
  privileged: false          # prevents access to host resources
  fsGroup:
    rule: 'RunAsAny'
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  volumes:                   # restricts the allowed volume types
  - 'secret'
  - 'configMap'
  - 'persistentVolumeClaim'
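With only the restricted policy usable, a pod that asks for host privileges would be rejected by the PodSecurityPolicy admission controller. A hypothetical example (this manifest is illustrative, not from the original deck):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: privileged-nginx
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      privileged: true   # rejected: the restricted policy sets privileged: false
```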
Pod Security Policies
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: restricted-psp-user
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - restricted
  verbs:
  - use

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: restrict-authenticated
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:authenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: restricted-psp-user
Network Policies
● Specify how pods are allowed to communicate with each other
● Implemented by a kubelet network plugin
  ○ Calico
  ○ Romana
  ○ Weave Net
  ○ Canal (Calico + Flannel)
● Replace the network plugin in the kubelet configuration:
--cni-bin-dir=/home/kubernetes/bin --network-plugin=cni
● At timelab we use Canal (Calico + Flannel)
https://github.com/projectcalico/canal
Network Policies
kind: Namespace
apiVersion: v1
metadata:
  name: test
  labels:
    namespace: test

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
  namespace: test
spec:
  podSelector:      # empty selector: applies to every pod in the namespace
Network Policies
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-connect-to-database
  namespace: test
spec:
  podSelector:             # destination pods
    matchLabels:
      role: database
  ingress:
  - from:
    - podSelector:         # source pods
        matchLabels:
          role: frontend
    ports:                 # ports on the destination pods
    - protocol: TCP
      port: 9042
Network Policies
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-intra-namespace
  namespace: test
spec:
  podSelector:
  ingress:
  - from:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-trusted
  namespace: test
spec:
  podSelector:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          namespace: trusted
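NetworkPolicy can also restrict outgoing traffic: later Kubernetes versions (v1.8+) add egress rules and an explicit policyTypes field. A sketch of a default-deny for egress, assuming such a version:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-egress
  namespace: test
spec:
  podSelector: {}    # applies to every pod in the namespace
  policyTypes:
  - Egress           # no egress rule listed: all outgoing traffic is denied
```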
Docker Registries
● Authentication + TLS
● AlwaysPullImages Admission Controller
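For a private, authenticated registry, pods reference registry credentials stored in a Secret; combined with the AlwaysPullImages admission controller, this prevents one tenant from running another tenant's locally cached images. A sketch (the secret name and registry URL are hypothetical):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: app
spec:
  imagePullSecrets:
  - name: registry-credentials    # Secret of type kubernetes.io/dockerconfigjson
  containers:
  - name: app
    image: registry.example.com/team/app:1.0
```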
Thank you!
Contact:
Timelab
1 rue Mahatma Gandhi
Aix-en-Provence, 13100
[email protected]
www.timelab.io