Today we will explore the setup of Azure AD pod identity in an Azure Kubernetes Services cluster.
What is Azure AD pod identity?
Azure AD pod identity enables pods running inside your AKS cluster to use a user-assigned identity stored in Azure AD to access other Azure resources. There are many use cases where this feature comes in handy, such as accessing secrets stored in an Azure Key Vault or an Azure SQL database with Azure AD integration, all without the need to hand out a userid/password pair.
Be aware though that this feature is based on an open source project and that you won’t receive support from Azure staff in case of issues. The documentation of the AAD pod identity project is available on the project’s GitHub pages.
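To make the mechanism concrete, here is a minimal sketch of the token request a pod ends up making. The assumption here is that aad-pod-identity’s NMI component intercepts calls to the standard Azure Instance Metadata Service (IMDS) managed-identity endpoint and answers on behalf of the identity matched to the pod; the resource URI below targets Azure SQL.

```shell
# URL-encoded resource URI for Azure SQL (https://database.windows.net/)
resource='https%3A%2F%2Fdatabase.windows.net%2F'

# Standard IMDS managed-identity token endpoint; on a node running
# aad-pod-identity, the NMI pod intercepts this call.
url="http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=${resource}"
echo "$url"

# Inside a pod labeled with aadpodidbinding you would then run:
#   curl -s -H "Metadata: true" "$url"
```

The `Metadata: true` header is required by IMDS; without aad-pod-identity, the same call returns a token for the node’s own managed identity instead of the pod-scoped one.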
Prerequisites
- AKS cluster with a Managed System Identity (important).
- An Azure SQL server with an Azure AD admin (important) and a database.
- An Azure Container Registry to store the Docker image that will be used to test the aad-pod-identity feature.
Configuration
First we need to check the Managed System Identity assigned to the AKS cluster.
az aks show -g myAKSresourcegroup -n myAKSCluster --query identityProfile.kubeletidentity.clientId -otsv
Assign the roles Managed Identity Operator and Virtual Machine Contributor to the Managed System Identity of the cluster (use the client ID retrieved above as the assignee).
az role assignment create --role "Managed Identity Operator" --assignee <AKS cluster ID> --scope /subscriptions/<SubscriptionID>/resourcegroups/<NodeResourceGroup>
az role assignment create --role "Virtual Machine Contributor" --assignee <AKS cluster ID> --scope /subscriptions/<SubscriptionID>/resourcegroups/<NodeResourceGroup>
Note: <NodeResourceGroup> is the resource group where the nodes of the AKS cluster are defined. It is the resource group whose name starts with MC_ if you do not specify a name during the cluster creation.
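The default name follows the pattern MC_&lt;resourceGroup&gt;_&lt;clusterName&gt;_&lt;location&gt;; a quick sketch, using the resource-group and cluster names from above and westeurope as an example location:

```shell
# Default node resource group naming convention for AKS.
# westeurope is an example location; use your cluster's region.
rg=myAKSresourcegroup
cluster=myAKSCluster
location=westeurope
echo "MC_${rg}_${cluster}_${location}"

# You can also query the exact name directly from the cluster:
#   az aks show -g myAKSresourcegroup -n myAKSCluster --query nodeResourceGroup -otsv
```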
Installation of aad-pod-identity in the AKS cluster
We will use Helm 3 to deploy aad-pod-identity into the AKS cluster. By default, all resources are installed in the default namespace.
# Add aad-pod-identity helm chart repo
helm repo add aad-pod-identity https://raw.githubusercontent.com/Azure/aad-pod-identity/master/charts
# Helm 3
helm install aad-pod-identity aad-pod-identity/aad-pod-identity
Check the installation
kubectl --namespace=default get pods -l "app.kubernetes.io/component=mic"
NAME                                   READY   STATUS    RESTARTS   AGE
aad-pod-identity-mic-84fd88896-8dpgr   1/1     Running   0          42s
aad-pod-identity-mic-84fd88896-vzj69   1/1     Running   0          42s
kubectl --namespace=default get pods -l "app.kubernetes.io/component=nmi"
NAME                         READY   STATUS    RESTARTS   AGE
aad-pod-identity-nmi-ttf9p   1/1     Running   0          63s
Create an Azure AD user-assigned identity
Now we create an Azure AD user-assigned identity that will be used by our pod to connect to the Azure SQL Server database.
# Create the user-assigned identity
az identity create -g <NodeResourceGroup> -n sqlpodid
# Get the <IdentityClientId> of sqlpodid
az identity show -g <NodeResourceGroup> -n sqlpodid --query clientId -otsv
# Get <IdentityId> of sqlpodid
az identity show -g <NodeResourceGroup> -n sqlpodid --query id -otsv
# The identity should have read access to the resource group
az role assignment create --role Reader --assignee <IdentityId> --scope /subscriptions/<SubscriptionID>/resourcegroups/<NodeResourceGroup> --query id -otsv
Create Azure SQL Server database contained user
Create an Azure SQL Server database contained user that is mapped to an Azure AD identity. We can map this database user to a user-assigned identity or to an Azure AD security group of which the user-assigned identity is a member.
Important: this action can only be executed by an Azure AD SQL admin. The reason is that during the creation of the user, Azure SQL database must be able to reach Azure AD on behalf of the logged-in user.
/* Create db contained user for user assigned identity
you can use the principal name or display name */
CREATE USER sqlpodid FROM EXTERNAL PROVIDER;
/* Assign the needed security roles to this db contained user */
ALTER ROLE db_datareader ADD MEMBER sqlpodid;
ALTER ROLE db_datawriter ADD MEMBER sqlpodid;
/* Check that the role assignment is correct */
SELECT DP1.name AS DatabaseRoleName,
       ISNULL(DP2.name, 'No members') AS DatabaseUserName
FROM sys.database_role_members AS DRM
RIGHT OUTER JOIN sys.database_principals AS DP1
    ON DRM.role_principal_id = DP1.principal_id
LEFT OUTER JOIN sys.database_principals AS DP2
    ON DRM.member_principal_id = DP2.principal_id
WHERE DP1.type = 'R'
ORDER BY DP1.name;
Create AzureIdentity in AKS cluster
Create a file named sqlpodid-aad-pod-id.yaml
with the following content:
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  name: sqlpodid
spec:
  type: 0
  resourceID: <IdentityId>
  clientID: <IdentityClientId>
Apply this file.
kubectl apply -f sqlpodid-aad-pod-id.yaml
Create AzureIdentityBinding in AKS cluster
Create and deploy a file sqlpodid-aad-pod-id-binding.yaml
that contains an AzureIdentityBinding that refers to the AzureIdentity created in the step before.
File content:
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: sqlpodid-binding
spec:
  azureIdentity: sqlpodid
  selector: sqlpodid
Deployment
kubectl apply -f sqlpodid-aad-pod-id-binding.yaml
Configure the AKS cluster to connect to Azure Container Registry
The application we use to test the AAD pod identity feature is a modified version of the dotnet core tutorial application that connects to Azure SQL Server. Read this article on my blog to discover how I modified the application to enable the use of managed identities and package it as a Docker image. I published the image in an Azure Container Registry that we link to our cluster.
# Link ACR myregistry to AKS cluster myAKSCluster
az aks update -n myAKSCluster -g myResourceGroup --attach-acr myregistry
Deploy the application in the AKS cluster
As a first test, I decided to deploy the application in the default namespace because it is not configured with any Calico network policies that could potentially interfere.
I used the following yaml manifest that contains
- a deployment
- a service
- an ingress rule
Note: I ran into an issue where the ingress rule rewrites the URLs generated by the application, but it still allows testing the application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnetcoresqldb-deployment
  labels:
    app: dotnetcoresqldb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dotnetcoresqldb
  template:
    metadata:
      labels:
        app: dotnetcoresqldb
        aadpodidbinding: sqlpodid
    spec:
      containers:
      - name: nginx
        image: myregistry.azurecr.io/samples/dotnetcoresqldb:latest
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/os: linux
---
apiVersion: v1
kind: Service
metadata:
  name: dotnetcoresqldb-service
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: dotnetcoresqldb
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dotnetcoresqldb-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: dotnetcoresqldb-service
          servicePort: 80
        path: /todo(/|$)(.*)
Once all components are deployed successfully, test the application by calling the external IP address assigned to the ingress load balancer followed by /todo.
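A quick sketch of such a probe; the address below is a placeholder, substitute the EXTERNAL-IP reported for your ingress controller service:

```shell
# 203.0.113.10 is a documentation placeholder, not a real ingress address.
ingress_ip=203.0.113.10
todo_url="http://${ingress_ip}/todo"
echo "$todo_url"

# Against a live cluster you would then run:
#   curl -s "$todo_url"
```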
Results: the application works flawlessly!
Remember that we configured the development namespace of our AKS cluster with some Calico network policies that restrict the network traffic for the resources inside this namespace. Let’s see if we can make the application work inside this namespace.
Deploy the application in a namespace with Calico network policies
After deploying the application in the development namespace, it fails with the following error message.
Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware[1]
An unhandled exception has occurred while executing the request.
System.AggregateException: One or more errors occurred. (Parameters: Connection String: [No connection string specified], Resource: https://database.windows.net/, Authority: . Exception Message: Tried the following 3 methods to get an access token, but none of them worked.
Parameters: Connection String: [No connection string specified], Resource: https://database.windows.net/, Authority: . Exception Message: Tried to get token using Managed Service Identity. Unable to connect to the Instance Metadata Service (IMDS). Skipping request to the Managed Service Identity (MSI) token endpoint.
Obviously, as our policies lock down network communication for all pods in the development namespace, the request to the MSI endpoint to obtain an identity token is blocked. The policy below solved that issue.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: msi-allow
  namespace: development
spec:
  order: 950
  # Applies to all endpoints in the namespace
  selector: all()
  types:
  - Egress
  egress:
  - action: Allow
    protocol: TCP
    destination:
      nets:
      - 127.0.0.1/32
      ports: [2579]
Apply this policy in the development namespace and the application works like a charm again.
Conclusion
AAD pod identity is a nice feature that requires some planning, certainly if you implement network policies in your AKS cluster. It enables you to deploy applications in pods that do not require secrets to connect to other Azure resources, thus enhancing security.
Take care.