Automating AKS Deployments like a Boss: Part 8 (AKS and AAD)
In recent weeks I’ve been helping several clients with AKS and RBAC. Specifically, how can we use Azure Kubernetes Service while still leveraging our identity management in AD (be it native or synced to AAD)? Let’s get started using our own Azure subscription. Recall that every Azure account includes Azure Active Directory for identity management (akin to IAM on AWS and GCP).
Pre-reqs
Create a Service Principal
We need to create an SP to use with AKS (and our Terraform). You can use the portal, but doing it from the command line is quite simple.
$ az ad sp create-for-rbac --skip-assignment
{
"appId": "910e0b04-80fd-40ef-80d3-9921f9d96420",
"displayName": "azure-cli-2019-07-04-18-51-08",
"name": "http://azure-cli-2019-07-04-18-51-08",
"password": "567a674f-7cba-4292-b7e6-abcdefabcd",
"tenant": "28c575f6-ade1-4838-8e7c-7e6d1ba0eb4a"
}
Create AD applications
Our cluster is going to leverage a client and a server application. Technically, the client is exposed via a public API and connects only to the server application, which interrogates our AD. These backend pieces need to be created up front and require someone with Administrator privileges (usually a Subscription Owner) to approve them.
Server
Go to AAD, App Registrations and choose “+ New Registration”
Give it a name like ADAccessServer (I used aks-rbac), limit it to your org directory, and give it a URL. This URL isn’t used, but I can confirm things don’t work very well if you skip it.
At the top we will see some key pieces of information to take note of:
Display name : aks-rbac
Application (client) ID : f61ad37a-3a16-498b-ad8c-812c1a82d541
Directory (tenant) ID : 28c575f6-ade1-4838-8e7c-7e6d1ba0eb4a
Object ID : F3f2929f-5d9a-49c2-9016-d4a5f8e398b3
The Application ID (ending in d541) will be our Server App ID. The Directory ID is our tenant ID (ending in b4a).
Click on manifest and change the groupMembershipClaims from null to “all”.
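If you prefer to stay on the command line, the same manifest change can likely be made with the Azure CLI; a sketch using the server Application ID from above:
# Set groupMembershipClaims in the app manifest without opening the portal editor
$ az ad app update --id f61ad37a-3a16-498b-ad8c-812c1a82d541 --set groupMembershipClaims=All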
Next, create a client secret
This value (G*7zQE8?dntvN=czRtoU5V]O2G2VFbIl) will be our server application secret.
Next we need to add an API permission
Pick Delegated Permissions
And Directory.Read.All
We also need User.Read (though it is likely already selected)
Next we’ll move to Application Permissions (don’t worry, we’ll save all our changes at once):
And choose Directory.Read.All
Now we are ready to save and Grant Permissions:
The “Grant admin consent” button may be grayed out if you are not a directory admin (e.g. a Subscription Owner). An admin will need to click Grant in order for this RBAC system to work.
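If your admin prefers the command line, consent can likely be granted with the Azure CLI as well; a sketch using the server Application ID from above:
# Grant admin consent for the permissions requested by the server application
$ az ad app permission admin-consent --id f61ad37a-3a16-498b-ad8c-812c1a82d541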
Once granted, you’ll see a green success message:
Next we need to add a scope:
Fill in the defaults and leave it for admins only, then click “Add Scope”
The Client
Next, do a new App registration for the client
Note the details in the output after clicking register:
Display name : AKSAzureADClient
Application (client) ID : dc248705-0b14-4ff2-82c2-5b6a5260c62b
Directory (tenant) ID : 28c575f6-ade1-4838-8e7c-7e6d1ba0eb4a
Object ID : 126f554b-a693-40a4-80b7-7670ff2476a9
The Application ID in this case is “dc248705-0b14-4ff2-82c2-5b6a5260c62b”
Like the server, we’ll need to set some API permissions; however, in this case, select “My APIs” and pick the server we created above.
Once an admin has granted permission we should see a green message:
Under Authentication, make sure to set the “Default client type” to Yes (treat the application as a public client).
Click save.
The last piece of information we need is the Tenant ID, which we saw with the applications; as a double check, you can look at the Directory settings.
The Directory ID is the tenant ID:
28c575f6-ade1-4838-8e7c-7e6d1ba0eb4a
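A quick CLI check works too, assuming you’re logged into the right subscription:
$ az account show --query tenantId -o tsv
28c575f6-ade1-4838-8e7c-7e6d1ba0eb4a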
At this point we have all the details we need:
Server Application ID: f61ad37a-3a16-498b-ad8c-812c1a82d541
Client Application ID: dc248705-0b14-4ff2-82c2-5b6a5260c62b
Server Application Secret: G*7zQE8?dntvN=czRtoU5V]O2G2VFbIl
Tenant ID: 28c575f6-ade1-4838-8e7c-7e6d1ba0eb4a
# The Service Principal
"appId": "910e0b04-80fd-40ef-80d3-9921f9d96420",
"password": "567a674f-7cba-4292-b7e6-abcdefabacd",
"tenant": "28c575f6-ade1-4838-8e7c-7e6d1ba0eb4a"
Method 1: Terraform
Let’s update the variables.tf:
variable "client_id" {
default="910e0b04-80fd-40ef-80d3-9921f9d96420"
}
variable "client_secret" {
default="567a674f-7cba-4292-b7e6-abcdefabcd"
}
variable "server_app_id" {
default="f61ad37a-3a16-498b-ad8c-812c1a82d541"
}
variable "server_app_secret" {
default="G*7zQE8?dntvN=czRtoU5V]O2G2VFbIl"
}
variable "client_app_id" {
default="dc248705-0b14-4ff2-82c2-5b6a5260c62b"
}
variable "tenant_id" {
default="28c575f6-ade1-4838-8e7c-7e6d1ba0eb4a"
}
variable "agent_count" {
default = 3
}
variable "ssh_public_key" {
default = "~/.ssh/id_rsa.pub"
}
variable "dns_prefix" {
default = "idj-k8stest"
}
variable cluster_name {
default = "idj-k8stest"
}
variable resource_group_name {
default = "azure-k8stest"
}
variable location {
default = "Central US"
}
variable log_analytics_workspace_name {
default = "testLogAnalyticsWorkspaceNameNEW"
}
# refer https://azure.microsoft.com/global-infrastructure/services/?products=monitor for log analytics available regions
variable log_analytics_workspace_location {
default = "eastus"
}
# refer https://azure.microsoft.com/pricing/details/monitor/ for log analytics pricing
variable log_analytics_workspace_sku {
default = "PerGB2018"
}
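Note that hard-coding secrets as defaults is fine for a quick demo, but for anything shared you’d want to drop those defaults and pass the sensitive values in via environment variables instead; Terraform picks up TF_VAR_<name> for any declared variable:
$ export TF_VAR_client_secret='567a674f-7cba-4292-b7e6-abcdefabcd'
$ export TF_VAR_server_app_secret='G*7zQE8?dntvN=czRtoU5V]O2G2VFbIl'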
And add a block for AAD in the main.tf as well:
resource "azurerm_resource_group" "k8s" {
name = "${var.resource_group_name}"
location = "${var.location}"
}
resource "azurerm_log_analytics_workspace" "test" {
name = "${var.log_analytics_workspace_name}"
location = "${var.log_analytics_workspace_location}"
resource_group_name = "${azurerm_resource_group.k8s.name}"
sku = "${var.log_analytics_workspace_sku}"
}
resource "azurerm_log_analytics_solution" "test" {
solution_name = "ContainerInsights"
location = "${azurerm_log_analytics_workspace.test.location}"
resource_group_name = "${azurerm_resource_group.k8s.name}"
workspace_resource_id = "${azurerm_log_analytics_workspace.test.id}"
workspace_name = "${azurerm_log_analytics_workspace.test.name}"
plan {
publisher = "Microsoft"
product = "OMSGallery/ContainerInsights"
}
}
resource "azurerm_kubernetes_cluster" "k8s" {
name = "${var.cluster_name}"
location = "${azurerm_resource_group.k8s.location}"
resource_group_name = "${azurerm_resource_group.k8s.name}"
dns_prefix = "${var.dns_prefix}"
linux_profile {
admin_username = "ubuntu"
ssh_key {
key_data = "${file("${var.ssh_public_key}")}"
}
}
role_based_access_control {
enabled = true
azure_active_directory {
server_app_id = "${var.server_app_id}"
server_app_secret = "${var.server_app_secret}"
client_app_id = "${var.client_app_id}"
tenant_id = "${var.tenant_id}"
}
}
agent_pool_profile {
name = "agentpool"
count = "${var.agent_count}"
vm_size = "Standard_DS1_v2"
os_type = "Linux"
os_disk_size_gb = 30
}
service_principal {
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
}
addon_profile {
oms_agent {
enabled = true
log_analytics_workspace_id = "${azurerm_log_analytics_workspace.test.id}"
}
}
tags {
Environment = "Development"
}
}
The output.tf stays as it was in our last AKS guide:
output "client_key" {
value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.client_key}"
}
output "client_certificate" {
value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate}"
}
output "cluster_ca_certificate" {
value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate}"
}
output "cluster_username" {
value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.username}"
}
output "cluster_password" {
value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.password}"
}
output "kube_config" {
value = "${azurerm_kubernetes_cluster.k8s.kube_config_raw}"
}
output "host" {
value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.host}"
}
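If this is a fresh working directory, initialize the providers first:
$ terraform init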
Now we can plan:
$ terraform plan -out out.plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
azurerm_resource_group.k8s: Refreshing state... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-8fed74cbb22d/resourceGroups/azure-k8stest)
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ azurerm_kubernetes_cluster.k8s
id: <computed>
addon_profile.#: "1"
addon_profile.0.oms_agent.#: "1"
addon_profile.0.oms_agent.0.enabled: "true"
addon_profile.0.oms_agent.0.log_analytics_workspace_id: "${azurerm_log_analytics_workspace.test.id}"
agent_pool_profile.#: "1"
agent_pool_profile.0.count: "3"
agent_pool_profile.0.dns_prefix: <computed>
agent_pool_profile.0.fqdn: <computed>
agent_pool_profile.0.max_pods: <computed>
agent_pool_profile.0.name: "agentpool"
agent_pool_profile.0.os_disk_size_gb: "30"
agent_pool_profile.0.os_type: "Linux"
agent_pool_profile.0.type: "AvailabilitySet"
agent_pool_profile.0.vm_size: "Standard_DS1_v2"
dns_prefix: "idj-k8stest"
fqdn: <computed>
kube_admin_config.#: <computed>
kube_admin_config_raw: <computed>
kube_config.#: <computed>
kube_config_raw: <computed>
kubernetes_version: <computed>
linux_profile.#: "1"
linux_profile.0.admin_username: "ubuntu"
linux_profile.0.ssh_key.#: "1"
linux_profile.0.ssh_key.0.key_data: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8kZzEtk7J7Mvv4hJIE1jcQ0q6h41g5hUwPtOUPjNWPIKm4djmy4+C4+Gtsxxh5jUFooAbwl+DubFZogbU1Q5aLOGKSsD/K4XimTyOhr90DO47naCnaSS0Rg0XyZlvQsHKwcXGuGOleCMhB2gQ70QAK4X/N1dvGfqCDdKBbTORKQyz0WHWo7YGA6YAgtvzn1C5W0l7cT0AXgOfFEAGF31nqqTuRVBbBmosq1qhXJlVt+PO32MqmxZv44ZuCP1jWjyTz1rbQ1OLHCxP/+eDIlpOlkYop4XgwiHHMRn/rxHFTKOAxtFOccFw9KEnDM0j0M5FRBj5qU1BCa/6jhnu7LIz"
location: "centralus"
name: "idj-k8stest"
network_profile.#: <computed>
node_resource_group: <computed>
resource_group_name: "azure-k8stest"
role_based_access_control.#: "1"
role_based_access_control.0.azure_active_directory.#: "1"
role_based_access_control.0.azure_active_directory.0.client_app_id: "dc248705-0b14-4ff2-82c2-5b6a5260c62b"
role_based_access_control.0.azure_active_directory.0.server_app_id: "f61ad37a-3a16-498b-ad8c-812c1a82d541"
role_based_access_control.0.azure_active_directory.0.server_app_secret: <sensitive>
role_based_access_control.0.azure_active_directory.0.tenant_id: "28c575f6-ade1-4838-8e7c-7e6d1ba0eb4a"
role_based_access_control.0.enabled: "true"
service_principal.#: "1"
service_principal.326775546.client_id: "910e0b04-80fd-40ef-80d3-9921f9d96420"
service_principal.326775546.client_secret: <sensitive>
tags.%: "1"
tags.Environment: "Development"
+ azurerm_log_analytics_solution.test
id: <computed>
location: "eastus"
plan.#: "1"
plan.0.name: <computed>
plan.0.product: "OMSGallery/ContainerInsights"
plan.0.publisher: "Microsoft"
resource_group_name: "azure-k8stest"
solution_name: "ContainerInsights"
workspace_name: "testLogAnalyticsWorkspaceNameNEW"
workspace_resource_id: "${azurerm_log_analytics_workspace.test.id}"
+ azurerm_log_analytics_workspace.test
id: <computed>
location: "eastus"
name: "testLogAnalyticsWorkspaceNameNEW"
portal_url: <computed>
primary_shared_key: <computed>
resource_group_name: "azure-k8stest"
retention_in_days: <computed>
secondary_shared_key: <computed>
sku: "PerGB2018"
tags.%: <computed>
workspace_id: <computed>
Plan: 3 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
This plan was saved to: out.plan
To perform exactly these actions, run the following command to apply:
terraform apply "out.plan"
And apply the plan:
$ terraform apply "out.plan"
azurerm_log_analytics_workspace.test: Creating...
location: "" => "eastus"
name: "" => "testLogAnalyticsWorkspaceNameNEW"
portal_url: "" => "<computed>"
primary_shared_key: "<sensitive>" => "<sensitive>"
resource_group_name: "" => "azure-k8stest"
retention_in_days: "" => "<computed>"
secondary_shared_key: "<sensitive>" => "<sensitive>"
sku: "" => "PerGB2018"
tags.%: "" => "<computed>"
workspace_id: "" => "<computed>"
azurerm_log_analytics_workspace.test: Still creating... (10s elapsed)
azurerm_log_analytics_workspace.test: Still creating... (20s elapsed)
azurerm_log_analytics_workspace.test: Still creating... (30s elapsed)
azurerm_log_analytics_workspace.test: Still creating... (40s elapsed)
azurerm_log_analytics_workspace.test: Still creating... (50s elapsed)
azurerm_log_analytics_workspace.test: Still creating... (1m0s elapsed)
azurerm_log_analytics_workspace.test: Creation complete after 1m8s (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...paces/testloganalyticsworkspacenamenew)
azurerm_log_analytics_solution.test: Creating...
location: "" => "eastus"
plan.#: "" => "1"
plan.0.name: "" => "<computed>"
plan.0.product: "" => "OMSGallery/ContainerInsights"
plan.0.publisher: "" => "Microsoft"
resource_group_name: "" => "azure-k8stest"
solution_name: "" => "ContainerInsights"
workspace_name: "" => "testLogAnalyticsWorkspaceNameNEW"
workspace_resource_id: "" => "/subscriptions/d955c0ba-13dc-44cf-a29a-8fed74cbb22d/resourcegroups/azure-k8stest/providers/microsoft.operationalinsights/workspaces/testloganalyticsworkspacenamenew"
azurerm_kubernetes_cluster.k8s: Creating...
addon_profile.#: "" => "1"
addon_profile.0.oms_agent.#: "" => "1"
addon_profile.0.oms_agent.0.enabled: "" => "true"
addon_profile.0.oms_agent.0.log_analytics_workspace_id: "" => "/subscriptions/d955c0ba-13dc-44cf-a29a-8fed74cbb22d/resourcegroups/azure-k8stest/providers/microsoft.operationalinsights/workspaces/testloganalyticsworkspacenamenew"
agent_pool_profile.#: "" => "1"
agent_pool_profile.0.count: "" => "3"
agent_pool_profile.0.dns_prefix: "" => "<computed>"
agent_pool_profile.0.fqdn: "" => "<computed>"
agent_pool_profile.0.max_pods: "" => "<computed>"
agent_pool_profile.0.name: "" => "agentpool"
agent_pool_profile.0.os_disk_size_gb: "" => "30"
agent_pool_profile.0.os_type: "" => "Linux"
agent_pool_profile.0.type: "" => "AvailabilitySet"
agent_pool_profile.0.vm_size: "" => "Standard_DS1_v2"
dns_prefix: "" => "idj-k8stest"
fqdn: "" => "<computed>"
kube_admin_config.#: "" => "<computed>"
kube_admin_config_raw: "<sensitive>" => "<sensitive>"
kube_config.#: "" => "<computed>"
kube_config_raw: "<sensitive>" => "<sensitive>"
kubernetes_version: "" => "<computed>"
linux_profile.#: "" => "1"
linux_profile.0.admin_username: "" => "ubuntu"
linux_profile.0.ssh_key.#: "" => "1"
linux_profile.0.ssh_key.0.key_data: "" => "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8kZzEtk7J7Mvv4hJIE1jcQ0q6h41g5hUwPtOUPjNWPIKm4djmy4+C4+Gtsxxh5jUFooAbwl+DubFZogbU1Q5aLOGKSsD/K4XimTyOhr90DO47naCnaSS0Rg0XyZlvQsHKwcXGuGOleCMhB2gQ70QAK4X/N1dvGfqCDdKBbTORKQyz0WHWo7YGA6YAgtvzn1C5W0l7cT0AXgOfFEAGF31nqqTuRVBbBmosq1qhXJlVt+PO32MqmxZv44ZuCP1jWjyTz1rbQ1OLHCxP/+eDIlpOlkYop4XgwiHHMRn/rxHFTKOAxtFOccFw9KEnDM0j0M5FRBj5qU1BCa/6jhnu7LIz"
location: "" => "centralus"
name: "" => "idj-k8stest"
network_profile.#: "" => "<computed>"
node_resource_group: "" => "<computed>"
resource_group_name: "" => "azure-k8stest"
role_based_access_control.#: "" => "1"
role_based_access_control.0.azure_active_directory.#: "" => "1"
role_based_access_control.0.azure_active_directory.0.client_app_id: "" => "dc248705-0b14-4ff2-82c2-5b6a5260c62b"
role_based_access_control.0.azure_active_directory.0.server_app_id: "" => "f61ad37a-3a16-498b-ad8c-812c1a82d541"
role_based_access_control.0.azure_active_directory.0.server_app_secret: "<sensitive>" => "<sensitive>"
role_based_access_control.0.azure_active_directory.0.tenant_id: "" => "28c575f6-ade1-4838-8e7c-7e6d1ba0eb4a"
role_based_access_control.0.enabled: "" => "true"
service_principal.#: "" => "1"
service_principal.326775546.client_id: "" => "910e0b04-80fd-40ef-80d3-9921f9d96420"
service_principal.326775546.client_secret: "<sensitive>" => "<sensitive>"
tags.%: "" => "1"
tags.Environment: "" => "Development"
azurerm_log_analytics_solution.test: Creation complete after 1s (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...ghts(testLogAnalyticsWorkspaceNameNEW))
azurerm_kubernetes_cluster.k8s: Still creating... (10s elapsed)
azurerm_kubernetes_cluster.k8s: Still creating... (20s elapsed)
azurerm_kubernetes_cluster.k8s: Still creating... (30s elapsed)
azurerm_kubernetes_cluster.k8s: Still creating... (40s elapsed)
azurerm_kubernetes_cluster.k8s: Still creating... (50s elapsed)
azurerm_kubernetes_cluster.k8s: Still creating... (1m0s elapsed)
azurerm_kubernetes_cluster.k8s: Still creating... (1m10s elapsed)
azurerm_kubernetes_cluster.k8s: Still creating... (1m20s elapsed)
azurerm_kubernetes_cluster.k8s: Still creating... (1m30s elapsed)
azurerm_kubernetes_cluster.k8s: Still creating... (1m40s elapsed)
azurerm_kubernetes_cluster.k8s: Still creating... (1m50s elapsed)
…
azurerm_kubernetes_cluster.k8s: Still creating... (9m10s elapsed)
azurerm_kubernetes_cluster.k8s: Creation complete after 9m17s (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest)
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Outputs:
client_certificate =
client_key =
cluster_ca_certificate = LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUV5RENDQXJDZ0F3SUJBZ0lSQUk2bTg4dUxXK2JIY2paM2pNeXhuNDR3RFFZSktvWklodmNOQVFFTEJRQXcKRFRFTE1Ba0dBMVVFQXhNQ1kyRXdIaGNOTVRrd056QTBNakEwTnpVMldoY05ORGt3TmpJMk1qQTFOelUyV2pBTgpNUXN3Q1FZRFZRUURFd0pqWVRDQ0FpSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnSVBBRENDQWdvQ2dnSUJBTFo1CmFrZ2RWVWFVaDJVZTNnZ2ZZZkkrVWlVZEhwenFYcVdCcTROenB4UHdRV1hFSFB3Qm9HTEwvUDVibHdySHNrSVcKVnpBQ0xKTXJXVVFzWVVYYVFKd09lWXRBVnZ1TG5rcEQrQmRqZndITG5TUjN4TXRhbXRieUd3c3lXSk9ySEhlbgpiMDlnUjlMVVlqSE43dkZUMk9MUDVBcmh0L3pQc095QmZiZS9nK0xQQi92eVJkaGtxS2tRckxzNTZrRC82YVM2CmVDYkhrSmJPa3dQd0o2UnJVaUYxZmh4bjlYYmdQVHJFZHpCd0J5N1F3bWRkSlg1Qk1pZ01wc05sZ3NxS05TQjcKdFZPWVV5eVAySHBiTktwNUVWV3pac0x4ODVyanpLNGFKVU9MRDZiam5BSGZhZFVWbU8xaUNGM3BiOW9xanhFQQpQK3FYVGZra0o3akk0dHAyRGRZQytrbVlVM0o4M0VJNVk5K2Z0UGdjeU1WVmNENVgrNnZxRDI4eWQ1eXFZK01iCmRyM3htM2tPN2twdmRHVkJhZ0sxanc0Zyt5aWxLMmQzTU5ZWjVLN3BieWZ6VHNXZXdNb0lNZjVyL1J4ZDZLbi8KdFlnK1NndmZxRzdBb25BeEhFR1A1M2s5TU5xbFpEODM5SXFhcU5WUllOR29pRlpOdlI4N2g4a2U0eW4ySTNBZAorRDJIU0xnSHhjdGtuT2JiZ3k3eU82NWdpZ2lBZ0ZVMG12dnNQdnZHcG9tZG1UNTczNlhxRkhEelpJMkUyZGVDCkJLdnRkQjlnbUxSWEhRS1NUMnB2NC9vanJXSU1lZklNY00zZ0tmWmRJdHY3RVdRUTVGZzlFZ3VaTVdYbFZXSW4KZjBvZzVmU3IzNUIyb29DUmJnYkpIWkRpdzRoV0xWdDY2T1hOdVYvRkFnTUJBQUdqSXpBaE1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQ0FRQUlzMFVICktZZFBlMkFOUUNudzdwVi9LZmNnRkRYdmFBSGd3cy85OXN1Q1BrZTlGT2hGREwxQUJXVXFxbHhTZjUxeVdncHAKMkhiR3h1M2dOZDFXRHVrMnlwV0NrN01rclpMcDMybmpjSnlKSkZsYU9Bd3djR1NxdEl0VmtveWVESXlHUVRVVQp5VkdGSVlkY2syaHUva01xVlRwZFVNSThVdUNnYTVsQVdPQjdpNm01d2JJeGdOVTFCd3NNaGhud09qd0tlKy9ECnNPMFZaSGgzY1QrZ3luVVU1SEV2d3NvQVRVbWFWcmNDYkRLSVBtYmI4aTFuZDE1YSs1Y0dDSFFoK0Jpb3YwamgKSnR4Ti9mdVRZZ3YxZndndzhiYkZzTGh0ZTBwcHdmVkp0VElON3JqODlNajljcVNrVC9QRnk5eDI1Z2h1TnE2RgpHRW04U2RZd3BuRmJaK0FWZmlkekkwMXNDeG1qbnlMdG5nQVNVam1iNzFzUHlkekpFSzZTS1grZlJyZERHcG1CCm1xZTN4UUNPOTErajBpdEVwZ3pRdjB1Y2JzMjJ1R2loOXc4TjIwNFdtUlZyK0JnVjNJbXlFMmxCYzIzZVpsbVMKQk5sTHlQWXJrRHI3TEgwdmk2bjZpS3FNTTNvMFFtZzkzS3RueGd4cGptZEZZM1RZZlk1T0IxbnUxQmdFLzJEKwpEdWJFNGxWUFZ6UVJaOXBRdE4rY3VGWERNby96UFNwVitkc2dxZGxXbUlWVDlhSEtreDcva0xKKzJWSWxXL0ptCnR3Y1JXbG1pdkVvNXpONkpmYXBvOW0xOWxvemlNUS9VNDhjS1dEMlhuMkExWnF1VW9jZzA3Umczbyt3cUtTYVgKZ1NrVmVWOTdhM056Z0FCQi9FZU1YbDAwdlBiaFEvbnhHb0lnMnc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
cluster_password =
cluster_username = clusterUser_azure-k8stest_idj-k8stest
host = https://idj-k8stest-b5c326aa.hcp.centralus.azmk8s.io:443
kube_config = apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUV5RENDQXJDZ0F3SUJBZ0lSQUk2bTg4dUxXK2JIY2paM2pNeXhuNDR3RFFZSktvWklodmNOQVFFTEJRQXcKRFRFTE1Ba0dBMVVFQXhNQ1kyRXdIaGNOTVRrd056QTBNakEwTnpVMldoY05ORGt3TmpJMk1qQTFOelUyV2pBTgpNUXN3Q1FZRFZRUURFd0pqWVRDQ0FpSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnSVBBRENDQWdvQ2dnSUJBTFo1CmFrZ2RWVWFVaDJVZTNnZ2ZZZkkrVWlVZEhwenFYcVdCcTROenB4UHdRV1hFSFB3Qm9HTEwvUDVibHdySHNrSVcKVnpBQ0xKTXJXVVFzWVVYYVFKd09lWXRBVnZ1TG5rcEQrQmRqZndITG5TUjN4TXRhbXRieUd3c3lXSk9ySEhlbgpiMDlnUjlMVVlqSE43dkZUMk9MUDVBcmh0L3pQc095QmZiZS9nK0xQQi92eVJkaGtxS2tRckxzNTZrRC82YVM2CmVDYkhrSmJPa3dQd0o2UnJVaUYxZmh4bjlYYmdQVHJFZHpCd0J5N1F3bWRkSlg1Qk1pZ01wc05sZ3NxS05TQjcKdFZPWVV5eVAySHBiTktwNUVWV3pac0x4ODVyanpLNGFKVU9MRDZiam5BSGZhZFVWbU8xaUNGM3BiOW9xanhFQQpQK3FYVGZra0o3akk0dHAyRGRZQytrbVlVM0o4M0VJNVk5K2Z0UGdjeU1WVmNENVgrNnZxRDI4eWQ1eXFZK01iCmRyM3htM2tPN2twdmRHVkJhZ0sxanc0Zyt5aWxLMmQzTU5ZWjVLN3BieWZ6VHNXZXdNb0lNZjVyL1J4ZDZLbi8KdFlnK1NndmZxRzdBb25BeEhFR1A1M2s5TU5xbFpEODM5SXFhcU5WUllOR29pRlpOdlI4N2g4a2U0eW4ySTNBZAorRDJIU0xnSHhjdGtuT2JiZ3k3eU82NWdpZ2lBZ0ZVMG12dnNQdnZHcG9tZG1UNTczNlhxRkhEelpJMkUyZGVDCkJLdnRkQjlnbUxSWEhRS1NUMnB2NC9vanJXSU1lZklNY00zZ0tmWmRJdHY3RVdRUTVGZzlFZ3VaTVdYbFZXSW4KZjBvZzVmU3IzNUIyb29DUmJnYkpIWkRpdzRoV0xWdDY2T1hOdVYvRkFnTUJBQUdqSXpBaE1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQ0FRQUlzMFVICktZZFBlMkFOUUNudzdwVi9LZmNnRkRYdmFBSGd3cy85OXN1Q1BrZTlGT2hGREwxQUJXVXFxbHhTZjUxeVdncHAKMkhiR3h1M2dOZDFXRHVrMnlwV0NrN01rclpMcDMybmpjSnlKSkZsYU9Bd3djR1NxdEl0VmtveWVESXlHUVRVVQp5VkdGSVlkY2syaHUva01xVlRwZFVNSThVdUNnYTVsQVdPQjdpNm01d2JJeGdOVTFCd3NNaGhud09qd0tlKy9ECnNPMFZaSGgzY1QrZ3luVVU1SEV2d3NvQVRVbWFWcmNDYkRLSVBtYmI4aTFuZDE1YSs1Y0dDSFFoK0Jpb3YwamgKSnR4Ti9mdVRZZ3YxZndndzhiYkZzTGh0ZTBwcHdmVkp0VElON3JqODlNajljcVNrVC9QRnk5eDI1Z2h1TnE2RgpHRW04U2RZd3BuRmJaK0FWZmlkekkwMXNDeG1qbnlMdG5nQVNVam1iNzFzUHlkekpFSzZTS1grZlJyZERHcG1CCm1xZTN4UUNPOTErajBpdEVwZ3pRdjB1Y2JzMjJ1R2loOXc4TjIwNFdtUlZyK0JnVjNJbXlFMmxCYzIzZVpsbVMKQk5sTHlQWXJrRHI3TEgwdmk2bjZpS3FNTTNvMFFtZzkzS3RueGd4cGptZEZZM1RZZlk1T0IxbnUxQmdFLzJEKwpEdWJFNGxWUFZ6UVJaOXBRdE4rY3VGWERNby96UFNwVitkc2dxZGxXbUlWVDlhSEtreDcva0xKKzJWSWxXL0ptCnR3Y1JXbG1pdkVvNXpONkpmYXBvOW0xOWxvemlNUS9VNDhjS1dEMlhuMkExWnF1VW9jZzA3Umczbyt3cUtTYVgKZ1NrVmVWOTdhM056Z0FCQi9FZU1YbDAwdlBiaFEvbnhHb0lnMnc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://idj-k8stest-b5c326aa.hcp.centralus.azmk8s.io:443
name: idj-k8stest
contexts:
- context:
cluster: idj-k8stest
user: clusterUser_azure-k8stest_idj-k8stest
name: idj-k8stest
current-context: idj-k8stest
kind: Config
preferences: {}
users:
- name: clusterUser_azure-k8stest_idj-k8stest
user:
auth-provider:
config:
apiserver-id: f61ad37a-3a16-498b-ad8c-812c1a82d541
client-id: dc248705-0b14-4ff2-82c2-5b6a5260c62b
environment: AZUREPUBLICCLOUD
tenant-id: 28c575f6-ade1-4838-8e7c-7e6d1ba0eb4a
name: azure
Method 2: CLI
az aks create \
  --resource-group idj-azure-k8stest \
  --name k8stest \
  --generate-ssh-keys \
  --aad-server-app-id f61ad37a-3a16-498b-ad8c-812c1a82d541 \
  --aad-server-app-secret 'G*7zQE8?dntvN=czRtoU5V]O2G2VFbIl' \
  --aad-client-app-id dc248705-0b14-4ff2-82c2-5b6a5260c62b \
  --aad-tenant-id 28c575f6-ade1-4838-8e7c-7e6d1ba0eb4a
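Once the CLI-created cluster is up, pull the (non-admin) user credentials the same way as usual; add --admin if you want the admin context:
$ az aks get-credentials --resource-group idj-azure-k8stest --name k8stest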
Testing/Using the Cluster
First, test that the non-admin kubeconfig blocks us:
$ terraform output kube_config > ~/.kube/config
$ kubectl get pods
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code FWW6WHNSZ to authenticate.
Error from server (Forbidden): pods is forbidden: User "1f5d835c-b129-41e6-b2fe-5858a5f4e41a" cannot list resource "pods" in API group "" in the namespace "default"
Next, test that admin access is working:
$ rm ~/.kube/config
$ az aks get-credentials --resource-group azure-k8stest --name idj-k8stest --admin
Merged "idj-k8stest-admin" as current context in /Users/isaac.johnson/.kube/config
Next we need to add a user.
Then get the Object ID from AAD for the user you wish to add:
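You can also grab the Object ID from the CLI rather than the portal. A sketch (the UPN is hypothetical; older CLI versions use --upn-or-id and return objectId rather than id):
$ az ad user show --id isaac@example.com --query objectId -o tsv
1f5d835c-b129-41e6-b2fe-5858a5f4e41a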
You'll notice the Object ID of "1f5d835c-b129-41e6-b2fe-5858a5f4e41a" matches the user in the Forbidden error from our test, showing our cluster is interacting with AAD properly.
For the simple case, we're going to set my user up as a cluster-admin, one of the built-in default ClusterRoles.
$ cat rbac-aad-user.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: 1f5d835c-b129-41e6-b2fe-5858a5f4e41a
$ kubectl apply -f rbac-aad-user.yaml
clusterrolebinding.rbac.authorization.k8s.io/my-cluster-admins created
This time when I try, it succeeds.
$ kubectl get pods --all-namespaces
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code F8GY5W5WV to authenticate.
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-67fd67489b-j8zj8 1/1 Running 0 22m
kube-system coredns-67fd67489b-rvglr 1/1 Running 0 16m
kube-system coredns-autoscaler-f654c64fd-nfxxl 1/1 Running 0 22m
kube-system heapster-6d879b9dc8-nv5pk 2/2 Running 0 15m
kube-system kube-proxy-7zkvb 1/1 Running 0 17m
kube-system kube-proxy-t9cx9 1/1 Running 0 17m
kube-system kube-proxy-tc2qc 1/1 Running 0 17m
kube-system kubernetes-dashboard-7b55c6f7b9-nhglv 1/1 Running 1 22m
kube-system metrics-server-67c75dbf7-59kn4 1/1 Running 1 22m
kube-system omsagent-4fxlt 1/1 Running 0 17m
kube-system omsagent-rs-7fb57f975d-pn687 1/1 Running 0 22m
kube-system omsagent-sx8cl 1/1 Running 0 17m
kube-system omsagent-vwtpn 1/1 Running 0 17m
kube-system tunnelfront-84b877887-7ks42 1/1 Running 0 22m
Service Users
One activity you will want to address is non-AD access by build/CI/CD systems. To do so, we will want to define:
- A dedicated namespace for automation activities
- A service user as our designated actor
- A cluster role that narrowly defines what the actor can perform
- Lastly, a cluster role binding to tie the actor and the role.
First we create a namespace and a service account:
$ cat namespace_and_role.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: automation
  labels:
    name: automation
    role: developer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: automation-sa
  namespace: automation
  selfLink: /api/v1/namespaces/automation/serviceaccounts/automation-sa
$ kubectl apply -f namespace_and_role.yaml
namespace/automation created
serviceaccount/automation-sa created
Now you can fetch the token from the service account:
First we can double-check the service account details for the token name:
$ kubectl get sa automation-sa -n automation -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"automation-sa","namespace":"automation","selfLink":"/api/v1/namespaces/automation/serviceaccounts/automation-sa"}}
  creationTimestamp: "2019-07-22T03:13:58Z"
  name: automation-sa
  namespace: automation
  resourceVersion: "28331664"
  selfLink: /api/v1/namespaces/automation/serviceaccounts/automation-sa
  uid: bf46398e-ac2e-11e9-a39d-aa461e618eaf
secrets:
- name: automation-sa-token-jf7qw
We can now fetch the token:
kubectl -n automation get secret $(kubectl -n automation get secret | grep automation-sa | awk '{print $1}') -o json | jq -r '.data.token' | base64 -D
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJhdXRvbWF0aW9uIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3GciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJhdXRvbWF0aW9uIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3GciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJhdXRvbWF0aW9uIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3
Testing the token:
$ export MYTOKEN=`kubectl -n automation get secret $(kubectl -n automation get secret | grep automation-sa | awk '{print $1}') -o json | jq -r '.data.token' | base64 -D`
$ rm ~/.kube/config
$ kubectl get pods -n automation --server=https://uscd-dev4-543b11b2.hcp.centralus.azmk8s.io:443 --token=$MYTOKEN
Unable to connect to the server: x509: certificate signed by unknown authority
If you get that error, you’ll likely need to skip TLS verification as the signing authority isn’t known to your host.
$ kubectl get pods -n automation --server=https://uscd-dev4-543b11b2.hcp.centralus.azmk8s.io:443 --token=$MYTOKEN --insecure-skip-tls-verify="true"
No resources found.
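Rather than skipping verification, you can also pull the cluster CA out of the same token secret (you’ll need the admin kubeconfig back for the extraction step) and pass it explicitly; a sketch:
# Extract the CA cert bundled in the service account token secret
$ kubectl -n automation get secret $(kubectl -n automation get secret | grep automation-sa | awk '{print $1}') -o jsonpath='{.data.ca\.crt}' | base64 -D > automation-ca.crt
$ kubectl get pods -n automation --server=https://uscd-dev4-543b11b2.hcp.centralus.azmk8s.io:443 --token=$MYTOKEN --certificate-authority=automation-ca.crt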
As for tightening controls, log in with admin credentials again; this time let’s create an automation ClusterRole:
$ cat devops-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: automation-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - deployments
  - pods
  # log access is the pods/log subresource, not a "logs" verb
  - pods/log
  - nodes
  - services
  - replicasets
  - daemonsets
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - create
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - deployments
  verbs:
  - create
  - get
  - list
  - watch
And then bind it to our service account:
$ cat role_and_binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: automation-sa-rolebinding
  namespace: automation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: automation-clusterrole
subjects:
- kind: ServiceAccount
  name: automation-sa
  namespace: automation
- kind: User
  name: system:serviceaccount:automation:automation-sa
  apiGroup: rbac.authorization.k8s.io
$ kubectl apply -f role_and_binding.yaml
clusterrole.rbac.authorization.k8s.io/automation-clusterrole created
rolebinding.rbac.authorization.k8s.io/automation-sa-rolebinding created
Now future queries should be limited to the abilities in our role definition.
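You can sanity-check the new role without juggling kubeconfigs by impersonating the service account from the admin context; for example:
# Should be allowed by automation-clusterrole
$ kubectl auth can-i list pods -n automation --as=system:serviceaccount:automation:automation-sa
yes
# Should be denied (secrets are not in the role)
$ kubectl auth can-i create secrets -n automation --as=system:serviceaccount:automation:automation-sa
no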
Cleanup
$ terraform destroy
azurerm_resource_group.k8s: Refreshing state... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-8fed74cbb22d/resourceGroups/azure-k8stest)
azurerm_log_analytics_workspace.test: Refreshing state... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...paces/testloganalyticsworkspacenamenew)
azurerm_log_analytics_solution.test: Refreshing state... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...ghts(testLogAnalyticsWorkspaceNameNEW))
azurerm_kubernetes_cluster.k8s: Refreshing state... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest)
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
- azurerm_kubernetes_cluster.k8s
- azurerm_log_analytics_solution.test
- azurerm_log_analytics_workspace.test
- azurerm_resource_group.k8s
Plan: 0 to add, 0 to change, 4 to destroy.
Do you really want to destroy?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
azurerm_kubernetes_cluster.k8s: Destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest)
azurerm_log_analytics_solution.test: Destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...ghts(testLogAnalyticsWorkspaceNameNEW))
azurerm_log_analytics_solution.test: Destruction complete after 1s
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 10s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 20s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 30s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 40s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 50s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 1m0s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 1m10s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 1m20s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 1m30s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 1m40s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 1m50s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 2m0s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 2m10s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 2m20s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 2m30s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 2m40s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 2m50s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 3m0s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 3m10s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 3m20s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 3m30s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 3m40s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 3m50s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 4m0s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 4m10s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 4m20s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 4m30s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 4m40s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 4m50s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 5m0s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 5m10s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 5m20s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 5m30s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 5m40s elapsed)
azurerm_kubernetes_cluster.k8s: Still destroying... (ID: /subscriptions/d955c0ba-13dc-44cf-a29a-...nerService/managedClusters/idj-k8stest, 5m50s elapsed)
Summary
In this AKS tutorial we dug into AKS cluster setup with AAD RBAC, covering both Terraform and the Azure CLI. We showed usage of admin credentials and how to tie the cluster-admin role to a named AD account. We then explored service accounts and their usage. This provides a great way to offload the management of a cluster; for instance, you can refer to an AD group of users by using a Group subject instead of a User, e.g.
$ cat rbac-aad-user.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: 1f5d835c-b129-41e6-b2fe-5858a5f4e41a
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: 82d900dd-a234-667e-1a3d-477ed87ee1a4
$ kubectl apply -f rbac-aad-user.yaml
clusterrolebinding.rbac.authorization.k8s.io/my-cluster-admins created
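If you need the Object ID for that Group subject, the CLI can fetch it too; a sketch (the group name is hypothetical, and older CLI versions return objectId while newer ones return id):
$ az ad group show --group "AKS Cluster Admins" --query objectId -o tsv
82d900dd-a234-667e-1a3d-477ed87ee1a4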