AWS EKS: "You must be logged in to the server (Unauthorized)"
Encountering the "You must be logged in to the server (Unauthorized)" error when accessing AWS EKS clusters means your `kubectl` client lacks proper authentication or authorization to interact with the cluster API. This guide provides practical steps to resolve it.
What This Error Means
This error message, "You must be logged in to the server (Unauthorized)", is a direct indication that your kubectl client cannot successfully authenticate with your Amazon Elastic Kubernetes Service (EKS) cluster's API server. When you run a kubectl command, it attempts to connect and present credentials to the EKS control plane. If the API server responds with an HTTP 401 Unauthorized status, it means the identity you're presenting is either invalid, expired, or simply not recognized by the cluster's authentication mechanism.
It's crucial to understand that this isn't typically a network connectivity issue (like "connection refused"). Instead, it confirms that kubectl reached the EKS API server, but your attempt to prove who you are (authentication) or what you're allowed to do (authorization) failed. In my experience, this is one of the most common hurdles new users face, and even seasoned engineers sometimes stumble upon it after credentials expire or contexts switch.
Why It Happens
At its core, kubectl authentication with AWS EKS clusters relies on AWS Identity and Access Management (IAM). Unlike a standard Kubernetes cluster where you might use client certificates or service account tokens, EKS integrates with AWS IAM to manage access. Here's the simplified flow:
- You run a `kubectl` command (e.g., `kubectl get pods`).
- `kubectl` needs an authentication token for the EKS API server. It uses an exec plugin (typically provided by the AWS CLI v2) to generate a temporary, signed token from AWS IAM.
- This token is then presented to the EKS API server.
- The EKS API server validates the token with AWS IAM.
- If valid, the API server maps the underlying IAM principal (user or role) to a Kubernetes user and groups, which are then checked against Kubernetes Role-Based Access Control (RBAC) policies and, crucially, the `aws-auth` ConfigMap within the `kube-system` namespace.
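For reference, the exec plugin wiring that `aws eks update-kubeconfig` writes into `~/.kube/config` looks roughly like the following; the cluster name, region, and account ID here are placeholders, and the exact argument order may vary by AWS CLI version:

```yaml
# Abbreviated user entry from ~/.kube/config (placeholder values)
users:
- name: arn:aws:eks:us-east-1:123456789012:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - --region
        - us-east-1
        - eks
        - get-token
        - --cluster-name
        - my-cluster
        - --output
        - json
```

Every `kubectl` invocation runs this `aws eks get-token` call under the hood, which is why broken local AWS credentials surface as a Kubernetes "Unauthorized" error.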
The "Unauthorized" error typically occurs when one of these steps breaks:
* Token Generation Failure: The exec-plugin can't generate a valid token because your local AWS credentials are missing, incorrect, or expired.
* IAM Identity Mismatch: The IAM user or role used to generate the token is not mapped within the EKS cluster's aws-auth ConfigMap to a Kubernetes user/group, or if it is, it lacks the necessary Kubernetes RBAC permissions.
* Incorrect Context: Your kubectl is pointing to the wrong cluster or an invalid configuration.
Common Causes
Let's break down the most frequent culprits leading to this "Unauthorized" message:
- Expired or Incorrect AWS Credentials: This is by far the leading cause. AWS temporary credentials (e.g., from an `aws sso login` session, an assumed role, or a CI/CD pipeline) have a limited lifespan. If they expire and you haven't refreshed them, the `aws` CLI can no longer generate a valid EKS authentication token. Similarly, if your `~/.aws/credentials` file is misconfigured or you're using the wrong AWS profile, you'll hit this wall.
- `kubeconfig` Is Out-of-Date or Incorrect: Your local `kubeconfig` file (typically `~/.kube/config`) tells `kubectl` how to connect to your EKS cluster. If this file hasn't been updated with the correct EKS cluster details, or if it points to a cluster that no longer exists or whose endpoint has changed, you'll face this issue. Often, I see developers forget to run `aws eks update-kubeconfig` after a cluster is created or after switching AWS regions/accounts.
- IAM User/Role Not Mapped in the `aws-auth` ConfigMap: The EKS cluster's `aws-auth` ConfigMap (residing in the `kube-system` namespace) is the bridge between AWS IAM identities and Kubernetes RBAC. If the specific IAM user or role you're using isn't listed in this ConfigMap, or if it's mapped to a Kubernetes group without sufficient permissions (e.g., not `system:masters` or an equivalent custom role), you'll be unauthorized. The IAM identity that created the EKS cluster is automatically granted `system:masters` access, but subsequent users/roles need explicit mapping.
- Incorrect AWS Region: Your AWS CLI configuration, or the region specified when updating your `kubeconfig`, might not match the region where your EKS cluster resides. If `kubectl` tries to generate a token for a cluster in `us-east-1` but your CLI is configured for `us-west-2`, it will fail.
- Outdated `aws` CLI Version or Missing exec Plugin: Modern EKS authentication relies on the `aws` CLI (v2) acting as `kubectl`'s exec plugin. If you have an older version of the AWS CLI, or it's not correctly installed and on your PATH, `kubectl` won't be able to call it to generate the necessary token. While older setups used `aws-iam-authenticator`, the current standard integrates this directly into the AWS CLI.
- Clock Skew: A less common but important cause. Significant time differences between your client machine and AWS servers can invalidate temporary authentication tokens, leading to authorization failures.
Step-by-Step Fix
Let's walk through the troubleshooting steps to get you back into your EKS cluster.
1. Verify Your AWS CLI Configuration and Credentials

   First, confirm that your AWS CLI is correctly configured and has valid credentials for the AWS account where your EKS cluster lives.

   ```bash
   aws configure list
   aws sts get-caller-identity
   ```

   The `aws sts get-caller-identity` command shows the IAM user or role currently configured for your AWS CLI. Ensure this is the identity you expect to use.

   If you're using AWS SSO, make sure your session is active:

   ```bash
   aws sso login
   ```

   If your credentials have expired, or `get-caller-identity` fails, you need to refresh them. This might involve setting environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`) or re-logging in via SSO.
2. Update Your `kubeconfig` for EKS

   This is often the magical fix. The `aws eks update-kubeconfig` command fetches the latest cluster endpoint and certificate data and, crucially, configures the exec plugin in your `kubeconfig` to use the `aws` CLI for generating EKS authentication tokens.

   Always specify the cluster name and region:

   ```bash
   aws eks update-kubeconfig --name <your-cluster-name> --region <your-aws-region>
   ```

   For example:

   ```bash
   aws eks update-kubeconfig --name my-production-cluster --region us-east-1
   ```

   If you manage multiple clusters or profiles, you might want to add an `--alias` to create a more descriptive context name, or use `--profile` to specify a particular AWS profile from `~/.aws/credentials`.
3. Check Your `kubectl` Context

   After updating your `kubeconfig`, verify that `kubectl` is now pointing to the correct context.

   ```bash
   kubectl config current-context
   kubectl config get-contexts
   ```

   If the current context isn't the one you expect, switch to the correct one:

   ```bash
   kubectl config use-context <name-from-get-contexts>
   ```
4. Verify IAM Permissions and the `aws-auth` ConfigMap

   If the previous steps didn't resolve the issue, it's highly likely a problem with how your IAM identity is mapped within the EKS cluster. The user or role you are using must be listed in the `aws-auth` ConfigMap.

   This step requires an already authorized user (e.g., the original cluster creator or someone with `system:masters` access) to execute. If you are the original creator and still hitting "Unauthorized," re-check steps 1-3.

   First, inspect the `aws-auth` ConfigMap:

   ```bash
   kubectl get configmap aws-auth -n kube-system -o yaml
   ```

   Look for the `mapUsers` and `mapRoles` sections. Your `aws sts get-caller-identity` output (from step 1) should correspond to an entry here.

   Example `mapUsers` entry:

   ```yaml
   apiVersion: v1
   data:
     mapUsers: |
       - userarn: arn:aws:iam::123456789012:user/dev-user
         username: dev-user
         groups:
           - system:masters
   kind: ConfigMap
   metadata:
     name: aws-auth
     namespace: kube-system
   ```

   If your IAM user/role is missing or incorrectly configured, the authorized user can add it. The easiest way is often `eksctl` (if you're using it to manage your clusters) or directly editing the ConfigMap (with extreme caution).

   Using `eksctl` to add an IAM user (recommended if `eksctl` is used):

   ```bash
   eksctl create iamidentitymapping \
     --cluster <your-cluster-name> \
     --region <your-aws-region> \
     --arn arn:aws:iam::<your-account-id>:user/your-iam-username \
     --username your-kubernetes-username \
     --group system:masters  # Or a more restrictive group
   ```

   Using `eksctl` to add an IAM role:

   ```bash
   eksctl create iamidentitymapping \
     --cluster <your-cluster-name> \
     --region <your-aws-region> \
     --arn arn:aws:iam::<your-account-id>:role/your-iam-role-name \
     --username your-kubernetes-username \
     --group system:masters
   ```

   Replace the placeholders (`<your-cluster-name>`, `<your-aws-region>`, `<your-account-id>`, `your-iam-username`, `your-iam-role-name`, and `your-kubernetes-username`). Remember, `system:masters` grants full administrative access. For production, I generally recommend mapping to more specific RBAC roles.
5. Ensure the `aws` CLI (v2) Is Installed and Up-to-Date

   The exec plugin for EKS authentication is integrated into AWS CLI version 2. If you're using an older version, or it's not correctly installed, token generation will fail.

   Check your version:

   ```bash
   aws --version
   ```

   Ensure it reports at least `aws-cli/2.x.x`. If not, upgrade it.
Code Examples
Here are some ready-to-use code snippets for common tasks:
1. Refreshing your kubeconfig and setting context:

```bash
# Replace with your cluster name and region
EKS_CLUSTER_NAME="my-prod-cluster"
AWS_REGION="us-west-2"

# Update your kubeconfig. This will add/update a context in ~/.kube/config
aws eks update-kubeconfig --name "${EKS_CLUSTER_NAME}" --region "${AWS_REGION}"

# Optionally, if you have multiple profiles configured in ~/.aws/credentials:
# aws eks update-kubeconfig --name "${EKS_CLUSTER_NAME}" --region "${AWS_REGION}" --profile my-dev-profile

# Verify current context (should now be your EKS cluster)
kubectl config current-context

# Test access
kubectl get nodes
```
2. Checking your active AWS IAM identity:

```bash
aws sts get-caller-identity
```
3. Inspecting the EKS aws-auth ConfigMap:
(Requires an authorized user to run this command successfully)

```bash
kubectl get configmap aws-auth -n kube-system -o yaml
```
4. Adding an IAM user mapping to aws-auth using eksctl:
(Requires eksctl to be installed and an authorized user)

```bash
# Replace with your details
EKS_CLUSTER_NAME="my-prod-cluster"
AWS_REGION="us-west-2"
IAM_USER_ARN="arn:aws:iam::123456789012:user/dev-user-bob"
K8S_USERNAME="bob"           # The Kubernetes username that will be mapped
K8S_GROUPS="system:masters"  # Or a more specific group like 'developers'

eksctl create iamidentitymapping \
  --cluster "${EKS_CLUSTER_NAME}" \
  --region "${AWS_REGION}" \
  --arn "${IAM_USER_ARN}" \
  --username "${K8S_USERNAME}" \
  --group "${K8S_GROUPS}"

# After adding, dev-user-bob should be able to run `aws eks update-kubeconfig`
# and then `kubectl get nodes`
```
Environment-Specific Notes
The "Unauthorized" error can manifest differently or require specific considerations based on your environment.
- Local Development:
  - Multiple AWS Profiles: I've seen this frequently. Ensure you are explicitly using the correct AWS profile via `export AWS_PROFILE=my-profile` or by adding `--profile my-profile` to your `aws eks update-kubeconfig` command. Your `~/.aws/credentials` and `~/.aws/config` files are critical here.
  - Expired MFA: If your IAM user requires MFA, ensure your SSO session or temporary credentials obtained via `aws sts get-session-token` (with MFA) are fresh.
- CI/CD Pipelines (e.g., Jenkins, GitLab CI, GitHub Actions):
  - Dedicated IAM Role: Best practice is to use a dedicated IAM role for the CI/CD agent (or IRSA, if the agent itself runs on EKS). Ensure this role has the `eks:DescribeCluster` permission (which `aws eks update-kubeconfig` requires) and, most importantly, is mapped in the EKS cluster's `aws-auth` ConfigMap.
  - Temporary Credentials: Pipelines often use temporary credentials. Verify that the credentials obtained by the pipeline's execution role haven't expired before `kubectl` commands are run. Clock differences between the CI/CD agent and AWS services can also cause issues.
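As a sketch of the CI/CD pattern, a GitHub Actions job using OIDC federation might look like this; the role ARN, cluster name, and region are placeholders, and it assumes the OIDC trust relationship and the `aws-auth` mapping for the role are already in place:

```yaml
# Hypothetical workflow: authenticate via OIDC, then talk to EKS
name: deploy
on: push
permissions:
  id-token: write   # Required for OIDC token exchange
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-eks-deployer
          aws-region: us-east-1
      - name: Verify cluster access
        run: |
          aws eks update-kubeconfig --name my-cluster --region us-east-1
          kubectl get nodes
```

If the `kubectl get nodes` step here fails with "Unauthorized" while the credentials step succeeds, the `ci-eks-deployer` role is almost certainly missing from the cluster's `aws-auth` ConfigMap.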
- Docker Containers:
  - Credential Passing: If you're running `kubectl` inside a Docker container, you must ensure AWS credentials are correctly passed into the container, either via environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`) or by mounting the `~/.aws` directory into the container. Without these, the `aws` CLI inside the container cannot generate the EKS token.
- EC2 Instances (IAM Roles):
  - If you're running `kubectl` on an EC2 instance, it will automatically use the IAM role attached to the instance profile. Ensure this instance role has the `eks:DescribeCluster` permission and is mapped in the EKS cluster's `aws-auth` ConfigMap. This is a very clean way to manage access, but it still requires the `aws-auth` mapping.
- Multi-Account Setups:
  - When accessing an EKS cluster in a different AWS account, you'll typically assume a role in the target account. Ensure your `aws eks update-kubeconfig` command uses the assumed role's credentials (e.g., via a profile that assumes a role). The assumed role itself must be mapped in the target cluster's `aws-auth` ConfigMap.
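A minimal sketch of the cross-account setup: a profile in `~/.aws/config` that assumes a role in the target account (account ID, role name, and region below are placeholders):

```ini
# ~/.aws/config (hypothetical values)
[profile target-eks-admin]
role_arn = arn:aws:iam::222222222222:role/eks-admin
source_profile = default
region = us-east-1
```

You would then run `aws eks update-kubeconfig` with `--profile target-eks-admin`, and the `eks-admin` role ARN is what must appear in the target cluster's `aws-auth` ConfigMap, not your source-account identity.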
Frequently Asked Questions
Q: Why does aws sts get-caller-identity work, but kubectl get nodes still fails with "Unauthorized"?
A: aws sts get-caller-identity successfully verifies your local AWS credentials. This means you can talk to the AWS API. However, kubectl get nodes requires two things: valid AWS credentials to generate an EKS token and for the IAM identity associated with those credentials to be explicitly mapped to a Kubernetes user/group within the EKS cluster's aws-auth ConfigMap, with sufficient RBAC permissions. Your AWS credentials might be perfect, but the EKS cluster just doesn't know who you are in its Kubernetes context.
Q: I'm using eksctl to manage my clusters. Is aws eks update-kubeconfig still necessary?
A: eksctl typically updates your kubeconfig automatically when you create or interact with clusters (e.g., eksctl get clusters or eksctl create cluster). However, if you're experiencing this "Unauthorized" error, explicitly running aws eks update-kubeconfig --name <cluster-name> --region <region> can sometimes resolve issues by forcing a refresh and ensuring the correct exec-plugin configuration is in place. It's a good first diagnostic step regardless of whether you primarily use eksctl.
Q: Can I use an IAM user without granting them system:masters access?
A: Absolutely, and it's highly recommended for production environments following the principle of least privilege. Instead of system:masters, you can map IAM users or roles to custom Kubernetes RBAC roles. First, define your custom Role or ClusterRole and RoleBinding or ClusterRoleBinding within Kubernetes, then map the IAM identity in aws-auth to a Kubernetes group that is bound to your custom role. This allows fine-grained control over what specific users or roles can do within the cluster.
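As a sketch of this least-privilege pattern (all names here are illustrative): define a namespaced `Role` and bind it to a Kubernetes group, then map the IAM identity in `aws-auth` to that group instead of `system:masters`.

```yaml
# Read-only access to pods in the "dev" namespace (illustrative names)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: dev
subjects:
  - kind: Group
    name: dev-readers   # The group referenced from aws-auth
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

In `aws-auth`, the IAM user or role would then be mapped with `groups: [dev-readers]`, giving it exactly the permissions granted by `pod-reader` and nothing more.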
Q: My kubeconfig keeps getting overwritten or becomes messy with multiple clusters. How do I manage this?
A: aws eks update-kubeconfig by default adds or updates the context in ~/.kube/config. To manage multiple clusters more cleanly:
1. Use the --alias flag: aws eks update-kubeconfig --name clusterA --region us-east-1 --alias my-dev-clusterA and aws eks update-kubeconfig --name clusterB --region us-west-2 --alias my-prod-clusterB. This creates distinct context names.
2. Use separate kubeconfig files: Generate kubeconfig files into different locations (e.g., aws eks update-kubeconfig --name my-cluster --region us-east-1 --kubeconfig ~/.kube/config-my-cluster). Then, specify which file to use with the KUBECONFIG environment variable (e.g., KUBECONFIG=~/.kube/config-my-cluster kubectl get nodes) or by merging them.