
bug: AccessKeyInvalid while packaging cloudformation template #12933

@JosephBrooksbank

Description

UPDATE: Solution

Pinning awscli to 1.41.8 has solved the issue; 1.41.9 and above appear to cause it, and it is not related to LocalStack specifically. In more recent versions I see this in the debug logs:

2025-07-30 18:09:45,712 - MainThread - botocore.configprovider - DEBUG - Looking for endpoint for s3 via: environment_service
2025-07-30 18:09:45,712 - MainThread - botocore.configprovider - DEBUG - Looking for endpoint for s3 via: environment_global
2025-07-30 18:09:45,712 - MainThread - botocore.configprovider - DEBUG - Looking for endpoint for s3 via: config_service
2025-07-30 18:09:45,712 - MainThread - botocore.configprovider - DEBUG - Looking for endpoint for s3 via: config_global
2025-07-30 18:09:45,713 - MainThread - botocore.configprovider - DEBUG - No configured endpoint found.

and then it falls back to default endpoint resolution (i.e., real AWS). I'll close this issue.
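If a newer awscli is needed, one possible workaround (an untested sketch; the AWS_ENDPOINT_URL* settings are what the environment_service and environment_global lookups above search for) would be to configure the S3 endpoint explicitly in the deploy container:

export AWS_ENDPOINT_URL="http://host.docker.internal:4566"     # global configured endpoint (environment_global lookup)
export AWS_ENDPOINT_URL_S3="http://host.docker.internal:4566"  # service-specific endpoint for s3 (environment_service lookup)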

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Description of Issue

Something has changed recently that causes aws[local] cloudformation package to fail with:

Unable to upload artifact ManifestValidator referenced by CodeUri parameter of ManifestValidator resource.
An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.

I run this in a Docker container. It ran fine until some new team members indicated they were having trouble; at that point I cleared out my images and recreated them, and now I'm also seeing the error. Other awslocal commands, like awslocal s3 ls, work fine.

The exact command I'm running is:

samlocal build -t /aws-inf/template.yaml;
awslocal cloudformation package --template .aws-sam/build/template.yaml --s3-bucket authz-service-lambdas --s3-prefix code --region us-east-1 --output-template packaged_template.yaml;

I've also tried:

aws --endpoint-url=http://host.docker.internal:4566 cloudformation package --template .aws-sam/build/template.yaml --s3-bucket authz-service-lambdas --s3-prefix code --region us-east-1 --output-template packaged_template.yaml

To no avail.
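For reference, adding the CLI's --debug flag to the failing command prints botocore's endpoint resolution, which is one way to capture the botocore.configprovider lines quoted in the update above:

awslocal cloudformation package --debug --template .aws-sam/build/template.yaml --s3-bucket authz-service-lambdas --s3-prefix code --region us-east-1 --output-template packaged_template.yaml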

Versions

aws-cli: 1.41.16
sam: 1.142.1 (was working previously)
LOCALSTACK_BUILD_VERSION: 4.6.1.dev28 (Community)

Files

Dockerfile
FROM node:20-alpine3.22
RUN apk add \
    curl \
    build-base \
    python3 \
    python3-dev \
    py3-pip \
    py3-setuptools \
    py3-wheel \
    && pip3 install --upgrade pip --break-system-packages \
    && pip3 install awscli --break-system-packages \
    && pip3 install aws-sam-cli --break-system-packages \
    && pip3 install awscli-local[ver1] --break-system-packages \
    && rm -rf /var/cache/apk/*
ENV AWS_ACCESS_KEY_ID="test"
ENV AWS_SECRET_ACCESS_KEY="test"
ENV AWS_DEFAULT_REGION="us-east-1"
ENV LOCALSTACK_HOST="host.docker.internal:4566"
COPY aws-inf/template.local.yaml /aws-inf/template.yaml
COPY lambdas /lambdas
COPY ./localstack-deploy/deploy.sh /deploy.sh
COPY ./localstack-deploy/seed-data /seed-data

RUN sed -i 's/\r$//' deploy.sh && chmod +x deploy.sh

CMD ./deploy.sh
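
Per the update at the top, pinning awscli resolved this; a minimal sketch of the changed install line in the Dockerfile above (the version pin comes from the update, otherwise untested):

    && pip3 install "awscli==1.41.8" --break-system-packages \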

Deploy script
#!/bin/sh
# check localstack every 3 seconds until a successful response
while true
do
  if awslocal s3 ls; then
    num_bucket=$(awslocal s3 ls)
    break
  fi
  echo "waiting for localstack to be ready..."
  sleep 3
done
echo "Deploying Stack..."
sam --version
awslocal --version
awslocal s3 mb s3://authz-service-lambdas;\
samlocal build -t /aws-inf/template.yaml;\
awslocal cloudformation package --template .aws-sam/build/template.yaml --s3-bucket authz-service-lambdas --s3-prefix code --region us-east-1 --output-template packaged_template.yaml;\
awslocal cloudformation deploy --template-file packaged_template.yaml --stack-name authz-service-local --capabilities CAPABILITY_IAM --parameter-overrides EnvironmentName=local
(Relevant) Compose file
version: "3.8"

services:

localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack
ports:
- "127.0.0.1:4566:4566"
- "127.0.0.1:4510-4559:4510-4559"
environment:
- DEBUG=${DEBUG-}
- DOCKER_HOST=unix:///var/run/docker.sock
- ACTIVATE_PRO=0
volumes:
- "${LOCALSTACK_VOLUME_DIR:-./docker-data-stores/localstack-volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"

localstack-data:
container_name: localstack-data
build:
context: .
dockerfile: ./localstack-deploy/LocalStack-Deploy.dockerfile


Any guidance would be appreciated!

Expected Behavior

cloudformation package works as expected with no InvalidAccessKeyId error

How are you starting LocalStack?

With a docker-compose file

Steps To Reproduce

How are you starting localstack (e.g., bin/localstack command, arguments, or docker-compose.yml)

docker compose up --build

Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)

samlocal build -t /aws-inf/template.yaml;\
awslocal cloudformation package --template .aws-sam/build/template.yaml --s3-bucket authz-service-lambdas --s3-prefix code --region us-east-1 --output-template packaged_template.yaml;\
awslocal cloudformation deploy --template-file packaged_template.yaml --stack-name authz-service-local --capabilities CAPABILITY_IAM --parameter-overrides EnvironmentName=local

Environment

- OS: Alpine 3.22
- LocalStack:
  LocalStack version:  4.6.1.dev28
  LocalStack Docker image sha: sha256:ed6fc67d1a82db56c4c36c4094c3d63a09e8eabacc4064818327396ed5e37680
  LocalStack build date:
  LocalStack build git hash:

Anything else?

No response
