March 12, 2026 · Cloud Security · 22 min read

Understanding AWS Public Resources: What's Actually Exposed and What to Fix First

If you've ever run a security scan on your AWS account, you've probably seen a wall of findings screaming about "publicly accessible" resources. RDS is public! OpenSearch is public! SQS is public!

But here's the thing. Most of those aren't actually exposed in the way you think.

By design, AWS is a collection of web services. Every service is an API, built to be accessed over the internet. That's the whole point. You call an API, you get a response. VPCs exist to carve out private network space for things like EC2 instances and databases, but the core AWS services (S3, DynamoDB, SQS, KMS, and dozens more) are accessed via public API endpoints. That's not a flaw. That's the architecture. The security comes from the fact that every request to these APIs must be signed with valid credentials. That said, for sensitive services like KMS and Secrets Manager, you can route API traffic through VPC interface endpoints so it stays on the AWS backbone and never traverses the public internet.

So when a security scanner flags these services as "public," it's technically correct. But there's a big difference between a resource being reachable on the internet and a resource being accessible without credentials. Understanding this distinction is the key to focusing your security efforts where they actually matter.

The Misconception

When AWS says a resource is "publicly accessible," it usually means one of two things:

  1. The resource has a public IP or DNS name that resolves from the internet
  2. A resource policy includes Principal: "*"

Neither of these automatically means someone can walk in and grab your data. Most AWS services require every API request to be signed with valid IAM credentials (SigV4). Without those credentials, you get a 403 Forbidden, even if the endpoint is technically public.

So when your security tool flags DynamoDB or SQS or KMS as "public," take a breath. It's worth investigating, but it's probably not the fire you think it is.

Where the Real Risk Lives

The services you should actually worry about are the ones where no authentication is required at all, or where authentication is handled by your application code (which may or may not exist).

Internet-Facing Load Balancers (ALB / NLB)

This is the number one attack surface for most AWS workloads. When you set an ALB to internet-facing, anyone on the internet can send requests to it. AWS doesn't authenticate those requests. Your application behind the ALB is the only thing standing between the internet and your data.

If your app has an auth bug, a misconfigured endpoint, or an unauthenticated health check that leaks info, that's a real problem.

API Gateway Without an Authorizer

API Gateway endpoints are public by default. That's fine if you've configured an authorizer (IAM, Cognito, or a Lambda authorizer). But if you haven't? Every route is callable by anyone with a curl command.

This is surprisingly common. Someone spins up an API for testing, forgets to add auth, and it stays that way in production.

Lambda Function URLs with No Auth

Lambda Function URLs are a convenient way to expose a function over HTTP. But if the auth type is set to NONE, there's literally zero authentication. Anyone who discovers the URL can invoke your function.

S3 Buckets with Public Policies

This one has been in the news enough times. A bucket policy with Principal: "*" and no conditions means anyone can read (or write) your data without any credentials. AWS has added multiple layers of protection (public access blocks, account-level settings), but misconfigurations still happen.
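The "anyone can read your data" condition can be checked mechanically. Here's a rough sketch of the heuristic: an Allow statement with a wildcard principal and no conditions grants anonymous access. The bucket name and policy are made up for illustration, and real-world policy analysis (which Access Analyzer does properly) has to handle far more edge cases.

```python
import json

def allows_anonymous_access(policy_json: str) -> bool:
    """Rough heuristic: True if any Allow statement grants access to
    every principal with no conditions attached."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a lone statement may not be list-wrapped
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        is_wildcard = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if is_wildcard and not stmt.get("Condition"):
            return True
    return False

# Hypothetical public-read bucket policy
public_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})
print(allows_anonymous_access(public_policy))  # True
```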

EC2 Running a Web Application

An EC2 instance with a public IP, an open security group on port 80/443, and a web application. That's a classic setup. AWS has no role in authenticating traffic to your web server. It's all on your app.

The Medium Risk Tier

Some services are network-reachable from the internet but still require credentials, just not AWS credentials.

RDS, Aurora, and Redshift with PubliclyAccessible=true are reachable on the network, but you still need a database username and password to connect. The risk here is brute force attacks, credential stuffing, or leaked credentials. It's not great, but it's not the same as zero auth.

EKS with a public API server endpoint requires a valid kubeconfig to do anything. It's exposed, but not unauthenticated.

These are worth fixing. You generally don't want databases reachable from the internet. But they're not the same severity as a wide-open ALB.

The "Looks Scary, Isn't Really" Tier

Many AWS services have public endpoints but require a SigV4-signed request for every call: DynamoDB, SQS, SNS, KMS, Secrets Manager, STS, and most of the rest of the AWS API surface.

When a security tool flags these, it's usually because a resource policy includes Principal: "*". But even with that policy, the request still needs to be signed with valid AWS credentials. Without them, you can't do anything.

The exception is if a policy explicitly allows anonymous access (which is rare and usually a mistake). But in the vast majority of cases, these findings are low priority.

Don't Forget Shared Snapshots

One category that often gets overlooked: shared resources. These aren't live endpoints, but they can leak data just as badly: EBS snapshots shared publicly, RDS snapshots with a public restore attribute, and AMIs with public launch permissions can all be copied or launched from any AWS account.

These are silent data leaks. No one's hitting an endpoint. They're just browsing the public snapshot catalog.

Three Patterns, One Framework

Almost every public exposure in AWS falls into one of three patterns:

Pattern 1: No auth at all. The service is internet-facing and either has no authentication or relies on your application to handle it. This is where breaches happen. ALBs, API Gateways without authorizers, Lambda Function URLs, public S3 buckets.

Pattern 2: Non-AWS credentials. The service is internet-facing but requires database passwords or service-specific auth. Risk is credential-based attacks. RDS, Redshift, EKS API server.

Pattern 3: AWS credentials required. The service has a public endpoint but every request needs SigV4. Low risk unless IAM policies are wildly misconfigured. DynamoDB, SQS, KMS, and most other AWS APIs.

Focus your energy on Pattern 1. Fix Pattern 2 when you can. Monitor Pattern 3 but don't lose sleep over it.
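The framework above can double as a triage rule. This sketch maps illustrative finding labels (the names are my own, not any scanner's output) to the three patterns, then sorts a finding queue so Pattern 1 items land on top:

```python
# Pattern numbers follow the framework above; the mapping is illustrative.
NO_AUTH = 1          # app-level or no authentication at all
NON_AWS_CREDS = 2    # database or service-specific credentials
SIGV4 = 3            # AWS-signed requests required

PATTERN_BY_EXPOSURE = {
    "internet-facing-alb": NO_AUTH,
    "apigw-no-authorizer": NO_AUTH,
    "lambda-url-auth-none": NO_AUTH,
    "s3-public-policy": NO_AUTH,
    "rds-publicly-accessible": NON_AWS_CREDS,
    "redshift-publicly-accessible": NON_AWS_CREDS,
    "eks-public-endpoint": NON_AWS_CREDS,
    "resource-policy-star-principal": SIGV4,
}

def triage(findings):
    """Sort findings so Pattern 1 (no auth) lands at the top of the queue."""
    return sorted(findings, key=lambda f: PATTERN_BY_EXPOSURE.get(f, SIGV4))

queue = triage(["resource-policy-star-principal",
                "rds-publicly-accessible",
                "lambda-url-auth-none"])
print(queue[0])  # lambda-url-auth-none: Pattern 1, fix first
```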

Finding the Dangerous Stuff First

This is where most teams get stuck. You know what's dangerous, but how do you actually find it across 20 accounts and 15 regions? Here's a practical playbook, starting with the highest-impact items.

Start with IAM Access Analyzer

IAM Access Analyzer is the most underrated security tool in AWS. It's free, and it does one thing extremely well: it finds resources in your account that are accessible from outside your account or organization.

How it works under the hood: Access Analyzer uses automated reasoning, the same formal verification technology used to prove mathematical theorems. It doesn't scan your resources periodically and check rules. It mathematically analyzes every resource policy, bucket policy, KMS key policy, Lambda policy, IAM trust policy, and SQS/SNS policy to determine whether any principal outside your zone of trust can access the resource. This means it catches edge cases that rule-based tools miss, like a policy that allows access through a combination of conditions that individually look fine but together create an opening.

What it covers: S3 buckets, IAM role trust policies, KMS keys, Lambda functions and layers, SQS queues, SNS topics, Secrets Manager secrets, EBS snapshots, RDS snapshots, ECR repositories, and EFS file systems.

That's a lot of surface area for a free tool.

Setting it up at the organization level is the key move most people miss. Instead of creating an analyzer in every account, you create a single organization-level analyzer in your delegated admin account. This gives you a single pane of glass across every account in your AWS Organization.

One important detail: Access Analyzer is regional. An organization analyzer in us-east-1 only analyzes resources in us-east-1 across all accounts. You need to create an analyzer in every region you use. This sounds tedious, but it's a one-time CloudFormation StackSet deployment. Push a simple template to all regions in your management or delegated admin account, and you're done.

What it won't catch: Access Analyzer is focused on resource policies. It doesn't know about network-level exposure. It won't tell you that your ALB is internet-facing, that your EC2 instance has a public IP, or that your API Gateway has no authorizer. Those are network and configuration issues, not policy issues.

Tackle the Truly Unauthenticated Services

These are the ones Access Analyzer can't help with because the exposure isn't policy-based. It's network-based or configuration-based.

Internet-facing Load Balancers. Every ALB and NLB has a Scheme attribute. It's either internet-facing or internal. In a multi-account setup, you'd use a script that assumes a role into each account, iterates through your active regions, and calls describe-load-balancers. Filter for Scheme: internet-facing. That's your list.
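The filtering step is trivial once you have the API responses. A minimal sketch, operating on sample pages shaped like boto3's `describe_load_balancers` paginator output (the load balancer names here are hypothetical):

```python
def internet_facing_load_balancers(pages):
    """Collect ALBs/NLBs whose Scheme is internet-facing from
    describe-load-balancers response pages."""
    exposed = []
    for page in pages:
        for lb in page.get("LoadBalancers", []):
            if lb.get("Scheme") == "internet-facing":
                exposed.append(lb["LoadBalancerName"])
    return exposed

# Sample pages; in practice these come from
# boto3.client("elbv2").get_paginator("describe_load_balancers").paginate()
sample_pages = [
    {"LoadBalancers": [
        {"LoadBalancerName": "public-web", "Scheme": "internet-facing"},
        {"LoadBalancerName": "internal-api", "Scheme": "internal"},
    ]},
]
print(internet_facing_load_balancers(sample_pages))  # ['public-web']
```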

But finding them is the easy part. The hard question is: should this ALB be internet-facing? Some of them are legitimate. The real audit is: does every internet-facing ALB have a WAF attached? Is the application behind it properly authenticated? Are there any ALBs that were created for testing and never cleaned up?

API Gateway without authorizers. This is one of the sneakiest exposures. An API Gateway REST API or HTTP API is public by default. The only way to know if it's protected is to check whether every route has an authorizer configured.

For REST APIs, you check each method on each resource for an authorizationType. If it's NONE, that route is wide open. For HTTP APIs, you check each route's authorizationType. A single route with NONE on a public API is enough for an attacker to find and exploit.
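For HTTP APIs, the check reduces to scanning the `Items` list from `get-routes` for routes whose authorization type is NONE. A sketch with made-up route keys (REST APIs need the analogous per-method check):

```python
def open_routes(routes):
    """Return route keys whose AuthorizationType is NONE (or missing),
    given the Items list from apigatewayv2 get-routes."""
    return [r["RouteKey"] for r in routes
            if r.get("AuthorizationType", "NONE") == "NONE"]

sample_routes = [
    {"RouteKey": "GET /health", "AuthorizationType": "NONE"},
    {"RouteKey": "POST /orders", "AuthorizationType": "JWT"},
]
print(open_routes(sample_routes))  # ['GET /health']
```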

Lambda Function URLs. For every Lambda function, check if a function URL is configured, and if so, what the AuthType is. If it's NONE, it's publicly invokable. No nuance here.
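As the paragraph says, there's no nuance: AuthType NONE means publicly invokable. A sketch over items shaped like `list-function-url-configs` output (the URLs are fabricated):

```python
def unauthenticated_function_urls(url_configs):
    """Flag function URL configs with AuthType NONE, i.e. publicly invokable."""
    return [c["FunctionUrl"] for c in url_configs
            if c.get("AuthType") == "NONE"]

sample_configs = [
    {"FunctionUrl": "https://abc123.lambda-url.us-east-1.on.aws/",
     "AuthType": "NONE"},
    {"FunctionUrl": "https://def456.lambda-url.us-east-1.on.aws/",
     "AuthType": "AWS_IAM"},
]
# Flags only the AuthType NONE config
print(unauthenticated_function_urls(sample_configs))
```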

S3 buckets. Access Analyzer already covers policy-based S3 exposure. But there's a belt-and-suspenders check: verify that the S3 account-level public access block is enabled on every account. One API call per account. If it's enabled, it overrides any bucket-level misconfigurations. Enforce this with an SCP going forward.
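The belt-and-suspenders check only passes if all four flags are on. A sketch over the `PublicAccessBlockConfiguration` shape returned by `s3control get-public-access-block`:

```python
REQUIRED_FLAGS = ("BlockPublicAcls", "IgnorePublicAcls",
                  "BlockPublicPolicy", "RestrictPublicBuckets")

def public_access_fully_blocked(config):
    """True only if all four account-level public access block flags are on."""
    return all(config.get(flag) is True for flag in REQUIRED_FLAGS)

sample = {"BlockPublicAcls": True, "IgnorePublicAcls": True,
          "BlockPublicPolicy": True, "RestrictPublicBuckets": False}
print(public_access_fully_blocked(sample))  # False: one flag is off
```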

Sweep for the Medium-Risk Tier

Once you've handled the unauthenticated services, move to the ones that are network-reachable but credential-protected.

RDS and Aurora. Check the PubliclyAccessible flag on every DB instance. If it's true, the instance has a public DNS name that resolves to its public IP. Even though database auth is still required, you don't want databases on the public internet.
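The flag check itself, sketched over the `DBInstances` shape from `rds describe-db-instances` (instance identifiers are invented):

```python
def public_db_instances(instances):
    """Return identifiers of DB instances flagged PubliclyAccessible."""
    return [db["DBInstanceIdentifier"] for db in instances
            if db.get("PubliclyAccessible")]

sample_instances = [
    {"DBInstanceIdentifier": "prod-orders", "PubliclyAccessible": False},
    {"DBInstanceIdentifier": "legacy-reports", "PubliclyAccessible": True},
]
print(public_db_instances(sample_instances))  # ['legacy-reports']
```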

OpenSearch. Check whether the domain is in a VPC or not. If it's not in a VPC, it has a public endpoint. Then check the access policy. OpenSearch domains outside a VPC with open access policies are a common finding in older accounts.

EKS. Check endpointPublicAccess on each cluster. Best practice is to disable public access and use private endpoints only, or at minimum restrict public access to specific CIDR blocks.

Check Shared Snapshots and Images

EBS snapshots. Call describe-snapshots with --owner-ids self, then call describe-snapshot-attribute with --attribute createVolumePermission for each one. If CreateVolumePermissions includes Group: all, the snapshot is public.

RDS snapshots. Call describe-db-snapshot-attributes for each snapshot. If the restore attribute includes all, it's public.

AMIs. Call describe-images with --owners self, then call describe-image-attribute with --attribute launchPermission for each. If LaunchPermissions includes Group: all, anyone can launch an instance from your AMI.
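All three checks share the same shape: the resource is public if its permission list contains the "all" group. A sketch over the permission-list items returned by describe-snapshot-attribute and describe-image-attribute (the account ID is a placeholder):

```python
def is_shared_publicly(permissions):
    """True if a createVolumePermission/launchPermission list includes
    the 'all' group, i.e. the snapshot or AMI is public."""
    return any(p.get("Group") == "all" for p in permissions)

# Shared with one specific account vs. shared with the world
print(is_shared_publicly([{"UserId": "111122223333"}]))  # False
print(is_shared_publicly([{"Group": "all"}]))            # True
```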

The Multi-Account Execution Strategy

All of the above sounds like a lot of work if you have dozens of accounts and regions. Here's how to make it manageable.

Use AWS Organizations and a delegated security account as the hub. Create a cross-account IAM role in every member account using a CloudFormation StackSet. Deploy a read-only audit role (something like SecurityAuditRole) with the SecurityAudit AWS managed policy attached. Your scripts in the security account assume this role to scan each account.

The pattern is always the same: list all accounts in the organization, for each account assume the audit role, for each active region make the relevant API calls, aggregate findings centrally.
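That hub-and-spoke loop can be sketched as a small orchestrator. Here the scan callback is a stub; a real implementation would call sts:AssumeRole into SecurityAuditRole and make the per-service API calls described above (account IDs and finding names are fabricated):

```python
def scan_organization(account_ids, regions, scan_one):
    """List accounts, scan each active region via a callback, and
    aggregate findings centrally, keyed by account."""
    findings = {}
    for account in account_ids:
        for region in regions:
            results = scan_one(account, region)
            if results:
                findings.setdefault(account, []).extend(results)
    return findings

# Stub standing in for an AssumeRole-backed scanner
def fake_scan(account, region):
    return ["public-alb"] if account == "222222222222" else []

report = scan_organization(["111111111111", "222222222222"],
                           ["us-east-1"], fake_scan)
print(report)  # {'222222222222': ['public-alb']}
```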

Don't scan all 30+ regions blindly. Check which regions are actually enabled in each account using account:ListRegions. Most organizations only use 3 to 5 regions.
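Filtering to enabled regions is one pass over the ListRegions response. A sketch over its `Regions` item shape, assuming the documented RegionOptStatus values:

```python
ACTIVE_STATUSES = {"ENABLED", "ENABLED_BY_DEFAULT", "ENABLING"}

def active_regions(regions):
    """Filter account ListRegions output down to regions worth scanning."""
    return [r["RegionName"] for r in regions
            if r.get("RegionOptStatus") in ACTIVE_STATUSES]

sample_regions = [
    {"RegionName": "us-east-1", "RegionOptStatus": "ENABLED_BY_DEFAULT"},
    {"RegionName": "ap-east-1", "RegionOptStatus": "DISABLED"},
    {"RegionName": "eu-west-1", "RegionOptStatus": "ENABLED_BY_DEFAULT"},
]
print(active_regions(sample_regions))  # ['us-east-1', 'eu-west-1']
```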

For the high-risk items (ALBs, API Gateways, Lambda URLs, S3), run weekly or on every change via EventBridge. For medium-risk items (RDS, EKS, snapshots), monthly is usually sufficient. For the low-risk SigV4-protected services, Access Analyzer handles it continuously.

Use Access Analyzer's Unused Access Feature

Beyond external access, Access Analyzer has a second mode: unused access analysis. This isn't free (it's priced per IAM role or user analyzed per month), but it's powerful for identifying IAM roles that haven't been used in 90+ days, permissions granted but never used, and access keys that are active but unused.

Why does this matter for public exposure? Because an overly permissive IAM role attached to a public-facing Lambda function or EC2 instance is a blast radius problem. If the public-facing resource gets compromised, the attacker inherits whatever permissions that role has. Unused access analysis helps you shrink that blast radius.

This is a second-pass optimization. Get the public exposure under control first, then tighten the permissions on everything that's intentionally public.

The Priority Order

  1. Enable IAM Access Analyzer (organization-level) in every active region. Free. Covers resource policies across all accounts. Do this today.
  2. Verify S3 account-level public access blocks on every account. One API call per account. Enforce with SCPs going forward.
  3. Find all internet-facing ALBs and NLBs. Check for WAF attachment and confirm each one is intentional.
  4. Audit API Gateway routes for missing authorizers. Every route with authorizationType: NONE on a public API is a finding.
  5. Find Lambda Function URLs with AuthType: NONE. Fix or justify every one.
  6. Check for public RDS/Aurora/Redshift instances. Set PubliclyAccessible to false unless there's a documented exception.
  7. Check for public EKS API endpoints. Restrict to private or specific CIDRs.
  8. Audit shared snapshots (EBS, RDS, AMI). Make them private unless intentionally shared.
  9. Review OpenSearch domains not in VPC. Migrate to VPC-based domains.
  10. Everything else is noise until the above is clean.

The Bottom Line

Not every "public" finding is a crisis. The word "public" in AWS security findings covers a huge spectrum, from "anyone on the internet can download your database" to "this endpoint exists on the internet but requires cryptographically signed credentials to use."

Your priority should be clear:

  1. Find and fix services with no authentication: ALBs, API Gateways, Lambda URLs, S3 buckets
  2. Move databases off the public internet: RDS, Redshift, ElastiCache
  3. Review resource policies: let Access Analyzer do this for you, for free
  4. Don't panic about SigV4-protected endpoints: monitor them, but they're not your biggest problem

Security is about prioritization. Focus on what's actually exposed, not what a tool says is "public."
