
S3 bucket names are globally unique, which can be problematic even when all an attacker knows is the name itself.

You could figure out how a company names their S3 buckets. It's subtle, but you could create a bunch of typo'd variants of those buckets and sit around waiting for S3 server access logs/CloudTrail to tell you when someone hits one of the objects.
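The squatting idea can be sketched roughly like this. A minimal sketch; the bucket name and the mutation rules (dropped, doubled, and transposed characters) are illustrative assumptions, not a real target or a complete typo model:

```python
# Hypothetical sketch: enumerate typo'd neighbors of a known bucket name.
# An attacker would register these names and watch their own access logs.

def typo_variants(bucket: str) -> set[str]:
    """Return plausible single-edit typo variants of a bucket name."""
    variants = set()
    for i in range(len(bucket)):
        # dropped character, e.g. "acme-prod-logs" -> "acme-prod-lgs"
        variants.add(bucket[:i] + bucket[i + 1:])
        # doubled character, e.g. "acme-prod-logs" -> "acme-prod-loogs"
        variants.add(bucket[:i] + bucket[i] + bucket[i:])
        # adjacent transposition, e.g. "acme-prod-logs" -> "acme-prod-olgs"
        if i + 1 < len(bucket):
            variants.add(bucket[:i] + bucket[i + 1] + bucket[i] + bucket[i + 2:])
    variants.discard(bucket)  # the real name is not a typo
    return variants

candidates = typo_variants("acme-prod-logs")
```

Even this crude single-edit model produces dozens of candidate names for one bucket, which is why a consistent internal naming convention leaks more than it seems to.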

When that happens, you could get the accessing AWS account number (which isn't inherently private, but also not something you'd want to broadcast), the IAM user accessing the object, and which object was requested.

Say the IAM user is a role with a terribly insecure assume-role policy... Or one could put an object where the misconfigured service was looking and it might get processed.

This kind of attack is preventable, but I doubt most people configure SCPs to the level of detail you'd need to prevent it completely.
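For reference, an SCP along these lines can deny S3 calls against buckets owned by accounts outside your org. A sketch only; the account IDs are placeholders, and `aws:ResourceAccount` is the global condition key that matches the account owning the resource:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3OutsideOwnedAccounts",
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceAccount": ["111111111111", "222222222222"]
        }
      }
    }
  ]
}
```

With this attached, a request to a squatted bucket in a stranger's account is denied before it ever generates a log entry on the attacker's side.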



That’s why Amazon recommends using the ExpectedBucketOwner parameter for S3 operations.

ISTR it’s also possible to apply an SCP that limits S3 reads and writes outside your organization. If not via an SCP then via a permission boundary at the least.


Yep, an SCP can restrict what S3 buckets you can access via IAM.

If you're using a VPC you can deploy a VPC S3 Gateway Endpoint, which carries its own policy document; this restricts which buckets the whole VPC can access, no matter what the callers' IAM policies say. It also has the benefit of blocking access via non-IAM methods, like signed URLs or public buckets.
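An endpoint policy for that might look like the following. A sketch under assumptions; the bucket name is a placeholder, and the `Principal: "*"` is required because endpoint policies filter requests rather than grant identity permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OnlyOurBuckets",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::acme-prod-logs",
        "arn:aws:s3:::acme-prod-logs/*"
      ]
    }
  ]
}
```

Anything not listed is implicitly denied at the endpoint, so a typo'd bucket name never leaves the VPC as a successful request.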



