Fix the error HTTP 403: Access Denied from Amazon S3
The problem

Suppose you have two AWS accounts: one called AliceAWS with account ID 0123456789 and another called BobAWS with account ID 9876543210.
In the Alice account there is an S3 bucket called alicebucket, and this bucket has a policy that allows it to receive objects from the BobAWS account, for example:
{
    "Version": "2008-10-17",
    "Id": "S3-Console-Replication-Policy",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::9876543210:root"
            },
            "Action": [
                "s3:PutObject",
                "s3:ListBucket",
                "s3:Get*",
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning",
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ObjectOwnerOverrideToBucketOwner"
            ],
            "Resource": [
                "arn:aws:s3:::alicebucket",
                "arn:aws:s3:::alicebucket/*"
            ]
        }
    ]
}
If a Bob IAM user runs a copy without the option
--acl bucket-owner-full-control
for example:
aws s3 cp test.jpg s3://alicebucket
then when you try to retrieve the file from the AliceAWS web console you will get an error like this:

The problem is well explained by AWS Support on this page: How do I troubleshoot the error HTTP 403: Access Denied from Amazon S3?
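If you prefer to confirm the symptom from code rather than from the console, a GetObject call made with Alice's credentials on one of Bob's objects fails in the same way. This is only a minimal sketch, assuming your local credentials belong to AliceAWS and that test.jpg is the key uploaded above:
import boto3
from botocore.exceptions import ClientError

client = boto3.client('s3')  # assumes AliceAWS credentials are configured locally
try:
    client.get_object(Bucket='alicebucket', Key='test.jpg')
    print('the bucket owner can read the object')
except ClientError as e:
    # objects uploaded by Bob without the ACL are denied to the bucket owner
    print(e.response['Error']['Code'])  # typically 'AccessDenied'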
I had to resolve this kind of error for buckets with thousands of objects, so I created two scripts to automate the process.
Of course, if you haven't started the copy yet, or if you can delete the bucket content and synchronize/copy it again, it is better to use this command so you don't have any issues:
aws s3 sync s3://bobbucket/ s3://alicebucket --acl bucket-owner-full-control
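If Bob uploads with the SDK instead of the CLI, the same request-time ACL can be passed to put_object. A minimal sketch, assuming BobAWS credentials and the same test.jpg and bucket names as above:
import boto3

client = boto3.client('s3')  # assumes BobAWS credentials
with open('test.jpg', 'rb') as data:
    # the ACL grants the destination bucket owner (Alice) full control
    # at upload time, so the 403 never appears
    client.put_object(
        Bucket='alicebucket',
        Key='test.jpg',
        Body=data,
        ACL='bucket-owner-full-control',
    )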
The solution
Find the objects that belong to another account
First of all, we need to find the S3 objects with the potential 403 problem and write them to a text file. The script scans the bucket and finds the objects that don't have the same owner as the buckets in the account. This doesn't mean they necessarily have the issue, but they potentially can.
You need to run this script with AliceAWS account credentials; an S3 read-only policy is enough. Just replace alicebucket in the second line with your bucket name.
import boto3

buckettocheck = 'alicebucket'
client = boto3.client('s3')

# the canonical user ID that owns the buckets of the Alice account
response = client.list_buckets()
bucketid = response['Owner']['ID']
print(bucketid)

f = open("otheraccount.txt", "w+")
i = 0
witherrors = 0
marker = ''
while True:
    # list_objects returns NextMarker only when a Delimiter is passed,
    # so use a string that never appears in the keys
    response = client.list_objects(
        Bucket=buckettocheck,
        MaxKeys=1000,
        Delimiter='giuseppe',
        Marker=marker
    )
    for obj in response.get('Contents', []):
        i += 1
        if bucketid != obj['Owner']['ID']:
            # the object was written by another account: save its key
            witherrors += 1
            f.write(obj['Key'] + '\n')
    if not response['IsTruncated']:
        break
    marker = response['NextMarker']

print('total objects checked ' + str(i))
print('total objects with errors ' + str(witherrors))
f.close()
It wasn't completely clear to me from the official documentation what the Delimiter parameter is actually for, but passing it is mandatory in order to get back the NextMarker field.
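If you prefer to avoid the Delimiter trick entirely, a paginator over list_objects_v2 with FetchOwner=True should produce the same list without handling NextMarker yourself. A short alternative sketch, under the same assumptions (AliceAWS credentials, same bucket and output file):
import boto3

buckettocheck = 'alicebucket'
client = boto3.client('s3')
bucketid = client.list_buckets()['Owner']['ID']

paginator = client.get_paginator('list_objects_v2')
with open('otheraccount.txt', 'w') as f:
    # FetchOwner=True asks S3 to include the Owner field for every object
    for page in paginator.paginate(Bucket=buckettocheck, FetchOwner=True):
        for obj in page.get('Contents', []):
            if obj['Owner']['ID'] != bucketid:
                f.write(obj['Key'] + '\n')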
In the end, you will have in the file otheraccount.txt a list of object keys with potential problems, which you can fix with a second Python script.
If you want to know how many objects you have to fix, the count is the first number:
cat otheraccount.txt | wc
7783 7783 245653
In my case it is 7783.
Put the right ACL to fix the permissions
import boto3

buckettocheck = 'alicebucket'
client = boto3.client('s3')

file = open("otheraccount.txt", "r")
i = 0
skipped = 0
for line in file:
    try:
        i += 1
        print(i)
        key = line.rstrip('\n')  # remove the trailing newline from the key
        print(key)
        # grant the bucket owner full control over this object
        response = client.put_object_acl(
            ACL='bucket-owner-full-control',
            Bucket=buckettocheck,
            Key=key,
        )
    except Exception:
        # some keys (for example with Cyrillic characters) raised errors: skip them
        skipped += 1
        print('skipped: ' + str(skipped))

print('total files analyzed ' + str(i))
print('total files skipped ' + str(skipped))
file.close()
After this permission fix, if you run the first script again the result doesn't change, because it checks the owner and not the error, but now you will have access to all the objects.
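If you want to check a single object instead of rerunning the whole scan, get_object_acl shows whether the bucket owner now has FULL_CONTROL. A quick sketch, assuming AliceAWS credentials; the key below is a placeholder, so replace it with one taken from otheraccount.txt:
import boto3

client = boto3.client('s3')  # assumes AliceAWS credentials
# 'some/key.jpg' is a placeholder: use a key from otheraccount.txt
acl = client.get_object_acl(Bucket='alicebucket', Key='some/key.jpg')
for grant in acl['Grants']:
    # after the fix the bucket owner should be listed with FULL_CONTROL
    print(grant['Grantee'].get('ID'), grant['Permission'])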
I got errors for some Cyrillic characters, which is why I added the exception handling; if you know how to fix this, please put it in the comments.
The inverse approach
What we have seen so far doesn't change the fact that the owner of an object stays the same: the Alice S3 bucket is still full of objects whose Owner ID belongs to the Bob account.
The Owner ID isn't modifiable; the only other solution would be to download and copy everything again, which we don't want to do for several obvious reasons when we have many objects.
The real solution is to invert the approach and move from a push strategy (Bob copies objects into the Alice bucket) to a pull one (Alice copies objects from Bob into her bucket).

Of course, to do that you need to change the permissions, and there are some cases where you cannot, but it is something to keep in mind.
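As a sketch of what the pull strategy looks like with boto3, assuming Bob has put a bucket policy on bobbucket that grants Alice s3:GetObject (and s3:ListBucket if she needs to list the keys), Alice can run the server-side copy herself, so she ends up as the owner of the new objects. The CLI equivalent is simply aws s3 sync s3://bobbucket s3://alicebucket run with Alice's credentials.
import boto3

client = boto3.client('s3')  # assumes AliceAWS credentials
# Alice copies the object from Bob's bucket into her own, so she owns it
client.copy_object(
    Bucket='alicebucket',
    Key='test.jpg',
    CopySource={'Bucket': 'bobbucket', 'Key': 'test.jpg'},
)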