
check if a key exists in a bucket in s3 using boto3

I would like to know if a key exists in S3 using boto3. I can loop over the bucket contents and check each key to see if it matches.

But that seems longer and like overkill, and the official boto3 docs don't explicitly state how to do this.

Maybe I am missing the obvious. Can anybody point out how I can achieve this?


Wander Nauta

Boto 2's boto.s3.key.Key object used to have an exists method that checked if the key existed on S3 by doing a HEAD request and looking at the result, but it no longer exists. You have to do it yourself:

import boto3
import botocore

s3 = boto3.resource('s3')

try:
    s3.Object('my-bucket', 'dootdoot.jpg').load()
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == "404":
        # The object does not exist.
        ...
    else:
        # Something else has gone wrong.
        raise
else:
    # The object does exist.
    ...

load() does a HEAD request for a single key, which is fast, even if the object in question is large or you have many objects in your bucket.

Of course, you might be checking if the object exists because you're planning on using it. If that is the case, you can just forget about the load() and do a get() or download_file() directly, then handle the error case there.
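For example, a minimal sketch of that "just try it" approach (same placeholder bucket and key as above):

import boto3
import botocore

s3 = boto3.resource('s3')

try:
    # get() fetches the object; a missing key surfaces as a NoSuchKey error.
    body = s3.Object('my-bucket', 'dootdoot.jpg').get()['Body'].read()
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == 'NoSuchKey':
        body = None  # the object does not exist; handle that case here
    else:
        raise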


Thanks for the quick reply, Wander. I just need the same for boto3.
For boto3, it seems the best you can do at the moment is to call head_object to try and fetch the metadata for the key, then handle the resulting error if it doesn't exist.
Oh, the above head_bucket suggestion works only for buckets, not for objects. I withdraw my suggestion. :-)
-1; doesn't work for me. On boto3 version 1.5.26 I see e.response['Error']['Code'] having a value like "NoSuchKey", not "404". I haven't checked whether this is due to a difference in library versions or a change in the API itself since this answer was written. Either way, in my version of boto3, a shorter approach than checking e.response['Error']['Code'] is to catch only s3.meta.client.exceptions.NoSuchKey in the first place.
If you are using an s3 client (as opposed to a resource), then do s3.head_object(Bucket='my_bucket', Key='my_key') instead of s3.Object(...).load().
Alan W. Smith

The easiest way I found (and probably the most efficient) is this:

import boto3
from botocore.errorfactory import ClientError

s3 = boto3.client('s3')
try:
    s3.head_object(Bucket='bucket_name', Key='file_path')
except ClientError:
    # Not found
    pass

Note: you don't have to pass aws_access_key_id/aws_secret_access_key etc. if you're using a role or have the keys in your .aws config; you can simply do s3 = boto3.client('s3')
I think adding this test gives you a little more confidence that the object really doesn't exist, rather than some other error raising the exception - note that 'e' is the ClientError exception instance: if e.response['ResponseMetadata']['HTTPStatusCode'] == 404:
@Taylor It's a HEAD request, so there's no data transfer.
ClientError is a catch-all for 400s, not just 404, therefore it is not robust.
@mickzer You are right. It is better to catch S3.Client.exceptions.NoSuchKey.
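A hedged sketch combining these comments (bucket and key names are placeholders): treat only a 404 from head_object as "not found" and re-raise everything else. Note that HEAD responses carry no error body, so botocore typically reports the code here as the HTTP status "404" rather than "NoSuchKey", which is why the 404 check is used.

import boto3
import botocore

s3 = boto3.client('s3')

def object_exists(bucket, key):
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return True
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] == '404':
            return False
        raise  # e.g. 403: you may lack permission to know either way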
jlansey

I'm not a big fan of using exceptions for control flow. This is an alternative approach that works in boto3:

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
key = 'dootdoot.jpg'
objs = list(bucket.objects.filter(Prefix=key))
if any(w.key == key for w in objs):
    print("Exists!")
else:
    print("Doesn't exist")

Thanks for the update, EvilPuppetMaster. Unfortunately, when I checked last I didn't have list bucket access rights. Your answer is apt for my question, so I have upvoted you. But I had already marked the first reply as the answer long before. Thanks for your help.
Doesn't this count as a listing request (12.5x more expensive than get)? If you do this for 100 million objects, that could get a bit pricey... I have the feeling that the catching-exception method is unfortunately the best so far.
List may be 12.5x as expensive per request, but a single request can also return up to 1,000 objects, where a single get can only return one. So in your hypothetical case, it would be cheaper to fetch all 100 million with paginated list requests and then compare locally than to do 100M individual gets. Not to mention roughly 1000x faster, since you wouldn't need the HTTP round trip for every object.
Use list_objects_v2 of the S3 client and set MaxKeys to 1.
After running again with debug, it looks like bucket.objects.filter(Prefix=key).limit(1) doesn't limit the actual response from S3, only the returned collection on the client side. Instead, you should use bucket.objects.filter(Prefix=key, MaxKeys=1), as @FangZhang suggested above.
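A one-line sketch of that suggestion (bucket and key are placeholders); MaxKeys is forwarded to the underlying ListObjects request, so S3 returns at most one key. Note that this still matches any key that merely starts with the prefix:

import boto3

bucket = boto3.resource('s3').Bucket('my-bucket')
exists = any(bucket.objects.filter(Prefix='dootdoot.jpg', MaxKeys=1))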
Lucian Thorr

In boto3, if you're checking for either a folder (prefix) or a file, you can use list_objects and treat the presence of 'Contents' in the response dict as the check for whether the object exists. It's another way to avoid try/except catches, as @EvilPuppetMaster suggests.

import boto3
client = boto3.client('s3')
results = client.list_objects(Bucket='my-bucket', Prefix='dootdoot.jpg')
exists = 'Contents' in results

Had a problem with this: list_objects with Prefix="2000" will also return keys like "2000-01" and "2000-02".
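One possible guard against that false positive (a sketch with placeholder names): compare the returned key exactly. Since S3 lists keys in lexicographic order, an exact key, if present, is the first match for its own prefix, so MaxKeys=1 suffices.

import boto3

client = boto3.client('s3')

def key_exists_exactly(bucket, key):
    # The exact key sorts before any longer key sharing its prefix.
    resp = client.list_objects_v2(Bucket=bucket, Prefix=key, MaxKeys=1)
    return any(obj['Key'] == key for obj in resp.get('Contents', []))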
This is the most efficient solution, as it does not require s3:GetObject permission, just s3:ListBucket.
marvls

Assuming you just want to check if a key exists (instead of quietly overwriting it), do this check first:

import boto3

def key_exists(mykey, mybucket):
    s3_client = boto3.client('s3')
    response = s3_client.list_objects_v2(Bucket=mybucket, Prefix=mykey)
    # 'Contents' is absent when there are no matches, so use .get()
    for obj in response.get('Contents', []):
        if mykey == obj['Key']:
            return True
    return False

if key_exists('someprefix/myfile-abc123', 'my-bucket-name'):
    print("key exists")
else:
    print("safe to put new bucket object")
    # try:
    #     resp = s3_client.put_object(Body="Your string or file-like object",
    #                                 Bucket=mybucket,Key=mykey)
    # ...check resp success and ClientError exception for errors...

VinceP

You can use S3Fs, which is essentially a wrapper around boto3 that exposes typical file-system style operations:

import s3fs
s3 = s3fs.S3FileSystem()
s3.exists('my-bucket/myfile.txt')

Although I think this would work, the question asks about how to do this with boto3; in this case, it is practical to solve the problem without installing an additional library.
Also, s3fs is technically a mounting mechanism that treats s3 as a local directory. Along with its perks, it has many disadvantages when reading a number of files at the same time.
Fang Zhang

This checks both prefix and key, and fetches at most one key.

import boto3

def prefix_exists(bucket, prefix):
    s3_client = boto3.client('s3')
    res = s3_client.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=1)
    return 'Contents' in res
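Illustrative usage (names are placeholders); note that a plain key passed as Prefix will also match any longer key that starts with it:

print(prefix_exists('my-bucket', 'some/dir/'))     # any object under the prefix?
print(prefix_exists('my-bucket', 'dootdoot.jpg'))  # the key itself, or anything starting with it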

Vitaly Zdanevich

This works not only with a client but with a bucket resource too:

import boto3
import botocore
bucket = boto3.resource('s3', region_name='eu-west-1').Bucket('my-bucket')

try:
    bucket.Object('my-file').get()
except botocore.exceptions.ClientError as ex:
    if ex.response['Error']['Code'] == 'NoSuchKey':
        print('NoSuchKey')

You may not want to get the object, but just see if it is there. You could use something that HEADs the object like the other examples here, such as bucket.Object(key).last_modified.
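A small sketch of that idea (placeholder names): accessing a lazy attribute such as last_modified triggers a HeadObject call without transferring the body.

import boto3
import botocore

bucket = boto3.resource('s3').Bucket('my-bucket')
try:
    bucket.Object('my-file').last_modified  # attribute access issues a HEAD request
    exists = True
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == '404':
        exists = False
    else:
        raise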
AshuGG

You can use boto3 for this:

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
key = 'dootdoot.jpg'  # the path you want to check
objs = list(bucket.objects.filter(Prefix=key))
if len(objs) > 0:
    print("key exists!!")
else:
    print("key doesn't exist!")

Here, key is the path whose existence you want to check.


From a simple %timeit test this seems the fastest option
Vivek
import boto3

client = boto3.client('s3')
s3_key = 'Your file without bucket name e.g. abc/bcd.txt'
bucket = 'your bucket name'
content = client.head_object(Bucket=bucket, Key=s3_key)
if content.get('ResponseMetadata', None) is not None:
    print("File exists - s3://%s/%s" % (bucket, s3_key))
else:
    print("File does not exist - s3://%s/%s" % (bucket, s3_key))

I like this answer, but it doesn't work if the file doesn't exist; it just throws an error, and then you're stuck doing the same thing(s) as in some of the other answers.
nehem

Using objects.filter and checking the resulting list is by far the fastest way to check if a file exists in an S3 bucket.

Use this concise one-liner; it's less intrusive when you have to drop it into an existing project without modifying much of the code:

s3_file_exists = lambda filename: bool(list(bucket.objects.filter(Prefix=filename)))

The above function assumes the bucket variable was already declared.

You can extend the lambda to support an additional parameter, like:

s3_file_exists = lambda filename, bucket: bool(list(bucket.objects.filter(Prefix=filename)))

I think this is the best answer.
Andy Reagan

FWIW, here are the very simple functions that I am using

import os

import boto3

def get_resource(config: dict={}):
    """Loads the s3 resource.

    Expects AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be in the environment
    or in a config dictionary.
    Looks in the environment first."""

    s3 = boto3.resource('s3',
                        aws_access_key_id=os.environ.get(
                            "AWS_ACCESS_KEY_ID", config.get("AWS_ACCESS_KEY_ID")),
                        aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY", config.get("AWS_SECRET_ACCESS_KEY")))
    return s3


def get_bucket(s3, s3_uri: str):
    """Get the bucket from the resource.
    A thin wrapper, use with caution.

    Example usage:

    >> bucket = get_bucket(get_resource(), s3_uri_prod)"""
    return s3.Bucket(s3_uri)


def isfile_s3(bucket, key: str) -> bool:
    """Returns T/F whether the file exists."""
    objs = list(bucket.objects.filter(Prefix=key))
    return len(objs) == 1 and objs[0].key == key


def isdir_s3(bucket, key: str) -> bool:
    """Returns T/F whether the directory exists."""
    objs = list(bucket.objects.filter(Prefix=key))
    return len(objs) > 1

This is the only response I saw that addressed checking for the existence of a 'folder' as compared to a 'file'. That is super important for routines that need to know if a specific folder exists, not the specific files in a folder.
While this is a careful answer, it is only useful if the user understands that the notion of a folder is misleading in this case. An empty 'folder' can exist in S3 inside a bucket, and if so, isdir_s3 will return False. It took me a couple of minutes to sort that out. I was thinking about editing the answer: if the expression is changed to > 0, you will get the result you are expecting.
Alkesh Mahajan

Try this simple approach:

import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('mybucket_name') # just Bucket name
file_name = 'A/B/filename.txt'      # full file path
obj = list(bucket.objects.filter(Prefix=file_name))
if len(obj) > 0:
    print("Exists")
else:
    print("Not Exists")

Peter Kahn

If you seek a key that is equivalent to a directory, then you might want this approach:

import boto3

session = boto3.session.Session()
resource = session.resource("s3")
bucket = resource.Bucket('mybucket')

key = 'dir-like-or-file-like-key'
objects = [o for o in bucket.objects.filter(Prefix=key).limit(1)]    
has_key = len(objects) > 0

This works for a parent key, a key that equates to a file, or a key that does not exist. I tried the favored approach above and it failed on parent keys.


Vitaly Zdanevich

If you have fewer than 1,000 objects in a directory or bucket, you can get the set of their keys and then check whether a given key is in that set:

import boto3

s3_client = boto3.client('s3')
files_in_dir = {d['Key'].split('/')[-1]
                for d in s3_client.list_objects_v2(
                    Bucket='mybucket',
                    Prefix='my/dir').get('Contents') or []}

This code works even if my/dir does not exist.

http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.list_objects_v2


佚名
import boto3
from botocore.client import Config

S3_REGION = "eu-central-1"
bucket = "mybucket1"
name = "objectname"

client = boto3.client('s3', region_name=S3_REGION, config=Config(signature_version='s3v4'))

def object_exists():
    # 'Contents' is absent from the response when nothing matches the prefix
    response = client.list_objects_v2(Bucket=bucket, Prefix=name)
    for obj in response.get('Contents', []):
        if obj['Key'] == name:
            return True
    return False

Mahesh Mogal

There is one simple way to check if a file exists or not in an S3 bucket. We do not need to use an exception for this:

session = boto3.Session(aws_access_key_id, aws_secret_access_key)
s3 = session.client('s3')

object_name = 'filename'
bucket = 'bucketname'
obj_status = s3.list_objects(Bucket=bucket, Prefix=object_name)
if obj_status.get('Contents'):
    print("File exists")
else:
    print("File does not exist")

This will be incorrect if a file that starts with object_name exists in the bucket. E.g. my_file.txt.oldversion will return a false positive if you check for my_file.txt. A bit of an edge case for most, but for something as broad as "does the file exist" that you're likely to use throughout your application, it's probably worth taking into consideration.
Veedka

For boto3, ObjectSummary can be used to check if an object exists.

Contains the summary of an object stored in an Amazon S3 bucket. This object doesn't contain the object's full metadata or any of its contents.

import boto3
from botocore.errorfactory import ClientError
def path_exists(path, bucket_name):
    """Check to see if an object exists on S3"""
    s3 = boto3.resource('s3')
    try:
        s3.ObjectSummary(bucket_name=bucket_name, key=path).load()
    except ClientError as e:
        if e.response['Error']['Code'] == "404":
            return False
        else:
            raise e
    return True

path_exists('path/to/file.html', 'my-bucket')

In ObjectSummary.load

Calls s3.Client.head_object to update the attributes of the ObjectSummary resource.

This shows that you can use ObjectSummary instead of Object if you are planning on not using get(). The load() function does not retrieve the object; it only obtains the summary.


user 923227

I noticed that just to catch the exception using botocore.exceptions.ClientError we need to install botocore. botocore takes up 36 MB of disk space. This is particularly impactful if we use AWS Lambda functions. Instead, if we just use a plain exception, we can skip the extra library!

I am validating that the file extension is '.csv'.

This will not throw an exception if the bucket does not exist!

This will not throw an exception if the bucket exists but the object does not exist!

This throws an exception if the bucket is empty!

This throws an exception if you have no permissions on the bucket!

The code looks like this. Please share your thoughts:

import boto3
import traceback

def download4mS3(s3bucket, s3Path, localPath):
    s3 = boto3.resource('s3')

    print('Looking for the csv data file ending with .csv in bucket: ' + s3bucket + ' path: ' + s3Path)
    if s3Path.endswith('.csv') and s3Path != '':
        try:
            s3.Bucket(s3bucket).download_file(s3Path, localPath)
        except Exception as e:
            print(e)
            print(traceback.format_exc())
            if getattr(e, 'response', {}).get('Error', {}).get('Code') == "404":
                print("Downloading the file from: [", s3Path, "] failed")
                exit(12)
            else:
                raise
        print("Downloading the file from: [", s3Path, "] succeeded")
    else:
        print("csv file not found in in : [", s3Path, "]")
        exit(12)

AWS says that python runtimes come with boto3 preinstalled: docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html
Rush S

Here is a solution that works for me. One caveat is that I know the exact format of the key ahead of time, so I am only listing the single file:

import boto3

# The s3 base class to interact with S3
class S3(object):
  def __init__(self):
    self.s3_client = boto3.client('s3')

  def check_if_object_exists(self, s3_bucket, s3_key):
    response = self.s3_client.list_objects(
      Bucket = s3_bucket,
      Prefix = s3_key
      )
    if 'ETag' in str(response):
      return True
    else:
      return False

if __name__ == '__main__':
  s3 = S3()
  bucket = 'my-bucket'    # placeholder bucket name
  key = 'path/to/file'    # placeholder key
  if s3.check_if_object_exists(bucket, key):
    print("Found S3 object.")
  else:
    print("No object found.")

Sai

Just following the thread, can someone conclude which one is the most efficient way to check if an object exists in S3?

I think head_object might win, as it just checks the metadata, which is lighter than the actual object itself.


Yes, head_object is the fastest way -- it is also how s3.Object('my-bucket', 'dootdoot.jpg').load() checks under the hood if the object exists. You can see this if you look at the error message of this method when it fails.
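For anyone who wants to measure rather than guess, a rough micro-benchmark sketch (placeholder bucket/key; absolute numbers depend entirely on network and region):

import timeit

import boto3
import botocore

client = boto3.client('s3')
BUCKET, KEY = 'my-bucket', 'dootdoot.jpg'  # placeholders

def via_head():
    try:
        client.head_object(Bucket=BUCKET, Key=KEY)
        return True
    except botocore.exceptions.ClientError:
        return False

def via_list():
    resp = client.list_objects_v2(Bucket=BUCKET, Prefix=KEY, MaxKeys=1)
    return any(o['Key'] == KEY for o in resp.get('Contents', []))

print('head_object:     ', timeit.timeit(via_head, number=100))
print('list_objects_v2: ', timeit.timeit(via_list, number=100))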
Alexander Truslow

Check out

bucket.get_key(
    key_name, 
    headers=None, 
    version_id=None, 
    response_headers=None, 
    validate=True
)

Check to see if a particular key exists within the bucket. This method uses a HEAD request to check for the existence of the key. Returns: An instance of a Key object or None

from Boto S3 Docs

You can just call bucket.get_key(keyname) and check if the returned object is None.
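For completeness, a short boto 2 sketch of that call (as the comments below note, this is the older boto library, not boto3):

import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('my-bucket', validate=False)
# get_key issues a HEAD request and returns None if the key is absent.
exists = bucket.get_key('dootdoot.jpg') is not None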


This doesn't work with boto3, as requested by the OP
There are two versions of the AWS boto library. This answer doesn't work with the version that was requested by the question.
It's surely not a correct answer for the OP, but it helps me because I need to use boto v2. That is why I removed my downvote.
isambitd

It's really simple with the get() method:

import botocore
from boto3.session import Session
session = Session(aws_access_key_id='AWS_ACCESS_KEY',
                aws_secret_access_key='AWS_SECRET_ACCESS_KEY')
s3 = session.resource('s3')
bucket_s3 = s3.Bucket('bucket_name')

def not_exist(file_key):
    try:
        file_details = bucket_s3.Object(file_key).get()
        # print(file_details) # This line prints the file details
        return False
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] == "NoSuchKey":  # or check e.response['ResponseMetadata']['HTTPStatusCode'] == 404
            return True
        return False  # For any other error it's hard to determine whether it exists or not, so based on the requirement feel free to change it to True/False or raise the exception

print(not_exist('hello_world.txt')) 

Not robust; an exception could be thrown for many reasons, e.g. an HTTP 500, and this code would assume a 404.
But we need to know whether the file is accessible or not. If it exists but cannot be accessed, then it is equivalent to not existing, right?
@mickzer check the changes now.
To reply to your previous comment: no, the behavior on an HTTP 500 might be to retry, and on a 401/403 to fix auth, etc. It's important to check for the actual error code.
Ahsin Shabbir

You can use awswrangler to do it in 1 line.

awswrangler.s3.does_object_exist(path_of_object_to_check)

https://aws-data-wrangler.readthedocs.io/en/stable/stubs/awswrangler.s3.does_object_exist.html

The does_object_exist method uses the head_object method of the s3 client and checks whether a ClientError is raised. If the error code is 404, then False is returned.
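Illustrative usage (the s3:// path is a placeholder):

import awswrangler as wr

exists = wr.s3.does_object_exist('s3://my-bucket/dootdoot.jpg')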