
Listing contents of a bucket with boto3

How can I see what's inside a bucket in S3 with boto3? (i.e. do an "ls")?

Doing the following:

import boto3
s3 = boto3.resource('s3')
my_bucket = s3.Bucket('some/path/')

returns:

s3.Bucket(name='some/path/')

How do I see its contents?

Use list_objects_v2; the older list_objects is deprecated:

def list_content(self, bucket_name):
    content = self.s3.list_objects_v2(Bucket=bucket_name)
    print(content)

Willem van Ketwich

One way to see the contents would be:

for my_bucket_object in my_bucket.objects.all():
    print(my_bucket_object)

Can I fetch the keys under a particular path in the bucket, or with a particular delimiter, using boto3?
You should be able to say mybucket.objects.filter(Prefix='foo/bar') and it will only list objects with that prefix. You can also pass a Delimiter parameter.
Not working with a boto3 client: AttributeError: 'S3' object has no attribute 'objects'. (.objects exists only on boto3.resource('s3'), not on boto3.client('s3').)
@garnaat Your comment mentioning that filter method really helped me (my code ended up much simpler and faster) - thank you!
I would advise against using object as a variable name as it will shadow the global type object.
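A minimal runnable sketch of the filter() approach from these comments (the bucket and prefix names are placeholders; assumes AWS credentials are configured):

```python
def keys_under(bucket_name, prefix):
    """List keys under `prefix` via the resource-level filter(),
    rather than fetching every object with .all()."""
    import boto3  # imported inside so the sketch loads without AWS configured
    bucket = boto3.resource('s3').Bucket(bucket_name)
    return [obj.key for obj in bucket.objects.filter(Prefix=prefix)]

# Hypothetical usage: keys_under('my-bucket', 'foo/bar')
```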
Amelio Vazquez-Reina

This is similar to an 'ls' but it does not take into account the prefix folder convention and will list the objects in the bucket. It's left up to the reader to filter out prefixes which are part of the Key name.

Using the legacy boto (version 2) library:

from boto.s3.connection import S3Connection

conn = S3Connection() # assumes boto.cfg setup
bucket = conn.get_bucket('bucket_name')
for obj in bucket.get_all_keys():
    print(obj.key)

Using boto3:

from boto3 import client

conn = client('s3')  # assumes credentials are configured, e.g. in ~/.aws/credentials
for key in conn.list_objects(Bucket='bucket_name')['Contents']:
    print(key['Key'])

If you want to use the prefix as well, you can do it like this: conn.list_objects(Bucket='bucket_name', Prefix='prefix_string')['Contents']
This only lists the first 1000 keys. From the docstring: "Returns some or all (up to 1000) of the objects in a bucket." Also, it is recommended that you use list_objects_v2 instead of list_objects (although, this also only returns the first 1000 keys).
This limitation should be dealt with using Paginators
Can you please give the boto.cfg format ?
@markonovak This crashes if the bucket is empty, because the 'Contents' key is missing from the response. That means I need to guard for that case.
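As the comments note, list_objects_v2 caps each response at 1000 keys; a paginator hides that limit. A sketch with the page-flattening kept separate so it can be tried without AWS access (the bucket name is a placeholder):

```python
def keys_from_pages(pages):
    """Flatten list_objects_v2 response pages into one list of keys.
    Pages for an empty prefix have no 'Contents' entry, hence .get()."""
    return [obj['Key'] for page in pages for obj in page.get('Contents', [])]

# With real AWS access (assumes configured credentials):
# import boto3
# paginator = boto3.client('s3').get_paginator('list_objects_v2')
# all_keys = keys_from_pages(paginator.paginate(Bucket='my-bucket'))
```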
urban

I'm assuming you have configured authentication separately.

import boto3
s3 = boto3.resource('s3')

my_bucket = s3.Bucket('bucket_name')

for file in my_bucket.objects.all():
    print(file.key)

Sean Summers

My s3 keys utility function is essentially an optimized version of @Hephaestus's answer:

import boto3


s3_paginator = boto3.client('s3').get_paginator('list_objects_v2')


def keys(bucket_name, prefix='/', delimiter='/', start_after=''):
    prefix = prefix[1:] if prefix.startswith(delimiter) else prefix
    start_after = (start_after or prefix) if prefix.endswith(delimiter) else start_after
    for page in s3_paginator.paginate(Bucket=bucket_name, Prefix=prefix, StartAfter=start_after):
        for content in page.get('Contents', ()):
            yield content['Key']

In my tests (boto3 1.9.84), it's significantly faster than the equivalent (but simpler) code:

import boto3


def keys(bucket_name, prefix='/', delimiter='/'):
    prefix = prefix[1:] if prefix.startswith(delimiter) else prefix
    bucket = boto3.resource('s3').Bucket(bucket_name)
    return (_.key for _ in bucket.objects.filter(Prefix=prefix))

As S3 guarantees UTF-8 binary sorted results, a start_after optimization has been added to the first function.


This is by far the best answer. I was just modifying @Hephaestus's answer (because it was the highest) when I scrolled down. This should be the accepted answer and should get extra points for being concise. I would add that the generator from the second code needs to be wrapped in list() to return a list of files.
@RichardD both results return generators. Many buckets I target with this code have more keys than the memory of the code executor can handle at once (eg, AWS Lambda); I prefer consuming the keys as they are generated.
This is a nice solution, and you can add Delimiter=delimiter to the s3_paginator.paginate call to avoid recursing into "subdirectories."
Hephaestus

In order to handle large key listings (i.e. when the listing is longer than 1000 items), I used the following code to accumulate key values (i.e. filenames) across multiple listings (thanks to Amelio above for the first lines). The code is for Python 3:

    from boto3 import client

    def accumulate_keys(bucket_name, prefix):
        s3_conn = client('s3')  # assumes credentials are configured
        s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Prefix=prefix, Delimiter="/")

        if 'Contents' not in s3_result:
            return []

        file_list = [key['Key'] for key in s3_result['Contents']]
        print(f"List count = {len(file_list)}")

        # Keep requesting pages while the response is truncated (1000-key limit)
        while s3_result['IsTruncated']:
            continuation_key = s3_result['NextContinuationToken']
            s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Prefix=prefix, Delimiter="/", ContinuationToken=continuation_key)
            file_list.extend(key['Key'] for key in s3_result['Contents'])
            print(f"List count = {len(file_list)}")
        return file_list

rjurney

If you want to pass the ACCESS and SECRET keys (which you should not do, because it is not secure):

from boto3.session import Session

ACCESS_KEY='your_access_key'
SECRET_KEY='your_secret_key'

session = Session(aws_access_key_id=ACCESS_KEY,
                  aws_secret_access_key=SECRET_KEY)
s3 = session.resource('s3')
your_bucket = s3.Bucket('your_bucket')

for s3_file in your_bucket.objects.all():
    print(s3_file.key)

This is less secure than having a credentials file at ~/.aws/credentials. Though it is a valid solution.
This would require committing secrets to source control. Not good.
This answer adds nothing regarding the API / mechanics of listing objects, while adding an irrelevant authentication method which is common to all boto resources and is bad practice security-wise.
Added a disclaimer to the answer about security.
What if the keys were supplied by key/secret management system like Vault (Hashicorp) - wouldn't that be better than just placing credentials file at ~/.aws/credentials ?
iselim
#To print all filenames in a bucket
import boto3

s3 = boto3.client('s3')

def get_s3_keys(bucket):

    """Get a list of keys in an S3 bucket."""
    resp = s3.list_objects_v2(Bucket=bucket)
    for obj in resp['Contents']:
      files = obj['Key']
    return files

  
filename = get_s3_keys('your_bucket_name')

print(filename)

#To print all filenames in a certain directory in a bucket
import boto3

s3 = boto3.client('s3')

def get_s3_keys(bucket, prefix):

    """Get a list of keys in an S3 bucket."""
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    for obj in resp['Contents']:
      files = obj['Key']
      print(files)
    return files

  
filename = get_s3_keys('your_bucket_name', 'folder_name/sub_folder_name/')

print(filename)

Update: The easiest way is to use awswrangler:

import awswrangler as wr
wr.s3.list_objects('s3://bucket_name')

Both "get_s3_keys" functions return only the last key.
This lists all the files in the bucket, though; the question was how to do an ls. How would you do that, i.e. only print the files in the root?
It returns only the last key. Use this instead: def get_s3_keys(bucket, prefix): resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix); return [obj['Key'] for obj in resp['Contents']]
petezurich

A more parsimonious way: rather than iterating through via a for loop, you could also just print the original object containing all files inside your S3 bucket:

session = Session(aws_access_key_id=aws_access_key_id,aws_secret_access_key=aws_secret_access_key)
s3 = session.resource('s3')
bucket = s3.Bucket('bucket_name')

files_in_s3 = bucket.objects.all() 
#you can print this iterable with print(list(files_in_s3))

@petezurich, can you please explain why such a petty edit of my answer (replacing an "a" with a capital "A" at the beginning of my answer) brought down my reputation by -2? I reckon both you and I can agree that not only is your correction NOT relevant at all, but actually rather petty, wouldn't you say so? Please focus on the content rather than childish revisions, most obliged ol' boy.
These were two different interactions. 1. I edited your answer which is recommended even for minor misspellings. I agree, that the boundaries between minor and trivial are ambiguous. I do not downvote any post because I see errors and I didn't in this case. I simply fix all the errors that I see.
2. I downvoted your answer because you wrote that files_in_s3 is a "list object". There is no such thing in Python. It rather is an iterable, and I couldn't make your code work and therefore downvoted. Then I found the error and saw your point, but couldn't undo my downvote.
@petezurich No problem, understood your point. Just one thing: in Python a list IS an object, because pretty much everything in Python is an object; it then also follows that a list is also an iterable, but first and foremost it's an object! That is why I did not understand your downvote: you were downvoting something that was correct and code that works. Anyway, thanks for your apology, and all the best.
@petezurich Everything in Python is an object. "List object" is completely acceptable.
vj sreenivasan
import boto3
s3 = boto3.resource('s3')

## Bucket to use
my_bucket = s3.Bucket('city-bucket')

## List objects within a given prefix
for obj in my_bucket.objects.filter(Delimiter='/', Prefix='city/'):
  print(obj.key)

Output:

city/pune.csv
city/goa.csv

This command includes the directory placeholder as well, i.e. it can return city/ in addition to city/pune.csv and city/goa.csv.
Peter Girnus

ObjectSummary:

There are two identifiers that are attached to the ObjectSummary:

bucket_name

key

boto3 S3: ObjectSummary

More on Object Keys from AWS S3 Documentation:

Object Keys: When you create an object, you specify the key name, which uniquely identifies the object in the bucket. For example, in the Amazon S3 console (see AWS Management Console), when you highlight a bucket, a list of objects in your bucket appears. These names are the object keys. The name for a key is a sequence of Unicode characters whose UTF-8 encoding is at most 1024 bytes long.

The Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders; however, you can infer logical hierarchy using key name prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of folders. Suppose that your bucket (admin-created) has four objects with the following object keys:

Development/Projects1.xls
Finance/statement1.pdf
Private/taxdocument.pdf
s3-dg.pdf

Reference: AWS S3: Object Keys

Here is some example code that demonstrates how to get the bucket name and the object key.

Example:

import boto3
from pprint import pprint

def main():

    def enumerate_s3():
        s3 = boto3.resource('s3')
        for bucket in s3.buckets.all():
             print("Name: {}".format(bucket.name))
             print("Creation Date: {}".format(bucket.creation_date))
             for obj in bucket.objects.all():
                 print("Object: {}".format(obj))
                 print("Object bucket_name: {}".format(obj.bucket_name))
                 print("Object key: {}".format(obj.key))

    enumerate_s3()


if __name__ == '__main__':
    main()

Herman

So you're asking for the equivalent of aws s3 ls in boto3. This would be listing all the top level folders and files. This is the closest I could get; it only lists all the top level folders. Surprising how difficult such a simple operation is.

import boto3

def s3_ls():
  s3 = boto3.resource('s3')
  bucket = s3.Bucket('example-bucket')
  result = bucket.meta.client.list_objects(Bucket=bucket.name,
                                           Delimiter='/')
  for o in result.get('CommonPrefixes', []):
    print(o.get('Prefix'))

Super useful! Add Prefix= to list_objects to restrict to that prefix.
This works great! Only list the top-level object within the prefix! Thanks!
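The CommonPrefixes/Contents split used above can be folded into one helper that renders a single level, similar to aws s3 ls. The response parsing is pure, so it can be tried against a canned response dict (names here are illustrative):

```python
def ls_entries(response):
    """Turn a list_objects(_v2) response made with Delimiter='/' into an
    `aws s3 ls`-style listing: 'folder' prefixes first, then file keys."""
    folders = [p['Prefix'] for p in response.get('CommonPrefixes', [])]
    files = [o['Key'] for o in response.get('Contents', [])]
    return folders + files

# With real AWS access (assumes configured credentials):
# import boto3
# resp = boto3.client('s3').list_objects_v2(Bucket='example-bucket', Delimiter='/')
# print(ls_entries(resp))
```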
gench

Here is a simple function that returns the filenames of all files, or only files with certain types such as 'json' or 'jpg':

import boto3

def get_file_list_s3(bucket, prefix="", file_extension=None):
    """Return the list of all file paths (prefix + file name) with a certain type, or all.
    Parameters
    ----------
    bucket: str
        The name of the bucket. For example, if your bucket is "s3://my_bucket" then it should be "my_bucket".
    prefix: str
        The full path to the 'folder' of the files (objects). For example, if your files are in
        s3://my_bucket/recipes/deserts then it should be "recipes/deserts". Default: ""
    file_extension: str
        The type of the files. If you want all, just leave it None. If you only want "json" files then it
        should be "json". Default: None
    Return
    ------
    file_names: list
        The list of file names including the prefix
    """
    s3 = boto3.resource('s3')
    my_bucket = s3.Bucket(bucket)
    file_objs = my_bucket.objects.filter(Prefix=prefix).all()
    # When file_extension is None, keep every key; otherwise match the extension
    return [file_obj.key for file_obj in file_objs
            if file_extension is None or file_obj.key.split(".")[-1] == file_extension]

Rajiv Patki

One way that I used to do this:

import boto3
s3 = boto3.resource('s3')
bucket=s3.Bucket("bucket_name")
contents = [_.key for _ in bucket.objects.all() if "subfolders/ifany/" in _.key]

Milean

I just did it like this, including the authentication method:

import boto3

def key_exists(bucket_name, key):
    s3_client = boto3.client(
        's3',
        aws_access_key_id='access_key',
        aws_secret_access_key='access_key_secret',
        config=boto3.session.Config(signature_version='s3v4'),
        region_name='region'
    )

    response = s3_client.list_objects(Bucket=bucket_name, Prefix=key)
    if 'Contents' in response:
        # Object / key exists!
        return True
    else:
        # Object / key DOES NOT exist!
        return False
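If the goal is only to check whether one specific key exists, a single head_object call is a more direct probe than listing by prefix. A hedged sketch (the client is passed in, so it works with any configured boto3 client; the 404 handling mirrors how botocore reports a missing key):

```python
def object_exists(s3_client, bucket, key):
    """Check a single key with head_object instead of listing by prefix.
    A 404 error code means the key is absent; other errors are re-raised."""
    try:
        s3_client.head_object(Bucket=bucket, Key=key)
        return True
    except Exception as err:  # botocore raises ClientError here in practice
        code = getattr(err, 'response', {}).get('Error', {}).get('Code')
        if code in ('404', 'NoSuchKey'):
            return False
        raise
```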

Thomas

Here is the solution

import boto3

s3=boto3.resource('s3')
BUCKET_NAME = 'Your S3 Bucket Name'
allFiles = s3.Bucket(BUCKET_NAME).objects.all()
for file in allFiles:
    print(file.key)

ram

With a small modification to @Hephaestus's code in one of the above answers, I wrote the below method to list folders and objects (files) under a given path. It works similarly to the s3 ls command.

import boto3

def s3_ls(profile=None, bucket_name=None, folder_path=None):
    folders=[]
    files=[]
    result=dict()
    prefix = folder_path or ''  # Prefix=None is not accepted by the API
    session = boto3.Session(profile_name=profile)
    s3_conn = session.client('s3')
    s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Delimiter="/", Prefix=prefix)
    if 'Contents' not in s3_result and 'CommonPrefixes' not in s3_result:
        return []

    if s3_result.get('CommonPrefixes'):
        for folder in s3_result['CommonPrefixes']:
            folders.append(folder.get('Prefix'))

    if s3_result.get('Contents'):
        for key in s3_result['Contents']:
            files.append(key['Key'])

    while s3_result['IsTruncated']:
        continuation_key = s3_result['NextContinuationToken']
        s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Delimiter="/", ContinuationToken=continuation_key, Prefix=prefix)
        if s3_result.get('CommonPrefixes'):
            for folder in s3_result['CommonPrefixes']:
                folders.append(folder.get('Prefix'))
        if s3_result.get('Contents'):
            for key in s3_result['Contents']:
                files.append(key['Key'])

    if folders:
        result['folders']=sorted(folders)
    if files:
        result['files']=sorted(files)
    return result

This lists all objects / folders under a given path. folder_path can be left as None (the default), and the method will list the immediate contents of the root of the bucket.


KayV

It can also be done as follows:

s3 = boto3.client('s3')
csv_files = s3.list_objects_v2(Bucket=s3_bucket_path)
for obj in csv_files['Contents']:
    key = obj['Key']

hume

Using cloudpathlib

cloudpathlib provides a convenience wrapper so that you can use the simple pathlib API to interact with AWS S3 (and Azure blob storage, GCS, etc.). You can install with pip install "cloudpathlib[s3]".

Like with pathlib you can use glob or iterdir to list the contents of a directory.

Here's an example with a public AWS S3 bucket that you can copy and paste to run.

from cloudpathlib import CloudPath

s3_path = CloudPath("s3://ladi/Images/FEMA_CAP/2020/70349")

# list items with glob
list(
    s3_path.glob("*")
)[:3]
#> [ S3Path('s3://ladi/Images/FEMA_CAP/2020/70349/DSC_0001_5a63d42e-27c6-448a-84f1-bfc632125b8e.jpg'),
#>   S3Path('s3://ladi/Images/FEMA_CAP/2020/70349/DSC_0002_a89f1b79-786f-4dac-9dcc-609fb1a977b1.jpg'),
#>   S3Path('s3://ladi/Images/FEMA_CAP/2020/70349/DSC_0003_02c30af6-911e-4e01-8c24-7644da2b8672.jpg')]

# list items with iterdir
list(
    s3_path.iterdir()
)[:3]
#> [ S3Path('s3://ladi/Images/FEMA_CAP/2020/70349/DSC_0001_5a63d42e-27c6-448a-84f1-bfc632125b8e.jpg'),
#>   S3Path('s3://ladi/Images/FEMA_CAP/2020/70349/DSC_0002_a89f1b79-786f-4dac-9dcc-609fb1a977b1.jpg'),
#>   S3Path('s3://ladi/Images/FEMA_CAP/2020/70349/DSC_0003_02c30af6-911e-4e01-8c24-7644da2b8672.jpg')]

Created at 2021-05-21 20:38:47 PDT by reprexlite v0.4.2


eugenhu

A good option may also be to run the aws CLI from a Lambda function:

import subprocess
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def run_command(command):
    command_list = command.split(' ')

    try:
        logger.info("Running shell command: \"{}\"".format(command))
        result = subprocess.run(command_list, stdout=subprocess.PIPE);
        logger.info("Command output:\n---\n{}\n---".format(result.stdout.decode('UTF-8')))
    except Exception as e:
        logger.error("Exception: {}".format(e))
        return False

    return True

def lambda_handler(event, context):
    run_command('/opt/aws s3 ls s3://bucket-name')

Kartik

I was stuck on this for an entire night because I just wanted to get the number of files under a subfolder, but the listing also returned one extra entry: the subfolder itself.

After researching it, I found that this is how S3 works. But I had a scenario where I unloaded data from Redshift into the following directory:

s3://bucket_name/subfolder/<10 number of files>

and when I used

paginator.paginate(Bucket=price_signal_bucket_name,Prefix=new_files_folder_path+"/")

it would return only the 10 files, but when I created the folder on the S3 bucket itself, it also returned the subfolder.

Conclusion

If the whole folder is uploaded to S3, then listing only returns the files under the prefix. But if the folder was created on the S3 bucket itself, then listing it using the boto3 client will also return the subfolder as well as the files.
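A small helper can strip those console-created placeholder entries after listing. The trailing-'/' convention is how the console models folders, not a hard S3 rule, and the names below are illustrative:

```python
def real_files(keys):
    """Drop 'folder' placeholder keys (those ending in '/') that show up
    when the folder was created in the S3 console, keeping only files."""
    return [k for k in keys if not k.endswith('/')]

# e.g. real_files(['subfolder/', 'subfolder/a.csv', 'subfolder/b.csv'])
# keeps only the two .csv keys
```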