
Downloading an entire S3 bucket?

I noticed that there does not seem to be an option to download an entire S3 bucket from the AWS Management Console.

Is there an easy way to grab everything in one of my buckets? I was thinking about making the root folder public, using wget to grab it all, and then making it private again but I don't know if there's an easier way.

As many people here have said, aws s3 sync is the best option. But nobody has pointed out a powerful flag: --dryrun. It lets you see what would be downloaded or uploaded to/from S3 when you run sync, which is really helpful when you don't want to overwrite content either locally or in an S3 bucket. This is how it's used: aws s3 sync <source> <destination> --dryrun. I use it all the time before pushing new content to a bucket, to avoid uploading unwanted changes.
Here's a quick video showing aws s3 sync in practice: youtube.com/watch?v=J2aZodwPeQk
See this more complete answer from 2021-09: stackoverflow.com/a/68981037/8718377
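For illustration, a minimal sketch (the bucket name is hypothetical), run from the directory you want to compare:

aws s3 sync s3://my-example-bucket . --dryrun

Each planned operation is printed with a (dryrun) prefix, for example "(dryrun) download: s3://my-example-bucket/test.txt to test.txt", and nothing is actually transferred. Drop --dryrun to run the sync for real.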

the Tin Man

AWS CLI

See the "AWS CLI Command Reference" for more information.

AWS recently released their Command Line Tools, which work much like boto and can be installed using

sudo easy_install awscli

or

sudo pip install awscli

Once installed, you can then simply run:

aws s3 sync s3://<source_bucket> <local_destination>

For example:

aws s3 sync s3://mybucket .

will download all the objects in mybucket to the current directory.

And will output:

download: s3://mybucket/test.txt to test.txt
download: s3://mybucket/test2.txt to test2.txt

This will download all of your files using a one-way sync. It will not delete any existing files in your current directory unless you specify --delete, and it won't change or delete any files on S3.

You can also do S3 bucket to S3 bucket, or local to S3 bucket sync.
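For illustration, the common variants look like this (bucket names are placeholders):

aws s3 sync s3://mybucket . --delete
aws s3 sync . s3://mybucket
aws s3 sync s3://source-bucket s3://destination-bucket

The first downloads and also deletes local files that are not in the bucket, the second uploads the current directory to a bucket, and the third copies bucket to bucket.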

Check out the documentation and other examples.

While the above example shows how to download a full bucket, you can also download a specific folder recursively by running

aws s3 cp s3://BUCKETNAME/PATH/TO/FOLDER LocalFolderName --recursive

This instructs the CLI to recursively download all files and folder keys under PATH/TO/FOLDER in the BUCKETNAME bucket.


First run aws configure and add your access key and secret access key which can be found here.
Go here for the Windows installer: aws.amazon.com/cli. The CLI picks up the access key ID from the environment variable "AWS_ACCESS_KEY_ID" and your secret key from "AWS_SECRET_ACCESS_KEY" (see the sketch after these comments).
I've tried s3cmd and Cyberduck, but for me awscli was by far the fastest way to download ~70,000 files from my bucket.
Please note that while the question asked about download only, I believe this command will do a 2-way sync between your directory and S3. If you're not trying to upload anything, make sure the current directory is empty.
@JesseCrossen That aws s3 sync command will not upload anything, and it will not delete anything locally unless you add the --delete flag. See the documentation.
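To illustrate the two ways of supplying credentials mentioned in the comments above (all values are placeholders):

aws configure

or

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws s3 sync s3://mybucket .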
the Tin Man

You can use s3cmd to download your bucket:

s3cmd --configure
s3cmd sync s3://bucketnamehere/folder /destination/folder

There is another tool you can use called rclone. This is a code sample from the Rclone documentation (it syncs a local directory to a remote bucket; swap the arguments to download instead):

rclone sync /home/local/directory remote:bucket

This is quite slow. Especially if you attempt to use it incrementally. Is there a solution that is multi-threaded so it can saturate the bandwidth?
the solutions below this are better, more standard and open to more platforms
This does not work for requester pays buckets (see arxiv.org/help/bulk_data_s3) :-(
how can I use your solution if I have to perform pattern matching for downloading? My question: stackoverflow.com/questions/25086722/…
the Tin Man

I've used a few different methods to copy Amazon S3 data to a local machine, including s3cmd, and by far the easiest is Cyberduck.

All you need to do is enter your Amazon credentials and use the simple interface to download, upload, sync any of your buckets, folders or files.

https://i.stack.imgur.com/6WUiA.png


Works great! Available for Windows and Mac (GPL).
Cyberduck also makes it easy to download public files anonymously - s3cmd seems to require credentials
Works great with Transmit too.
CLI was giving me an error when i was trying to save my bucket, this worked perfectly!
Oh that came unexpected. I used cyberduck earlier for FTP, but have never expected it to have S3 connectivity. Thanks for a great hint!
the Tin Man

You have many options to do that, but the best one is using the AWS CLI.

Here's a walk-through:

Download and install the AWS CLI on your machine:

Install the AWS CLI using the MSI Installer (Windows).
Install the AWS CLI using the Bundled Installer for Linux, OS X, or Unix.

Configure the AWS CLI:

Make sure you input valid access and secret keys, which you received when you created the account.

Sync the S3 bucket using:

aws s3 sync s3://yourbucket /local/path

In the above command, replace the following fields:

yourbucket >> your S3 bucket that you want to download.
/local/path >> path in your local system where you want to download all the files.
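A quick sketch of the configure step (the keys shown are AWS's documentation examples, not real credentials):

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json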


I used this instead of cyberduck, because cyberduck needs to "prepare" files before it starts downloading. For large amounts of files that seemed to take ages and I couldn't find information on what "preparing" actually does. CLI started downloading instantly
Make sure you have the s3:// prefix in the bucket name! With aws s3 ls you don't need the s3:// prefix, but you do need it for the cp command.
the Tin Man

To download using AWS S3 CLI:

aws s3 cp s3://WholeBucket LocalFolder --recursive
aws s3 cp s3://Bucket/Folder LocalFolder --recursive

To download using code, use the AWS SDK.

To download using GUI, use Cyberduck.


How to ignore some files or folder?
@Nabin you can use --include and --exclude with wildcards to exclude some files or folders, like this: aws s3 cp s3://my-bucket-name ./local-folder --recursive --include "*" --exclude "excludeFolder/*" --exclude "includeFolder/excludeFile.txt"
the Tin Man

The answer by @Layke is good, but if you have a ton of data and don't want to wait forever, you should read "AWS CLI S3 Configuration".

The following commands will tell the AWS CLI to use 1,000 threads to execute jobs (each a small file or one part of a multipart copy) and look ahead 100,000 jobs:

aws configure set default.s3.max_concurrent_requests 1000
aws configure set default.s3.max_queue_size 100000

After running these, you can use the simple sync command:

aws s3 sync s3://source-bucket/source-path s3://destination-bucket/destination-path

or

aws s3 sync s3://source-bucket/source-path c:\my\local\data\path

On a system with 4 CPU cores and 16 GB of RAM, for cases like mine (3–50 GB files), the sync/copy speed went from about 9.5 MiB/s to 700+ MiB/s, a speed increase of 70x over the default configuration.
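If you prefer editing files over running configure commands, the same settings can, to my understanding, also be placed directly in ~/.aws/config under the default profile:

[default]
s3 =
  max_concurrent_requests = 1000
  max_queue_size = 100000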


this is the real answer. just tested it, from ec2 it transferred about 2.3GB/min. without the concurrent options about 1GB/min. lifesaver.
This is great! Another tip: for configuring these values for a non-default profile, don't simply replace default with profile-name. Instead use this: aws configure set s3.max_concurrent_requests 1000 --profile profile-name.
the Tin Man

If you use Visual Studio, download "AWS Toolkit for Visual Studio".

After installing it, go to Visual Studio → AWS Explorer → S3 → your bucket → double-click.

In the window you will be able to select all files. Right-click and download the files.


Najathi

This works 100% for me; I have downloaded all files from my AWS S3 bucket.

Install the AWS CLI (select your operating system and follow the steps): https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html Then check the AWS CLI version: aws --version

https://i.stack.imgur.com/Lnjoq.jpg

Run the config command: aws configure

https://i.stack.imgur.com/NyQvd.jpg

aws s3 cp s3://yourbucketname your\local\path --recursive

Eg (Windows OS): aws s3 cp s3://yourbucketname C:\aws-s3-backup\project-name --recursive

https://i.stack.imgur.com/wO2xk.jpg

check out this link: https://www.edureka.co/community/32955/how-to-download-an-entire-bucket-from-s3-to-local-folder


thanks, will it maintain the folder structure inside the bucket?
Thanks! Worked like a charm.
the Tin Man

For Windows, S3 Browser is the easiest way I have found. It is excellent software, and it is free for non-commercial use.


I just tried the "Download All Files to..." option (which I presume is equivalent to the "download entire s3 bucket" and it said I need to the Pro version.
Update: But I was able to download an entire folder within the bucket which was sufficient for my needs...
yeah the free version is pretty limited, you can select all, and download, but limited to only 2 simultaneous transfers
Was looking for a windows simple version after getting some python3 support error on Ubuntu 17.1 and s3cmd, this worked well.
jedierikb

Use this command with the AWS CLI:

aws s3 cp s3://bucketname . --recursive

Abdullah Khawer

Another option that could help some OS X users is Transmit.

It's an FTP program that also lets you connect to your S3 files. And, it has an option to mount any FTP or S3 storage as a folder in the Finder, but it's only for a limited time.


the Tin Man

I've done a bit of development for S3 and I have not found a simple way to download a whole bucket.

If you want to code in Java the jets3t lib is easy to use to create a list of buckets and iterate over that list to download them.

First, get an access key and secret key pair from the AWS Management Console so you can create an S3Service object:

AWSCredentials awsCredentials = new AWSCredentials(YourAccessKey, YourAwsSecretKey);
S3Service s3Service = new RestS3Service(awsCredentials);

Then, get an array of the objects in your bucket:

S3Object[] objects = s3Service.listObjects(YourBucketNameString);

Finally, iterate over that array to download the objects one at a time with:

S3Object obj = s3Service.getObject(bucket, fileName);
InputStream file = obj.getDataInputStream();

I put the connection code in a threadsafe singleton. The necessary try/catch syntax has been omitted for obvious reasons.

If you'd rather code in Python you could use Boto instead.

After looking around, BucketExplorer's "download the whole bucket" option may also do what you want.


Unless you need a Java solution use the aws cli answer above.
create a new folder and do the same
Harsh Manvar

The AWS CLI is the best option for uploading an entire folder to S3, and for downloading an entire S3 bucket locally.

To upload a whole folder to S3:

aws s3 sync . s3://BucketName

To download a whole S3 bucket locally:

aws s3 sync s3://BucketName . 

You can also append a path, like BucketName/Path, to download a particular folder from S3.
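For example (bucket and prefix are placeholders):

aws s3 sync s3://BucketName/Path ./local-folder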


DiligentKarma

You can do this with https://github.com/minio/mc :

mc cp -r https://s3-us-west-2.amazonaws.com/bucketName/ localdir

mc also supports sessions, resumable downloads, resumable uploads, and more. mc supports Linux, OS X, and Windows. It is written in Go and released under the Apache License 2.0.
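A typical workflow, as I understand the mc docs (the alias name and keys are placeholders), is to register the S3 credentials once as an alias and then mirror by alias:

mc alias set mys3 https://s3.amazonaws.com YOUR_ACCESS_KEY YOUR_SECRET_KEY
mc mirror mys3/bucketName localdir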


Muzammil

If you only want to download the bucket from AWS, first install the AWS CLI on your machine. In a terminal, change to the directory into which you want to download the files and run this command:

aws s3 sync s3://bucket-name .

If you want to sync in the other direction and upload local changes to S3 (in case you added some files to the local folder), run this command:

aws s3 sync . s3://bucket-name

bsplosion

To add another GUI option, we use WinSCP's S3 functionality. It's very easy to connect, only requiring your access key and secret key in the UI. You can then browse and download whatever files you require from any accessible buckets, including recursive downloads of nested folders.

Since it can be a challenge to clear new software through security and WinSCP is fairly prevalent, it can be really beneficial to just use it rather than try to install a more specialized utility.


singh30

AWS CLI is the best option to download an entire S3 bucket locally.

Install the AWS CLI.
Configure the AWS CLI with your default security credentials and default AWS Region.
To download the entire S3 bucket, use the command: aws s3 sync s3://yourbucketname localpath

Reference to use AWS cli for different AWS services: https://docs.aws.amazon.com/cli/latest/reference/


Abdullah Khawer

If you use Firefox with S3Fox, that DOES let you select all files (shift-select first and last) and right-click and download all.

I've done it with 500+ files without any problem.


This does not work for subfolders within a bucket, even if the "pseudo folders" were created in the AWS console. (As of the writing of this comment)
Confirmed not working. I have about 12k top-level keys (= subfolders), and S3Fox does not even start up. It also insists on permission to list all buckets!
Abdullah Khawer

On Windows, my preferred GUI tool for this is CloudBerry Explorer Freeware for Amazon S3. It has a fairly polished file explorer with an FTP-like interface.


Lukasz Czerwinski

If you have only files there (no subdirectories), a quick solution is to select all the files (click on the first, Shift+click on the last) and hit Enter, or right-click and select Open. For most data files this will download them straight to your computer.


pheeleeppoo

Try this command:

aws s3 sync yourBucketnameDirectory yourLocalDirectory

For example, if your bucket name is myBucket and local directory is c:\local, then:

aws s3 sync s3://myBucket c:\local

For more information about the AWS CLI, check the AWS CLI installation docs.


This piece of art works! This answer link, I am immediately bookmarking. Thanks!
Patrick R

Windows users need to download S3 Browser from this link, which also has installation instructions: http://s3browser.com/download.aspx

Then provide your AWS credentials (access key, secret key, and region) to S3 Browser. This link contains the configuration instructions: s3browser.com/s3browser-first-run.aspx

Now all your S3 buckets will be visible in the left panel of S3 Browser. Simply select the bucket, click the Buckets menu in the top left corner, and then select the "Download all files to..." option from the menu. Below is the screenshot for the same:

Bucket Selection Screen

Then browse to a folder where you want to download the bucket, click OK, and your download will begin.


Deepak Singhal

aws s3 sync is the perfect solution. It does not do a two-way sync; it is one-way, from source to destination. Also, if you have lots of items in the bucket and are downloading to an instance inside a VPC, it is a good idea to create an S3 VPC endpoint first so that the download is faster (traffic stays on the AWS network instead of going over the internet) and incurs no data-transfer charges.
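A hedged sketch of creating a gateway VPC endpoint for S3 with the CLI (the VPC ID, route table ID, and region are placeholders):

aws ec2 create-vpc-endpoint --vpc-id vpc-0abc1234 --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0abc1234

Gateway endpoints for S3 have no additional charge; afterwards, S3 traffic from that VPC is routed through the endpoint.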


Balaji JB

As @Layke said, it is best practice to download files with the AWS S3 CLI; it is safe and secure. But in some cases, people need to use wget to download a file, and here is the solution:

aws s3 presign s3://<your_bucket_name>/<your_object_key>

presign gets you a temporary public (pre-signed) URL which you can use to download that object from S3 with wget or any other download client.
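For example (bucket, key, and expiry are illustrative; --expires-in is in seconds):

aws s3 presign s3://my-bucket/path/to/file.zip --expires-in 3600
wget -O file.zip "<URL printed by the previous command>"

Note that presign produces a URL for a single object, so for a whole bucket you would need to generate one URL per key.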


Praveen Gowda

You just need to pass --recursive and --include "*":

aws --region "${BUCKET_REGION}" s3 cp s3://${BUCKET}${BUCKET_PATH}/ ${LOCAL_PATH}/tmp --recursive --include "*" 2>&1


Jobin Joseph

It's always better to use the AWS CLI for downloading/uploading files to S3. sync will help you resume without any hassle.

aws s3 sync s3://bucketname/ .

What is new in this answer ?
wrschneider

In addition to the suggestions for aws s3 sync, I would also recommend looking at s5cmd (https://github.com/peak/s5cmd).

In my experience I found this to be substantially faster than the AWS CLI for multiple downloads or large downloads.

s5cmd supports wildcards so something like this would work:

s5cmd cp s3://bucket-name/* ./folder
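Two hedged notes: quoting the wildcard keeps the shell from touching it, and concurrency can be raised with the global --numworkers flag (flag name per the s5cmd README; the value is illustrative):

s5cmd --numworkers 64 cp 's3://bucket-name/*' ./folder/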


s5cmd uses golang to be 10x faster than awscli joshua-robinson.medium.com/…
yes, I don't fully understand the limiting factors in AWS CLI, or why Golang is so much faster than Python (Python GIL limits multi-threading, maybe?)
user1445267

You can use sync to download a whole S3 bucket. For example, to download the whole bucket named bucket1 into the current directory:

aws s3 sync s3://bucket1 .

John Hanewich

Here is some code to download all buckets, list them, and list their contents.

    //connection string
    private static void dBConnection() {
        app.setAwsCredentials(CONST.getAccessKey(), CONST.getSecretKey());
        conn = new AmazonS3Client(app.getAwsCredentials());
        app.setListOfBuckets(conn.listBuckets());
        System.out.println(CONST.getConnectionSuccessfullMessage());
    }

    private static void downloadBucket() {

    do {
        for (S3ObjectSummary objectSummary : app.getS3Object().getObjectSummaries()) {
            app.setBucketKey(objectSummary.getKey());
            app.setBucketName(objectSummary.getBucketName());
            if(objectSummary.getKey().contains(CONST.getDesiredKey())){
                //DOWNLOAD
                try 
                {
                    s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
                    s3Client.getObject(
                            new GetObjectRequest(app.getBucketName(),app.getBucketKey()),
                            new File(app.getDownloadedBucket())
                            );
                } catch (AmazonClientException e) {
                    e.printStackTrace();
                }

                do
                {
                     if(app.getBackUpExist() == true){
                        System.out.println("Converting back up file");
                        app.setCurrentPacsId(objectSummary.getKey());
                        passIn = app.getDataBaseFile();
                        CONVERT= new DataConversion(passIn);
                        System.out.println(CONST.getFileDownloadedMessage());
                    }
                }
                while(app.getObjectExist()==true);

                if(app.getObjectExist()== false)
                {
                    app.setNoObjectFound(true);
                }
            }
        }
        app.setS3Object(conn.listNextBatchOfObjects(app.getS3Object()));
    } 
    while (app.getS3Object().isTruncated());
}

//----------------------------Extension Methods-------------------------------------//

//Unzip bucket after download 
public static void unzipBucket() throws IOException {
    unzip = new UnZipBuckets();
    unzip.unZipIt(app.getDownloadedBucket());
    System.out.println(CONST.getFileUnzippedMessage());
}

//list all S3 buckets
public static void listAllBuckets(){
    for (Bucket bucket : app.getListOfBuckets()) {
        String bucketName = bucket.getName();
        System.out.println(bucketName + "\t" + StringUtils.fromDate(bucket.getCreationDate()));
    }
}

//Get the contents from the auto back up bucket
public static void listAllBucketContents(){     
    do {
        for (S3ObjectSummary objectSummary : app.getS3Object().getObjectSummaries()) {
            if(objectSummary.getKey().contains(CONST.getDesiredKey())){
                System.out.println(objectSummary.getKey() + "\t" + objectSummary.getSize() + "\t" + StringUtils.fromDate(objectSummary.getLastModified()));
                app.setBackUpCount(app.getBackUpCount() + 1);   
            }
        }
        app.setS3Object(conn.listNextBatchOfObjects(app.getS3Object()));
    } 
    while (app.getS3Object().isTruncated());
    System.out.println("There are a total of : " + app.getBackUpCount() + " buckets.");
}

}


Hubbitus

You can simply get it with the s3cmd command:

s3cmd get --recursive --continue s3://test-bucket local-directory/
