
How to rename AWS S3 Bucket

After all the tough work of migration etc., I just realised that I need to serve the content using a CNAME (e.g. media.abc.com). For that to work, the bucket name needs to match the CNAME, i.e. media.abc.com, so that the record can point at media.abc.com.s3.amazonaws.com.

I just realised that S3 doesn't allow direct rename from the console.

Is there any way to work around this?


duality_

Solution

aws s3 mb s3://[new-bucket]
aws s3 sync s3://[old-bucket] s3://[new-bucket]
aws s3 rb --force s3://[old-bucket]

Explanation

There's no rename-bucket functionality in S3 because there are technically no folders in S3, so we have to handle every file within the bucket individually.

The code above will 1. create a new bucket, 2. copy the files over, and 3. delete the old bucket. That's it.

If you have lots of files in your bucket and you're worried about the cost, read on. Behind the scenes, every file within the bucket is first copied and then deleted. This should cost an insignificant amount if you have a few thousand files. Otherwise check this answer to see how it would impact you.
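The three commands can be wrapped in a small script. The sketch below is my own addition, not part of the original answer: the `rename_bucket` helper and its object-count check are hypothetical, and the check only refuses to delete the old bucket when the counts differ.

```shell
#!/usr/bin/env bash
# Sketch of the create/sync/delete steps with a safety check.
# rename_bucket OLD NEW -- bucket names are placeholders.
rename_bucket() {
    local old_bucket="$1" new_bucket="$2"

    aws s3 mb "s3://$new_bucket"
    aws s3 sync "s3://$old_bucket" "s3://$new_bucket"

    # Only remove the old bucket once both hold the same number of objects.
    local old_count new_count
    old_count=$(aws s3 ls "s3://$old_bucket" --recursive | wc -l)
    new_count=$(aws s3 ls "s3://$new_bucket" --recursive | wc -l)
    if [ "$old_count" -eq "$new_count" ]; then
        aws s3 rb --force "s3://$old_bucket"
    else
        echo "object counts differ ($old_count vs $new_count); keeping $old_bucket" >&2
        return 1
    fi
}
```

The count comparison is a coarse sanity check, not a byte-for-byte verification; for stricter validation you would compare keys or ETags.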

Example

In the following example we create and populate the old bucket and then sync the files to the new one. Check the output of the commands to see what AWS does.

> # bucket suffix so we keep it unique
> suffix="iexiy2"  # random lowercase string (bucket names must be lowercase)
>
> # populate old bucket
> echo "asdf" > asdf.txt
> echo "yxcv" > yxcv.txt
> aws s3 mb s3://old-bucket-$suffix
make_bucket: old-bucket-iexiy2
> aws s3 cp asdf.txt s3://old-bucket-$suffix/asdf.txt
upload: ./asdf.txt to s3://old-bucket-iexiy2/asdf.txt
> aws s3 cp yxcv.txt s3://old-bucket-$suffix/yxcv.txt
upload: ./yxcv.txt to s3://old-bucket-iexiy2/yxcv.txt
>
> # "rename" to new bucket
> aws s3 mb s3://new-bucket-$suffix
make_bucket: new-bucket-iexiy2
> aws s3 sync s3://old-bucket-$suffix s3://new-bucket-$suffix
copy: s3://old-bucket-iexiy2/yxcv.txt to s3://new-bucket-iexiy2/yxcv.txt
copy: s3://old-bucket-iexiy2/asdf.txt to s3://new-bucket-iexiy2/asdf.txt
> aws s3 rb --force s3://old-bucket-$suffix
delete: s3://old-bucket-iexiy2/asdf.txt
delete: s3://old-bucket-iexiy2/yxcv.txt
remove_bucket: old-bucket-iexiy2

This answer is the same as the accepted answer, except this posting gives a very helpful step-by-step example of how to do it. (The example should be shortened, though. There's no need to show creation of an example old bucket and using a suffix variable.) The explanation part of this answer doesn't satisfy me, though. It says the lack of folders in S3 is why this awkward procedure is required. Since the original question didn't mention folders, I don't understand how that explains the inability to rename S3 buckets.
Tried this method and it appeared to work, but for some weird reason I can't view any of the items (images). I can browse through the items on the S3 dashboard, but can't view them via a URL or download them. Any idea why? Permissions seem to be identical. Is there some special permission to look out for?
Note that, as far as I'm aware of, when copying objects from one bucket to another, it is currently not possible to preserve their history. That is, you cannot copy an object with all its versions together with their creation date, in case versioning was enabled in the source bucket.
@duality_ could you add a note to your answer about permissions, and mention that you can copy files with --acl bucket-owner-full-control (as in that answer)?
Note that you need to specify region as well, otherwise your new buckets will be in US East (North Virginia). E.g. aws --region ap-southeast-2 s3 mb s3://new-bucket
liferacer

I think the only way is to create a new bucket with the correct name and then copy all your objects from the old bucket to the new bucket. You can do it using the AWS CLI.


@Tashows pavitran was asking about chaRges, not chaNges. As far as I know there are indeed charges for copying bucket items; I believe it costs 1 GET and 1 PUT operation for each item.
@Tashows Actually there's an entry for COPY operations in the S3 pricing table; it costs the same as a PUT (so there's no extra GET cost).
Note you can also cut and paste using the web console, for people who don't want to do this via CLI.
@pavitran If you have objects in glacier or deep archive, notice that there is a minimal time for each object, and if you delete the objects from the old bucket before the minimal time (90 or 180 days), you will be charged for the whole time. Therefore, it may cost you more if you move objects to a new bucket and then delete the old bucket.
You can create a new bucket with a copy of the configuration from any of your current buckets. Simply click "Create bucket", name the bucket, then click "Choose bucket" under the "Copy settings from existing bucket" section.
Richard A Quadling

A later version of the AWS CLI toolkit apparently added the mv option.

$ aws --version
aws-cli/1.15.30 Python/3.6.5 Darwin/17.6.0 botocore/1.10.30

I'm renaming buckets using the following command:

aws s3 mv s3://old-bucket s3://new-bucket --recursive
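As the comments note, the destination bucket must exist before running mv, and the emptied source bucket is left behind afterwards. A complete "rename" with this approach could be sketched as follows; the `rename_via_mv` helper, the bucket names, and the region are hypothetical placeholders of mine, not part of the original answer:

```shell
#!/usr/bin/env bash
# rename_via_mv OLD NEW REGION -- all three arguments are placeholders.
rename_via_mv() {
    local old_bucket="$1" new_bucket="$2" region="$3"

    # The target bucket must exist before mv can copy into it.
    aws --region "$region" s3 mb "s3://$new_bucket"
    aws s3 mv "s3://$old_bucket" "s3://$new_bucket" --recursive
    # mv empties the old bucket but does not delete it.
    aws s3api delete-bucket --bucket "$old_bucket" --region "$region"
}
```

Since mv copies and then deletes each object, it is not safer than the sync-based approach; if the old bucket is still in use, verify the new bucket before the final delete-bucket step.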

This worked for me. However, it is important to note that new-bucket must be created before running the command. Also, old-bucket will then be empty but NOT deleted. To delete it after transferring all the files, use the following command (without the angle brackets): aws s3api delete-bucket --bucket <old-bucket> --region <region id>
If the old bucket is in use anywhere, it is obviously good practice to copy the bucket, test with the new destination, and only then delete the old bucket. aws s3 mv actually copies and deletes, so the financial costs should be the same (I think).
Note that the mv command has this warning: This action creates a copy of the object with updated settings and a new last-modified date in the specified location, and then deletes the original object. Losing the last-modified date might be relevant.