
Amazon S3 direct file upload from client browser - private key disclosure

I'm implementing direct file upload from the client machine to Amazon S3 via the REST API using only JavaScript, without any server-side code. All works fine, but one thing worries me...

When I send a request to the Amazon S3 REST API, I need to sign the request and put a signature into the Authorization header. To create a signature, I must use my secret key. But everything happens on the client side, so the secret key can easily be revealed from the page source (even if I obfuscate/encrypt my sources).

How can I handle this? And is it a problem at all? Maybe I can limit a specific key's usage to REST API calls from a specific CORS origin, to only the PUT and POST methods, or to only S3 and a specific bucket? Are there other authentication methods?

A "serverless" solution is ideal, but I can consider involving some server-side processing, excluding uploading the file to my server and then sending it on to S3.

Very simple: do not store any secrets client-side. You will need to involve a server to sign the request.
You'll also find that signing and base-64 encoding these requests is much easier server-side. It doesn't seem unreasonable to involve a server here at all. I can understand not wanting to send all of the file bytes to a server and then up to S3, but there's very little benefit to signing the requests client-side, especially since that will be a bit challenging and potentially slow to do client-side (in JavaScript).
It's 2016; as serverless architecture has become quite popular, uploading files directly to S3 is possible with the help of AWS Lambda. See my answer to a similar question: stackoverflow.com/a/40828683/2504317. Basically you'd have a Lambda function as an API that signs an uploadable URL for each file, and your client-side JavaScript just does an HTTP PUT to the pre-signed URL. I've written a Vue component that does this; the S3 upload code is library-agnostic, have a look and get the idea.
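
A minimal sketch of that pre-signed URL approach, assuming the AWS SDK for JavaScript (v2) in a Lambda handler behind API Gateway; the bucket name and the filename query parameter are placeholders:

// Lambda: return a pre-signed PUT URL the browser can upload to directly.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
    const url = s3.getSignedUrl('putObject', {
        Bucket: 'my-upload-bucket',                  // placeholder bucket name
        Key: event.queryStringParameters.filename,   // hypothetical query param
        Expires: 300                                 // URL is valid for 5 minutes
    });
    return { statusCode: 200, body: JSON.stringify({ url }) };
};

// Browser side: PUT the file bytes straight to S3 with the returned URL, e.g.
//   fetch(url, { method: 'PUT', body: file });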
Another third-party option for HTTP/S POST uploads to any S3 bucket: JS3Upload, pure HTML5: jfileupload.com/products/js3upload-html5/index.html

secretmike

I think what you want is Browser-Based Uploads Using POST.

Basically, you do need server-side code, but all it does is generate signed policies. Once the client-side code has the signed policy, it can upload using POST directly to S3 without the data going through your server.

Here's the official doc links:

Diagram: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html

Example code: http://docs.aws.amazon.com/AmazonS3/latest/dev/HTTPPOSTExamples.html

The signed policy would go in your html in a form like this:

<html>
  <head>
    ...
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    ...
  </head>
  <body>
  ...
  <form action="http://johnsmith.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
    Key to upload: <input type="text" name="key" value="user/eric/" /><br />
    <input type="hidden" name="acl" value="public-read" />
    <input type="hidden" name="success_action_redirect" value="http://johnsmith.s3.amazonaws.com/successful_upload.html" />
    Content-Type: <input type="text" name="Content-Type" value="image/jpeg" /><br />
    <input type="hidden" name="x-amz-meta-uuid" value="14365123651274" />
    Tags for File: <input type="text" name="x-amz-meta-tag" value="" /><br />
    <input type="hidden" name="AWSAccessKeyId" value="AKIAIOSFODNN7EXAMPLE" />
    <input type="hidden" name="Policy" value="POLICY" />
    <input type="hidden" name="Signature" value="SIGNATURE" />
    File: <input type="file" name="file" /> <br />
    <!-- The elements after this will be ignored -->
    <input type="submit" name="submit" value="Upload to Amazon S3" />
  </form>
  ...
</html>

Notice the FORM action is sending the file directly to S3 - not via your server.

Every time one of your users wants to upload a file, you would create the POLICY and SIGNATURE on your server. You return the page to the user's browser. The user can then upload a file directly to S3 without going through your server.

When you sign the policy, you typically make the policy expire after a few minutes. This forces your users to talk to your server before uploading. This lets you monitor and limit uploads if you desire.

The only data going to or from your server is the signed policy and its signature. Your secret keys stay secret on the server.
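
For illustration, a minimal Node sketch of the server-side signing step that produces the POLICY and SIGNATURE placeholders above (Signature Version 2, matching the form's AWSAccessKeyId field; see the comment below about v4). The key prefix, conditions and expiry are assumptions:

const crypto = require('crypto');

function signS3PostPolicy(secretKey) {
    // Expire the policy after a few minutes, forcing clients to ask again.
    const policy = {
        expiration: new Date(Date.now() + 5 * 60 * 1000).toISOString(),
        conditions: [
            { bucket: 'johnsmith' },
            ['starts-with', '$key', 'user/eric/'],
            { acl: 'public-read' },
            ['starts-with', '$Content-Type', 'image/']
        ]
    };

    // Base64-encode the policy, then HMAC-SHA1 it with the secret key (SigV2).
    const policyBase64 = Buffer.from(JSON.stringify(policy)).toString('base64');
    const signature = crypto.createHmac('sha1', secretKey)
        .update(policyBase64)
        .digest('base64');

    return { policy: policyBase64, signature: signature };
}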


please note that this uses Signature v2 which will soon be replaced by v4: docs.aws.amazon.com/AmazonS3/latest/API/…
Make very sure to add ${filename} to the key name, so for the above example, user/eric/${filename} instead of just user/eric. If user/eric is an already existing folder, the upload will silently fail (you will even be redirected to the success_action_redirect) and the uploaded content will not be there. Just spent hours debugging this thinking it was a permission issue.
@secretmike If you received a timeout from doing this method, how would you recommend working around that?
@Trip Since the browser is sending the file to S3, you'll need to detect the timeout in Javascript and initiate a retry yourself.
@secretmike That smells like an infinite loop, since the timeout will recur indefinitely for any file over ~10 MB.
Joomler

You can do this with AWS S3 and Cognito; try the link here:

http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-examples.html#Amazon_S3

Also try this code.

Just change the Region, IdentityPoolId and your bucket name:

AWS S3 File Upload

Github
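
A minimal sketch of what that code does, assuming the AWS SDK for JavaScript (v2) is loaded in the page; the region, IdentityPoolId and bucket name below are placeholders:

// Unauthenticated Cognito credentials, scoped by the identity pool's IAM role.
AWS.config.region = 'us-east-1'; // placeholder region
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-east-1:xxxx-xxxx' // placeholder identity pool ID
});

const s3 = new AWS.S3({ params: { Bucket: 'my-upload-bucket' } }); // placeholder

// Upload the file chosen in an <input type="file"> element.
function uploadFile(file) {
    s3.upload({ Key: file.name, Body: file, ContentType: file.type },
        function (err, data) {
            if (err) { console.error('Upload failed:', err); return; }
            console.log('Uploaded to', data.Location);
        });
}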


Does this support multiple images?
@user2722667 yes it does.
@Joomler Hi, thanks, but I am facing this issue on Firefox: RequestTimeout - Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed - and the file does not upload to S3. Can you please help me figure out how to fix this issue? Thanks.
@usama Can you please open an issue on GitHub? The issue is not clear to me.
This should be the correct answer @Olegas
BraveNewCurrency

You're saying you want a "serverless" solution. But that means you have no ability to put any of "your" code in the loop. (NOTE: Once you give your code to a client, it's "their" code now.) Locking down CORS is not going to help: people can easily write a non-web-based tool (or a web-based proxy) that sends whatever Origin header your setup expects, and abuse your system.

The big problem is that you can't differentiate between the different users. You can't allow one user to list/access his files, but prevent others from doing so. If you detect abuse, there is nothing you can do about it except change the key. (Which the attacker can presumably just get again.)

Your best bet is to create an "IAM user" with a key for your JavaScript client. Only give it write access to just one bucket. (But ideally, do not enable the ListBucket operation; that will make the bucket more attractive to attackers.)

If you had a server (even a simple micro instance at $20/month), you could sign the keys on your server while monitoring/preventing abuse in realtime. Without a server, the best you can do is periodically monitor for abuse after-the-fact. Here's what I would do:

1) Periodically rotate the keys for that IAM user: every night, generate a new key for that IAM user and replace the oldest key. Since there are 2 keys, each key will be valid for 2 days (see the rotation sketch after this list).

2) Enable S3 logging, and download the logs every hour. Set alerts on "too many uploads" and "too many downloads". You will want to check both the total file size and the number of files uploaded, and you will want to monitor both the global totals and the per-IP-address totals (with a lower threshold).

These checks can be done "serverless" because you can run them on your desktop. (I.e. S3 does all the work; these processes are just there to alert you to abuse of your S3 bucket so you don't get a giant AWS bill at the end of the month.)
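
A minimal sketch of the nightly rotation described in step 1, using the AWS SDK for JavaScript (v2); the IAM user name is a placeholder and the scheduling (cron or similar) is left out:

const AWS = require('aws-sdk');
const iam = new AWS.IAM();
const userName = 'js-upload-user'; // placeholder IAM user name

async function rotateKey() {
    // List existing keys; IAM allows at most two per user.
    const { AccessKeyMetadata } = await iam.listAccessKeys({ UserName: userName }).promise();

    if (AccessKeyMetadata.length === 2) {
        // Delete the oldest key, so each key lives about 2 days.
        const oldest = AccessKeyMetadata
            .sort((a, b) => a.CreateDate - b.CreateDate)[0];
        await iam.deleteAccessKey({ UserName: userName, AccessKeyId: oldest.AccessKeyId }).promise();
    }

    // Create the replacement key; publishing it to the client is up to you.
    const { AccessKey } = await iam.createAccessKey({ UserName: userName }).promise();
    console.log('New key:', AccessKey.AccessKeyId);
}

rotateKey().catch(console.error);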


Man, I forgot how complicated things were before Lambda.
RajeevJ

Adding more info to the accepted answer: you can refer to my blog to see a running version of the code, using AWS Signature Version 4.

I'll summarize here:

As soon as the user selects a file to be uploaded, do the following:

1. Make a call to the web server to initiate a service that generates the required params.

2. In this service, make a call to the AWS IAM service to get temporary credentials.

3. Once you have the credentials, create a bucket policy (a base64-encoded string), then sign the bucket policy with the temporary secret access key to generate the final signature.

4. Send the necessary parameters back to the UI.

5. Once the UI receives them, create an HTML form object, set the required params and POST it (a sketch follows below).

For detailed info, please refer to https://wordpress1763.wordpress.com/2016/10/03/browser-based-upload-aws-signature-version-4/
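
A sketch of the last step on the client side, POSTing the server-returned params with XMLHttpRequest; the field names follow the SigV4 POST convention, and the endpoint and param object shape are assumptions:

// `params` is the object the server sent back (policy, signature, etc.).
function postToS3(params, file) {
    const form = new FormData();
    form.append('key', params.key);
    form.append('policy', params.policy);
    form.append('x-amz-algorithm', 'AWS4-HMAC-SHA256');
    form.append('x-amz-credential', params.credential);
    form.append('x-amz-date', params.date);
    form.append('x-amz-signature', params.signature);
    form.append('file', file); // the file field must come last

    const xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://my-upload-bucket.s3.amazonaws.com/'); // placeholder
    xhr.onload = () => console.log('S3 responded with', xhr.status);
    xhr.send(form);
}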


I spent an entire day trying to figure this out in JavaScript, and this answer tells me exactly how to do it using XMLHttpRequest. I'm very surprised you got downvoted. The OP asked for JavaScript and got forms in the recommended answers. Good grief. Thanks for this answer!
BTW superagent has serious CORS issues, so XMLHttpRequest seems to be the only reasonable way to do this right now.
OlliM

To create a signature, I must use my secret key. But everything happens on the client side, so the secret key can easily be revealed from the page source (even if I obfuscate/encrypt my sources).

This is where you have misunderstood. The very reason digital signatures are used is so that you can verify something as correct without revealing your secret key. In this case the digital signature is used to prevent the user from modifying the policy you set for the form post.

Digital signatures such as the one here are used for security all around the web. If someone (NSA?) really were able to break them, they would have much bigger targets than your S3 bucket :)
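
A tiny Node sketch to illustrate the point: the signature matches only the exact policy that was signed, so a client that modifies the policy cannot produce a valid signature without the secret key (all values below are placeholders):

const crypto = require('crypto');

const sign = (secret, policy) =>
    crypto.createHmac('sha256', secret).update(policy).digest('hex');

const secret = 'server-side-secret';              // never leaves the server
const policy = '{"expiration":"...","conditions":[]}';
const signature = sign(secret, policy);           // sent to the browser with the policy

// S3 recomputes the signature; a tampered policy no longer matches.
const tampered = policy.replace('conditions', 'Conditions');
console.log(sign(secret, policy) === signature);   // true
console.log(sign(secret, tampered) === signature); // false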


But a robot may try to upload unlimited files quickly. Can I set a policy of max files per bucket?
Nilesh Pawar

I have given simple code to upload files from the browser with JavaScript to AWS S3 and to list all the files in an S3 bucket.

Steps:

1. To learn how to create an IdentityPoolId, see http://docs.aws.amazon.com/cognito/latest/developerguide/identity-pools.html

2. Go to the S3 console page, open the CORS configuration from the bucket properties, and write the following XML into it (typical example values shown for the flattened GET PUT DELETE HEAD * list):

<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

3. Create an HTML file containing the code (a sketch follows below), change the credentials, open the file in a browser and enjoy.
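
The linked HTML file isn't reproduced here, but a minimal sketch of the listing part, assuming the AWS SDK for JavaScript (v2) and placeholder credentials, looks like this:

// Configure unauthenticated Cognito credentials (placeholders).
AWS.config.update({
    region: 'us-east-1',
    credentials: new AWS.CognitoIdentityCredentials({
        IdentityPoolId: 'us-east-1:xxxx-xxxx'
    })
});

const s3 = new AWS.S3({ params: { Bucket: 'my-bucket-name' } }); // placeholder

// List every object currently in the bucket.
s3.listObjects({}, function (err, data) {
    if (err) return console.error(err);
    data.Contents.forEach(obj => console.log(obj.Key, obj.Size));
});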


Wouldn't anyone be able to use my "IdentityPoolId" for uploading files to my S3 bucket? How is this solution preventing any 3rd party from just copying my "IdentityPoolId" and uploading lots of files to my S3 bucket?
@Sahil You can prevent data/file uploads from other domains by setting appropriate CORS settings on the S3 bucket. So even if anybody accessed your identity pool ID, they can't manipulate your S3 bucket files.
Ruediger Jungbeck

If you don't have any server-side code, your security depends on the security of access to your JavaScript code on the client side (i.e. everybody who has the code could upload something).

So I would recommend simply creating a special S3 bucket which is publicly writable (but not readable), so you don't need any signed components on the client side.

The bucket name (e.g. a GUID) will be your only defense against malicious uploads (but a potential attacker could not use your bucket to transfer data, because it is write-only for them).
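
A sketch of what such a write-only bucket policy might look like (the GUID-style bucket name is a placeholder; anonymous users get s3:PutObject and nothing else):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAnonymousWriteOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::1c1bcf94-bcf4-4b62-8c93-example/*"
        }
    ]
}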


Samir Patel

Here is how you can generate a policy document using Node and Serverless:

"use strict";

const uniqid = require('uniqid');
const crypto = require('crypto');

class Token {

    /**
     * @param {Object} config SSM Parameter store JSON config
     */
    constructor(config) {

        // Ensure some required properties are set in the SSM configuration object
        this.constructor._validateConfig(config);

        this.region = config.region; // AWS region e.g. us-west-2
        this.bucket = config.bucket; // Bucket name only
        this.bucketAcl = config.bucketAcl; // Bucket access policy [private, public-read]
        this.accessKey = config.accessKey; // Access key
        this.secretKey = config.secretKey; // Access key secret

        // Create a really unique object key (no folder prefix is added here)
        this.key = uniqid() + uniqid.process();

        // The policy requires the date to be this format e.g. 20181109
        const date = new Date().toISOString();
        this.dateString = date.substr(0, 4) + date.substr(5, 2) + date.substr(8, 2);

        // The number of minutes the policy will need to be used by before it expires
        this.policyExpireMinutes = 15;

        // Hash algorithm used for the HMAC signing steps below
        this.encryptionAlgorithm = 'sha256';

        // Signing algorithm identifier the client sends in the request to S3
        this.clientEncryptionAlgorithm = 'AWS4-HMAC-SHA256';
    }

    /**
     * Returns the parameters that FE will use to directly upload to s3
     *
     * @returns {Object}
     */
    getS3FormParameters() {
        const credentialPath = this._amazonCredentialPath();
        const policy = this._s3UploadPolicy(credentialPath);
        const policyBase64 = Buffer.from(JSON.stringify(policy)).toString('base64');
        const signature = this._s3UploadSignature(policyBase64);

        return {
            'key': this.key,
            'acl': this.bucketAcl,
            'success_action_status': '201',
            'policy': policyBase64,
            'endpoint': "https://" + this.bucket + ".s3-accelerate.amazonaws.com",
            'x-amz-algorithm': this.clientEncryptionAlgorithm,
            'x-amz-credential': credentialPath,
            'x-amz-date': this.dateString + 'T000000Z',
            'x-amz-signature': signature
        }
    }

    /**
     * Ensure all required properties are set in SSM Parameter Store Config
     *
     * @param {Object} config
     * @private
     */
    static _validateConfig(config) {
        if (!config.hasOwnProperty('bucket')) {
            throw new Error("'bucket' is required in SSM Parameter Store Config");
        }
        if (!config.hasOwnProperty('region')) {
            throw new Error("'region' is required in SSM Parameter Store Config");
        }
        if (!config.hasOwnProperty('accessKey')) {
            throw new Error("'accessKey' is required in SSM Parameter Store Config");
        }
        if (!config.hasOwnProperty('secretKey')) {
            throw new Error("'secretKey' is required in SSM Parameter Store Config");
        }
    }

    /**
     * Create a special string called a credentials path used in constructing an upload policy
     *
     * @returns {String}
     * @private
     */
    _amazonCredentialPath() {
        return this.accessKey + '/' + this.dateString + '/' + this.region + '/s3/aws4_request';
    }

    /**
     * Create an upload policy
     *
     * @param {String} credentialPath
     *
     * @returns {{expiration: string, conditions: *[]}}
     * @private
     */
    _s3UploadPolicy(credentialPath) {
        return {
            expiration: this._getPolicyExpirationISODate(),
            conditions: [
                {bucket: this.bucket},
                {key: this.key},
                {acl: this.bucketAcl},
                {success_action_status: "201"},
                {'x-amz-algorithm': 'AWS4-HMAC-SHA256'},
                {'x-amz-credential': credentialPath},
                {'x-amz-date': this.dateString + 'T000000Z'}
            ],
        }
    }

    /**
     * ISO formatted date string of when the policy will expire
     *
     * @returns {String}
     * @private
     */
    _getPolicyExpirationISODate() {
        return new Date((new Date).getTime() + (this.policyExpireMinutes * 60 * 1000)).toISOString();
    }

    /**
     * Compute an HMAC digest of a string with the given key
     *
     * @param {String} key
     * @param {String} string
     *
     * @returns {String}
     * @private
     */
    _encryptHmac(key, string) {
        const hmac = crypto.createHmac(
            this.encryptionAlgorithm, key
        );
        hmac.end(string);

        return hmac.read();
    }

    /**
     * Create an upload signature from provided params
     * https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html#signing-request-intro
     *
     * @param policyBase64
     *
     * @returns {String}
     * @private
     */
    _s3UploadSignature(policyBase64) {
        const dateKey = this._encryptHmac('AWS4' + this.secretKey, this.dateString);
        const dateRegionKey = this._encryptHmac(dateKey, this.region);
        const dateRegionServiceKey = this._encryptHmac(dateRegionKey, 's3');
        const signingKey = this._encryptHmac(dateRegionServiceKey, 'aws4_request');

        return this._encryptHmac(signingKey, policyBase64).toString('hex');
    }
}

module.exports = Token;

The configuration object used is stored in SSM Parameter Store and looks like this

{
    "bucket": "my-bucket-name",
    "region": "us-west-2",
    "bucketAcl": "private",
    "accessKey": "MY_ACCESS_KEY",
    "secretKey": "MY_SECRET_ACCESS_KEY"
}
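
A hypothetical usage sketch in a Lambda handler, reading that config from SSM Parameter Store (the parameter name is a placeholder) and returning the form fields to the browser:

const AWS = require('aws-sdk');
const Token = require('./token'); // the class above

const ssm = new AWS.SSM();

exports.handler = async () => {
    // Fetch and decrypt the JSON config shown above.
    const result = await ssm.getParameter({
        Name: '/upload/s3-config', // placeholder parameter name
        WithDecryption: true
    }).promise();

    const config = JSON.parse(result.Parameter.Value);
    const token = new Token(config);

    return {
        statusCode: 200,
        body: JSON.stringify(token.getS3FormParameters())
    };
};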

Jason

If you are willing to use a 3rd-party service, auth0.com supports this integration. The Auth0 service exchanges a 3rd-party SSO authentication for an AWS temporary session token with limited permissions.

See: https://github.com/auth0-samples/auth0-s3-sample/
and the auth0 documentation.
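
Under the hood, this kind of exchange is built on AWS STS web identity federation; a minimal sketch of the token exchange, with a placeholder role ARN and an identity token obtained from the SSO provider:

const AWS = require('aws-sdk');
const sts = new AWS.STS();

// Exchange a federated identity token for short-lived, scoped AWS credentials.
function getTemporaryCredentials(idToken, callback) {
    sts.assumeRoleWithWebIdentity({
        RoleArn: 'arn:aws:iam::123456789012:role/s3-upload-role', // placeholder
        RoleSessionName: 'browser-upload',
        WebIdentityToken: idToken,
        DurationSeconds: 900 // 15-minute session
    }, callback);
}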


As I understand it, we now have Cognito for that?
Le Dong Thuc

I created a UI based on VueJS and Go to upload binaries to AWS Secrets Manager: https://github.com/ledongthuc/awssecretsmanagerui

It makes it easier to upload a secured file and update text data. You can use it as a reference if you want.