
OAuth secrets in mobile apps

When using the OAuth protocol, you need a secret string obtained from the service you want to delegate to. If you are doing this in a web app, you can simply store the secret in your database or on the file system, but what is the best way to handle it in a mobile app (or a desktop app for that matter)?

Storing the string in the app is obviously not good, as someone could easily find it and abuse it.

Another approach would be to store it on your server, and have the app fetch it on every run, never storing it on the phone. This is almost as bad, because you have to include the URL in the app.

The only workable solution I can come up with is to first obtain the Access Token as normal (preferably using a web view inside the app), and then route all further communication through our server, which would append the secret to the request data and communicate with the provider. Then again, I'm a security noob, so I'd really like to hear some knowledgeable people's opinions on this. It doesn't seem to me that most apps are going to these lengths to guarantee security (for example, Facebook Connect seems to assume that you put the secret into a string right in your app).

Another thing: I don't believe the secret is involved in initially requesting the Access Token, so that could be done without involving our own server. Am I correct?

Sorry if I don't get the obvious, but what is the problem with storing the codes in the application's database? Those tokens are generated and stored after the user has authenticated their account, so it should be safe to assume that said user wants the mobile device to store them in order to keep access.
Even after the user has authorized you to access their account (on Twitter, say), you still have to use a secret that you obtained from the service you're trying to access. This secret is used in all communication with their server, together with the authentication key and some other keys. So yes, you can store the access key, but the secret shouldn't be stored, because it could be used with any authentication key to abuse the service. Again, I would be happy to be corrected by people who know more about this.
OAuth offers an authentication method that protects the user's original login data. To make that possible, a new unique login combination is generated that only works together with the application's unique key combination. The big benefit over storing the user's login data is that those credentials stay completely safe after the first authorization, and if anything is ever compromised the user can simply revoke the authorization. And of course not saving the secret wouldn't make sense, as the user would then need to re-authenticate (which is not what the user wants when giving the application access).
@poke The authentication key that is obtained when the user approves your app with the provider should be saved, but the secret token that you received from the provider before releasing the app should not (in the case of a desktop or mobile app; if it's a web app you can obviously store the key on the server, as stated in the question).
As per my understanding of OAuth: in the case of a desktop app it's very easy to sniff/monitor the HTTP/HTTPS traffic with tools like this: ieinspector.com/httpanalyzer/index.html. Hence your token and token secret can both be found very easily. So the only protection is your consumer secret. Now, if you store the secret inside the app and somebody is able to find it, it becomes child's play for any other app to impersonate your app. Correct me if I am wrong.

noamtm

Yes, this is an issue with the OAuth design that we are facing ourselves. We opted to proxy all calls through our own server. OAuth wasn't entirely fleshed out with respect to desktop apps. There is no perfect solution to the issue that I've found without changing OAuth.

If you think about it and ask why we have secrets at all, it's mostly for provisioning and disabling apps. If our secret is compromised, then the provider can only really revoke the entire app. Since we have to embed our secret in the desktop app, we are sorta screwed.

The solution is to have a different secret for each desktop app. OAuth doesn't make this concept easy. One way is to have users go and create a secret on their own and enter the key into your desktop app themselves (some Facebook apps did something similar for a long time, having users go and create their own Facebook keys to set up their custom quizzes and such). It's not a great experience for the user.

I'm working on a proposal for a delegation system for OAuth. The concept is that, using the secret key we get from our provider, we could issue our own delegated secret to each of our desktop clients (one per desktop install, basically), and then during the auth process we send that key over to the top-level provider, which calls back to us and re-validates it with us. That way we can revoke the individual secrets we issue to each desktop client. (This borrows a lot from how SSL works.) This entire system would also be perfect for value-add web services that pass calls on to a third-party web service.
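Roughly, the idea could look like this (a sketch only: the HMAC derivation over a per-install ID and the revocation list are my own assumptions, not part of any OAuth spec):

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Installs we have explicitly revoked; in practice this would live in a database.
val revokedInstalls = mutableSetOf<String>()

// Derive a per-install delegated secret from our master consumer secret.
fun delegatedSecret(masterSecret: ByteArray, installId: String): String {
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(masterSecret, "HmacSHA256"))
    return mac.doFinal(installId.toByteArray()).joinToString("") { "%02x".format(it) }
}

// Called when the top-level provider (or our own proxy) asks us to re-validate a client.
fun isValidDelegate(masterSecret: ByteArray, installId: String, presented: String): Boolean =
    installId !in revokedInstalls && delegatedSecret(masterSecret, installId) == presented
```

Each desktop client would present its install ID plus its delegated secret, and we could kill a single install simply by adding its ID to the revocation list, without touching the master secret.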

The process could also be done without delegation verification callbacks if the top-level provider offers an API to generate and revoke new delegated secrets. Facebook is doing something similar by allowing Facebook apps to let users create sub-apps.

There are some talks about the issue online:

http://blog.atebits.com/2009/02/fixing-oauth/ http://groups.google.com/group/twitter-development-talk/browse_thread/thread/629b03475a3d78a1/de1071bf4b820c14#de1071bf4b820c14

Twitter and Yammer's solution is an authentication PIN solution: https://dev.twitter.com/oauth/pin-based https://www.yammer.com/api_oauth_security_addendum.html


This is very interesting, although it confirms what I feared: that OAuth is not so great for desktop/mobile apps. Of course, an attacker would have to first get the secret and then also sniff someone's credentials, so it would take quite some determination. The PIN solution is OK for desktop but too heavy-handed for mobile, IMO.
How would your proposed scheme help value-add web services, since this problem doesn't apply to them? Also, I don't see how it would work with the provider generating new secrets, since you would need a "master secret" to even request those new secrets, so you would at least need one call to your own server (which holds the main secret). But that is of course better than routing all traffic through your own server. Clarification most welcome! And please update here as your proposal progresses!
Just curious: how do you determine that the thing making a call to your proxy server is legitimate?
In response to notJim: the primary risk in allowing your consumer secret to get out is that malicious (or foolish) applications can be developed using it, tarnishing your reputation and increasing your risk of having your legitimate application shut down for API abuse/misuse. By proxying all calls that require your secret through a web application you control, you're back in a position where you can watch for patterns of abuse and revoke access on the user or access token level before the API you're consuming decides to shut down your entire service.
I agree with quasistoic here; you will need to use an SSL-enabled browser to deal with the OAuth call. This is a good thing for a few reasons, including easily managing any security updates in the future, and nothing in the actual application will need to be updated over time. Zac points out Twitter's proposed PIN solution, which I had thought of as well, because you cannot trust the application to securely obtain the code. I suggest using a nonce with modern encryption along with the PIN and secret to proxy the requests through the web server.
Dick Hardt

With OAuth 2.0, you can store the secret on the server. Use the server to acquire an access token that you then move to the app, and the app can make calls to the resource directly.
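For the OAuth 2.0 case, a minimal sketch of that server-side exchange (the token endpoint, client ID, and redirect URI are placeholders; the parameter names come from RFC 6749):

```kotlin
import java.net.URI
import java.net.URLEncoder
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Runs on your server, never in the app: the client secret stays in server config.
fun exchangeCodeForToken(authorizationCode: String): String {
    val clientSecret = System.getenv("OAUTH_CLIENT_SECRET")
        ?: error("client secret must be provided via server configuration")
    fun enc(s: String) = URLEncoder.encode(s, "UTF-8")

    // Standard authorization-code grant body, with the secret appended server-side.
    val form = listOf(
        "grant_type" to "authorization_code",
        "code" to authorizationCode,
        "client_id" to "my-client-id",
        "client_secret" to clientSecret,
        "redirect_uri" to "myapp://oauth/callback"
    ).joinToString("&") { (k, v) -> "${enc(k)}=${enc(v)}" }

    val request = HttpRequest.newBuilder(URI.create("https://provider.example/oauth/token"))
        .header("Content-Type", "application/x-www-form-urlencoded")
        .POST(HttpRequest.BodyPublishers.ofString(form))
        .build()

    // The JSON response contains the access token; hand that (and only that) back to the app.
    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()
}
```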

With OAuth 1.0 (Twitter), the secret is required to make API calls. Proxying calls through the server is the only way to ensure the secret is not compromised.

Both require some mechanism by which your server component knows it is your client calling it. This tends to be done at installation time, using a platform-specific mechanism to get an app ID of some kind into the call to your server.

(I am the editor of the OAuth 2.0 spec)


Can you elaborate on the "platform specific mechanism to get an app id of some kind"? How is the server component to verify the identity of the client? I think this can be done with client provisioning. For example, deploy a new and unique SSL cert to each client. Is that what you mean? If it is more complex than this, maybe you can refer to a more in-depth writeup?
I recall some security people talking about how this could be done. There is a call to the OS that returns a signed token that you can then send to your server and verify. Sorry, I don't have the specifics. It is an area that could use some good examples.
@DickHardt But in this scenario, how do you ensure that the mobile application is really your app and not a fraudulent one?
Felixyz

One solution could be to hard code the OAuth secret into the code, but not as a plain string. Obfuscate it in some way - split it into segments, shift characters by an offset, rotate it - do any or all of these things. A cracker can analyse your byte code and find strings, but the obfuscation code might be hard to figure out.
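For illustration, a tiny Kotlin sketch of what that reassembly could look like (the segment values and the offset here are made up, and this only defeats a casual strings dump):

```kotlin
// Pieces of the secret, shifted by +1 before embedding; in a real app you would
// scatter them across different classes rather than keeping them side by side.
private val seg1 = charArrayOf('k', '9', 'x')
private val seg2 = charArrayOf('Q', '2', 'm')

fun consumerSecret(): String {
    val shifted = seg1 + seg2
    // Undo the character offset that was applied before embedding.
    return String(CharArray(shifted.size) { i -> shifted[i] - 1 })
}
```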

It's not a foolproof solution, but a cheap one.

Depending on the value of the exploit, some genius crackers may go to greater lengths to find your secret code. You need to weigh the factors: the cost of the previously mentioned server-side solution, the incentive for crackers to spend more effort on finding your secret code, and the complexity of the obfuscation you can implement.


Yes I think this is reasonable. It would take a lot of determination for someone to first extract the consumer secret and then snatch people's credentials to do something mean. For high-profile apps, I'm not sure this would be enough, but for an average app I think you're right that you have to balance implementation time against a pretty minor security threat.
All it takes is for one user to exert the effort and then publish or share your secret. Once your secret is out, the risk of your service being shut down completely for abuse skyrockets, and it's completely out of your control.
Obfuscation is not security at all. This is worse than no security at all, because it gives the developer a false sense of security. en.wikipedia.org/wiki/Security_through_obscurity
"Obfuscation is not security at all. This is worse than no security at all, because it gives the developer a false sense of security." Nonsense. Nobody is saying that obfuscation makes for good security. But if I'm going distribute an OAuth secret with my apk, it's surely better to obfuscate than not. Obfuscation is what Google also recommends when storing keys/secrets in-app. If nothing else, these measures keep casual hackers at bay, which is better than nothing. Blanket statements like yours equates imperfect security with no security. That's simply not true. Imperfect is just imperfect.
Obfuscation does NOT help, because no matter how much shifting or encoding you do, you still construct the key together and use that to build your API request. It is fairly simple to dynamically hook APIs in the right places to dump out the request you're sending before even HTTPS encryption. So please, don't embed secret keys in your app unless there really is no possible alternative.
Gudradain

Do not store the secret inside the application.

You need to have a server that can be accessed by the application over HTTPS (obviously), and you store the secret on it.

When someone wants to log in via your mobile/desktop application, your application simply forwards the request to the server, which then appends the secret and sends it to the service provider. Your server can then tell your application whether it was successful or not.

Then, if you need to get any sensitive information from the service (Facebook, Google, Twitter, etc.), the application asks your server, and your server will give it to the application only if it is correctly connected.
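A minimal sketch of that setup using only the JDK's built-in HTTP server (the endpoint, the session header, and the provider URL are placeholders of my own); note that the provider's token and your secret never leave the server:

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Maps an app session to the provider access token your login handler obtained earlier.
val providerTokens = mutableMapOf<String, String>()
val upstreamClient: HttpClient = HttpClient.newHttpClient()

fun main() {
    val server = HttpServer.create(InetSocketAddress(8443), 0)
    server.createContext("/profile") { exchange ->
        val session = exchange.requestHeaders.getFirst("X-Session") ?: ""
        val providerToken = providerTokens[session]
        if (providerToken == null) {
            // Not correctly connected: the app gets nothing.
            exchange.sendResponseHeaders(401, -1L)
            exchange.close()
            return@createContext
        }
        // Call the provider with credentials the app never sees.
        val upstream = HttpRequest.newBuilder(URI.create("https://provider.example/api/profile"))
            .header("Authorization", "Bearer $providerToken")
            .GET()
            .build()
        val body = upstreamClient.send(upstream, HttpResponse.BodyHandlers.ofString()).body().toByteArray()
        exchange.sendResponseHeaders(200, body.size.toLong())
        exchange.responseBody.use { it.write(body) }
        exchange.close()
    }
    server.start()
}
```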

There is not really any option except storing it on a server. Nothing on the client side is secure.

Note

That said, this only protects you against a malicious client; it does not protect the client against a malicious you, nor the client against other malicious clients (phishing)...

OAuth is a much better protocol in the browser than on desktop/mobile.


Doesn't this make the hacker's life easier?! Because now, in order to access the server's resources, we technically just need the client ID, since the server will append the secret to the request anyway. Am I missing something?
@HudiIlfeld Yes, you are missing something: the client needs to log in to the server. As long as it is not logged in, the server won't return anything. One way to manage this is that after the credentials are sent the first time, the server returns an access token to the client, and the client then sends this access token with every future request. There are many options here.
@Gudradain I am not sure how your solution helps here, as all of that can be automated: 1) The client sends its client_id to the server. 2) The server returns an access token for the client to send in subsequent requests? Why exactly? But let's assume that is okay. 3) A hacker is now authenticated against the server and is still able to make any API/service requests he wants, still impersonating your app behind your proxy server. Am I missing something here?
@IvoPereira Putting the client secret in the application makes it easy to steal. Once someone has your client ID and client secret, they can impersonate the client. A varying amount of damage can be done if someone impersonates the client, depending on the app. If you want more information, I would suggest that you ask another question (not a comment).
@Gudradain I wasn't suggesting that exact flow either, for the reasons you mentioned; however, just using a proxy in the middle wouldn't solve the issue by itself, as it would open another free door to a bad actor. However, this might help mitigate the issue: medium.com/@benjamin.botto/… (Enhanced Architecture -> Security Considerations)
Community

There is a new extension to the Authorization Code Grant Type called Proof Key for Code Exchange (PKCE). With it, you don't need a client secret.

PKCE (RFC 7636) is a technique to secure public clients that don't use a client secret. It is primarily used by native and mobile apps, but the technique can be applied to any public client as well. It requires additional support by the authorization server, so it is only supported on certain providers.

from https://oauth.net/2/pkce/

For more information, you can read the full RFC 7636 or this short introduction.
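For the curious, the client-side part of PKCE is tiny; a sketch of the verifier/challenge generation as described in RFC 7636 (S256 method):

```kotlin
import java.security.MessageDigest
import java.security.SecureRandom
import java.util.Base64

// Returns (code_verifier, code_challenge): the challenge goes in the authorization
// request, and the verifier is sent later in the token request so the server can match them.
fun pkcePair(): Pair<String, String> {
    val randomBytes = ByteArray(32).also { SecureRandom().nextBytes(it) }
    val verifier = Base64.getUrlEncoder().withoutPadding().encodeToString(randomBytes)
    val digest = MessageDigest.getInstance("SHA-256").digest(verifier.toByteArray(Charsets.US_ASCII))
    val challenge = Base64.getUrlEncoder().withoutPadding().encodeToString(digest)
    return verifier to challenge
}
```

Since only the app instance that started the flow knows the verifier, an intercepted authorization code is useless on its own, which is what removes the need for a client secret.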


Beware that this can still lead to Client Impersonation: tools.ietf.org/html/rfc6749#section-10.2
LetMyPeopleCode

Here's something to think about. Google offers two methods of OAuth... for web apps, where you register the domain and generate a unique key, and for installed apps where you use the key "anonymous".

Maybe I glossed over something in the reading, but it seems that sharing your webapp's unique key with an installed app is probably more secure than using "anonymous" in the official installed apps method.


Joel

With OAuth 2.0 you can simply use the client-side flow to obtain an access token and then use this access token to authenticate all further requests. Then you don't need a secret at all.

A nice description of how to implement this can be found here: https://aaronparecki.com/articles/2012/07/29/1/oauth2-simplified#mobile-apps
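For reference, a sketch of the client-side request being described (the provider host, client ID, redirect URI, and scope below are placeholders):

```kotlin
import java.net.URLEncoder

// Builds the authorization URL for the client-side ("implicit") flow.
fun implicitFlowUrl(): String {
    fun enc(s: String) = URLEncoder.encode(s, "UTF-8")
    return "https://provider.example/oauth/authorize" +
        "?response_type=token" +                 // ask for an access token directly
        "&client_id=" + enc("my-client-id") +
        "&redirect_uri=" + enc("myapp://oauth/callback") +
        "&scope=" + enc("basic_profile")
}
```

After the user approves, the provider redirects back with the token in the URL fragment (e.g. myapp://oauth/callback#access_token=...&expires_in=3600), so there is never a secret to embed.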


Provided the service supports "the client side flow". Many do not, instead requiring the client ID and client secret in order to obtain this access token.
bpapa

I don't have a ton of experience with OAuth - but doesn't every request require not only the user's access token, but an application consumer key and secret as well? So, even if somebody steals a mobile device and tries to pull data off of it, they would need an application key and secret as well to be able to actually do anything.
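As I understand it, this is roughly where the key and secret come in with OAuth 1.0a: every request carries an HMAC-SHA1 signature, and the signing key is built from the consumer secret plus the token secret. A simplified sketch (the caller is assumed to have already collected the oauth_* parameters and any request parameters):

```kotlin
import java.net.URLEncoder
import java.util.Base64
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// RFC 3986 percent-encoding, as required by OAuth 1.0a signatures.
fun pct(s: String): String =
    URLEncoder.encode(s, "UTF-8").replace("+", "%20").replace("*", "%2A").replace("%7E", "~")

fun oauth1Signature(
    method: String, url: String, params: Map<String, String>,
    consumerSecret: String, tokenSecret: String
): String {
    // Normalize the parameters: encode, sort, and join into a single string.
    val paramString = params.entries
        .map { pct(it.key) to pct(it.value) }
        .sortedWith(compareBy({ it.first }, { it.second }))
        .joinToString("&") { "${it.first}=${it.second}" }
    val baseString = "${method.uppercase()}&${pct(url)}&${pct(paramString)}"
    // The consumer secret is half of the signing key -- this is why it must stay secret.
    val key = "${pct(consumerSecret)}&${pct(tokenSecret)}"
    val mac = Mac.getInstance("HmacSHA1")
    mac.init(SecretKeySpec(key.toByteArray(), "HmacSHA1"))
    return Base64.getEncoder().encodeToString(mac.doFinal(baseString.toByteArray()))
}
```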

I always thought the intention behind OAuth was so that every Tom, Dick, and Harry that had a mashup didn't have to store your Twitter credentials in the clear. I think it solves that problem pretty well despite its limitations. Also, it wasn't really designed with the iPhone in mind.


You are right, OAuth was mostly designed with web apps in mind, and I'm sure it works well for that. Yes, you need the consumer token and secret to sign each request, and the problem is where to store the secret. If someone steals the access key it's not a big deal, because it can be revoked, but if someone gets the consumer key, every copy of your app has been compromised.
OAuth 1 required signing each request. OAuth 2 only requires the access token. Both require the key and secret when acquiring a token.
Martin Bayly

I agree with Felixyz. OAuth, whilst better than Basic Auth, still has a long way to go to be a good solution for mobile apps. I've been playing with using OAuth to authenticate a mobile phone app to a Google App Engine app. The fact that you can't reliably manage the consumer secret on the mobile device means that the default is to use 'anonymous' access.

The Google App Engine OAuth implementation's browser authorization step takes you to a page that contains text like: "The site is requesting access to your Google Account for the product(s) listed below"

YourApp(yourapp.appspot.com) - not affiliated with Google

etc

It takes that from the domain/host name used in the callback URL that you supply, which can be anything on Android if you use a custom scheme to intercept the callback. So if you use 'anonymous' access, or your consumer secret is compromised, then anyone could write a consumer that fools the user into giving access to your GAE app.

The Google OAuth authorization page also contains lots of warnings, which have three levels of severity depending on whether you're using 'anonymous' access, a consumer secret, or public keys.

Pretty scary stuff for the average user who isn't technically savvy. I don't expect to have a high signup completion percentage with that kind of stuff in the way.

This blog post clarifies how consumer secrets don't really work with installed apps: http://hueniverse.com/2009/02/should-twitter-discontinue-their-basic-auth-api/


Hugo

Facebook doesn't implement OAuth strictly speaking (yet), but they have implemented a way for you not to embed your secret in your iPhone app: https://web.archive.org/web/20091223092924/http://wiki.developers.facebook.com/index.php/Session_Proxy

As for OAuth, yeah, the more I think about it, we are a bit stuffed. Maybe this will fix it.


wiki.developers.facebook.com is dead.
Maximvs

None of these solutions prevents a determined hacker from sniffing packets sent from their mobile device (or emulator) to view the client secret in the HTTP headers.

One solution could be to have a dynamic secret, made up of a timestamp encrypted with a private two-way encryption key and algorithm. The service then decrypts the secret and checks that the timestamp is within +/- 5 minutes.

In this way, even if the secret is compromised, the hacker will only be able to use it for a maximum of 5 minutes.
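A sketch of that idea, using an HMAC over the timestamp instead of two-way encryption (same freshness check, simpler to show; the shared key and the exact format are assumptions of mine):

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec
import kotlin.math.abs

// Client side: derive a short-lived "secret" from the current time and a shared key.
fun dynamicSecret(sharedKey: ByteArray, nowMillis: Long): String {
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(sharedKey, "HmacSHA256"))
    val tag = mac.doFinal(nowMillis.toString().toByteArray())
    return "$nowMillis:" + tag.joinToString("") { "%02x".format(it) }
}

// Service side: recompute the value and reject anything outside a +/- 5 minute window.
fun verifyDynamicSecret(sharedKey: ByteArray, presented: String, nowMillis: Long): Boolean {
    val ts = presented.substringBefore(":").toLongOrNull() ?: return false
    if (abs(nowMillis - ts) > 5 * 60 * 1000) return false
    return dynamicSecret(sharedKey, ts) == presented
}
```

Of course, the shared key itself still has to live somewhere on the device, so this mainly limits how long a sniffed value stays useful.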


Daniel Thorpe

I'm also trying to come up with a solution for mobile OAuth authentication, and storing secrets within the application bundle in general.

And a crazy idea just hit me: the simplest approach is to store the secret inside the binary, but obfuscated somehow; in other words, you store an encrypted secret. So that means you've got to store a key to decrypt your secret, which seems to have taken us full circle. However, why not just use a key which is already in the OS, i.e. one defined by the OS, not by your application?

So, to clarify, my idea is that you pick a string defined by the OS; it doesn't matter which one. Then encrypt your secret using this string as the key, and store that in your app. Then at runtime, decrypt the variable using the key, which is just an OS constant. Any hacker peeking into your binary will see an encrypted string, but no key.
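Something like this, say (a toy sketch; the 'key' is just the fully qualified name of android.app.Activity, and the embedded bytes are made-up values produced offline):

```kotlin
// Ciphertext of the secret, generated at build time by XORing with the same OS constant.
private val encryptedSecret = byteArrayOf(0x2A, 0x4C, 0x1F, 0x33, 0x58)

fun revealSecret(): String {
    // A string the OS defines, not us: the class name of android.app.Activity.
    val key = android.app.Activity::class.java.name.toByteArray()
    val plain = ByteArray(encryptedSecret.size) { i ->
        (encryptedSecret[i].toInt() xor key[i % key.size].toInt()).toByte()
    }
    return String(plain, Charsets.UTF_8)
}
```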

Will that work?


Good thought, but no. The cracker would just see the binary pointing to the address of the OS constant.
Christopher Orr

As others have mentioned, there should be no real issue with storing the secret locally on the device.

On top of that, you can always rely on the UNIX-based security model of Android: only your application can access what you write to the file system. Just write the info to your app's default SharedPreferences object.
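For example (standard Android API; the preference file and key names are arbitrary):

```kotlin
import android.content.Context

// The file ends up under /data/data/<your.package>/shared_prefs/ and, with MODE_PRIVATE,
// is readable only by your app's UID unless the device is rooted.
fun saveAccessToken(context: Context, token: String) {
    context.getSharedPreferences("oauth", Context.MODE_PRIVATE)
        .edit()
        .putString("access_token", token)
        .apply()
}
```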

In order to obtain the secret, one would have to obtain root access to the Android phone.


As who mentioned? If you mean poke's comment, see my answer: secret != authentication key. The latter can safely be stored, the former can't. I don't know about Android, but gaining root access to an iPhone is not hard at all. Note that the secret is the same in all instances of the app, so an attacker would only have to gain access to one binary. And even if they couldn't gain root access on the device, they could get their hands on the binary some other way and pull the secret token out of it.
Just to add: it is very easy to root Android phones as well.