
client secret in OAuth 2.0

To use the Google Drive API, I have to handle authentication using OAuth 2.0, and I have a few questions about this.

1. The client ID and client secret are used to identify what my app is. But they must be hardcoded if it is a client application, so anyone can decompile my app and extract them from the source code. Does that mean a bad app can pretend to be a good app by using the good app's client ID and secret? The user would then be shown a screen asking to grant permission to the good app even though it is actually the bad app asking. If yes, what should I do? Or should I actually not worry about this?

2. In a mobile application, we can embed a webview in our app, and it is easy to extract the password field from the webview because the app asking for permission is actually the "browser". So does OAuth in a mobile application lose the benefit that the client application has no access to the user's credentials at the service provider?

Also, I guess people are usually suspicious when an app asks them for their Facebook, Twitter, Dropbox or other credentials. I doubt many ordinary people read the OAuth spec and say "Now I am safe"; instead they use common sense and generally don't use apps they don't trust.
Really a great question; it definitely deserves more points.
You could just download the client ID and secret from your server and save them in the keychain on the first successful login, that's it.
@Sharvan I may be wrong, but I think keychains are vulnerable on rooted phones, so your client secret could be made public.

hideaki

I had the same question as question 1 here, did some research myself recently, and my conclusion is that it is OK not to keep the "client secret" secret. Clients that do not keep the client secret confidential are called "public clients" in the OAuth 2.0 spec. The possibility of someone malicious obtaining the authorization code, and then an access token, is prevented by the following facts.

1. The client needs to get the authorization code directly from the user, not from the service

Even if the user indicates to the service that he/she trusts the client, the client cannot get an authorization code from the service just by presenting its client ID and client secret. Instead, the client has to get the authorization code directly from the user. (This is usually done by URL redirection, which I will talk about later.) So for a malicious client, it is not enough to know a client ID/secret trusted by the user; it has to somehow involve or spoof the user into giving it the authorization code, which should be harder than just knowing the client ID/secret.
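To make this concrete, here is a minimal sketch (in Python) of the first step of the authorization-code flow as described above: the client builds an authorization URL and sends the user's browser there. The endpoint shown is Google's, but the client ID, redirect URI, and scope values are placeholders; note that the client secret appears nowhere in this step.

```python
# Step 1 of the OAuth 2.0 authorization-code flow: the client constructs
# an authorization URL and the *user's browser* visits it. The service
# hands the authorization code to the browser, never to the client directly.
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"  # example endpoint

def build_authorization_url(client_id: str, redirect_uri: str,
                            scope: str, state: str) -> str:
    """Return the URL the user must visit to grant access.

    Note what is absent: the client secret. Knowing a client ID/secret is
    not enough to obtain an authorization code, because the service
    delivers the code to the user's browser at redirect_uri.
    """
    params = {
        "response_type": "code",   # ask for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,            # CSRF protection, echoed back on redirect
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"
```

Even a malicious app that knows your client ID can build this URL; the point is that building it gains the attacker nothing by itself, because the code goes to the registered redirect URI.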

2. The redirect URL is registered along with the client ID/secret

Let's assume the malicious client somehow managed to involve the user and get him/her to click the "Authorize this app" button on the service's page. This triggers a URL-redirect response from the service to the user's browser, carrying the authorization code. The authorization code is then sent from the user's browser to the redirect URL, where the client is supposed to be listening to receive it. (The redirect URL can be localhost, too; I figured this is the typical way a "public client" receives the authorization code.) Since this redirect URL is registered at the service together with the client ID/secret, the malicious client has no way to control where the authorization code is delivered. This means a malicious client with your client ID/secret faces another obstacle in obtaining the user's authorization code.
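The localhost listener mentioned above can be sketched in a few lines of Python. This is a simplified illustration, assuming a loopback redirect URI such as `http://127.0.0.1:8080/callback`; a real client would also verify the `state` parameter before trusting the code.

```python
# A "public client" listening on a loopback redirect URI for the
# authorization code delivered by the user's browser.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class RedirectHandler(BaseHTTPRequestHandler):
    code = None  # filled in when the browser hits the redirect URI

    def do_GET(self):
        # The service redirected the browser here with ?code=...&state=...
        query = parse_qs(urlparse(self.path).query)
        RedirectHandler.code = query.get("code", [None])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You may close this window.")

    def log_message(self, *args):
        pass  # silence default request logging

def wait_for_code(port: int = 8080) -> str:
    """Handle a single redirect request and return the authorization code."""
    with HTTPServer(("127.0.0.1", port), RedirectHandler) as server:
        server.handle_request()  # blocks until the redirect arrives
    return RedirectHandler.code
```

Because the service only redirects to the URI registered for that client ID, an attacker running their own listener elsewhere never sees the code.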


This is promising; do you have any references for it? It would be reassuring to know.
I saw in the Drive docs that for installed apps the client secret is not really a secret, but they did not explain why it is OK to store it there. Your explanation helped a lot!
Community

I started writing a comment on your question but found there was too much to say, so here are my views on the subject as an answer.

Yes, there is a real possibility of this, and there have been exploits based on it. The suggestion is not to keep the app secret in your app; there is even a part of the spec saying that distributed apps should not use this token. Now you might ask: but XYZ requires it in order to work. In that case they are not implementing the spec properly, and you should either (a) not use that service (not likely) or (b) try to secure the token using obfuscation to make it harder to find, or use your own server as a proxy.

For example, there were bugs in the Facebook library for Android where it leaked tokens to the logs; you can find out more about it here http://attack-secure.com/all-your-facebook-access-tokens-are-belong-to-us and here https://www.youtube.com/watch?v=twyL7Uxe6sk. All in all, be extra cautious in your use of third-party libraries (common sense, actually, but if token hijacking is your big concern, be extra extra cautious).

I have been ranting about point 2 for quite some time. I have even done some workarounds in my apps to modify the consent pages (for example, changing the zoom and design to fit the app), but nothing stopped me from reading the values of the username and password fields inside the webview. Therefore I totally agree with your second point and consider it a big "bug" in the OAuth spec. The claim that "the app doesn't get access to the user's credentials" is just a dream and gives users a false sense of security.
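The "use your server as a proxy" suggestion above can be sketched like this: the mobile app never ships the client secret; it forwards the authorization code to your own backend, which performs the token exchange of RFC 6749 section 4.1.3. The token endpoint, client ID, and secret values below are hypothetical placeholders.

```python
# Backend-side token exchange: the secret lives only on the server,
# so decompiling the APK reveals nothing sensitive.
import json
import urllib.request
from urllib.parse import urlencode

TOKEN_ENDPOINT = "https://oauth2.example.com/token"  # hypothetical provider endpoint
CLIENT_ID = "my-app"                                 # safe to embed in the app
CLIENT_SECRET = "read-from-server-env"               # lives ONLY on the backend

def build_token_request(code: str, redirect_uri: str) -> tuple[str, bytes]:
    """Build the token-exchange POST body (RFC 6749 section 4.1.3)."""
    body = urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,  # never shipped inside the app
    }).encode()
    return TOKEN_ENDPOINT, body

def exchange_code(code: str, redirect_uri: str) -> dict:
    """Called by the backend when the app forwards an authorization code."""
    url, body = build_token_request(code, redirect_uri)
    req = urllib.request.Request(
        url, data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The app then receives the access token (or, better, a session of your own) from your backend, and the provider secret never touches the device.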


Your client ID and client secret won't be secure just because you post them through an SSL tunnel. Yes, they are more secure against man-in-the-middle attacks, but if a user proxies your HTTPS calls, they can accept the bad certificate and see everything you post. By the way, this is the easiest way to steal someone's client secret on mobile devices.
I appreciate your comment but can't connect it to my answer in any way... Could you please elaborate on why you commented on my answer? I explicitly stated that the client secret shouldn't be used in distributed apps, and my other point was that there are workarounds to get user credentials in apps even when using OAuth, so users should place their trust in the app provider, not in OAuth.
Also, I don't understand what you mean by "if a user proxies your HTTPS calls"; yes, users get access to data they send over HTTPS, and they are free to proxy calls however they like. As I understood it, you are suggesting this as a nice alternative to disassembling the APK to get the secret, but then again, you shouldn't send the app secret in the first place.
So for point 1, the bad app needs access to the same system and has to retrieve the access/refresh token from the same device?
It is not clear what you regard as the "same system" in this context. The app creates a webview in which the confirmation page is shown and can access all data in that view (including cookies or URL parameters carrying the access token). Cross-app access is also possible in some cases; for example, if one app can read another app's logs, it could find the token there, as mentioned with the Facebook library bug.
v.j

Answering the 2nd question: for security reasons, Google APIs mandate that authentication/sign-in cannot be done within the app itself (webviews are not allowed) and must be done outside the app using the browser, which is further explained here: https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html
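In the browser-based flow that post describes, the sign-in page renders outside the app's process (so the app cannot read the password field), and the result comes back via a redirect URI the app has registered, typically a custom URI scheme. A small sketch of the app-side handling, where the scheme `com.example.app` is a hypothetical example:

```python
# Handling the redirect back from the system browser in a native app:
# the OS hands the whole redirect URI to the app, which extracts the code.
from urllib.parse import urlparse, parse_qs

def extract_code(redirect_uri: str,
                 expected_scheme: str = "com.example.app") -> str:
    """Pull the authorization code out of a custom-scheme redirect URI."""
    parsed = urlparse(redirect_uri)
    if parsed.scheme != expected_scheme:
        raise ValueError(f"unexpected scheme: {parsed.scheme!r}")
    code = parse_qs(parsed.query).get("code")
    if not code:
        raise ValueError("no authorization code in redirect")
    return code[0]
```

The app only ever sees this redirect URI, never the credentials the user typed into the browser.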


At least it is "fixed" 3 years after I asked :)