
CURL to access a page that requires a login from a different page

I have two pages: xyz.example/a and xyz.example/b. I can access xyz.example/b only if I first log in at xyz.example/a. If I access xyz.example/b in the browser without going through the other page, I simply get "access denied" (no redirect to a login page). Once I log in at xyz.example/a, I can access the other page.

My problem is doing this with the curl command. I can log in to xyz.example/a successfully using curl, but when I then try xyz.example/b I get access denied.

I use the following:

curl --user user:pass https://xyz.example/a  #works ok
curl https://xyz.example/b #doesn't work

I've tried the second line both with and without the user/password part, and it still doesn't work. Both pages use the same CA, so that's not the problem.


Stephen Ostermiller

The web site likely uses cookies to store your session information. When you run

curl --user user:pass https://xyz.example/a  #works ok
curl https://xyz.example/b #doesn't work

curl runs twice, in two separate sessions. So when the second command runs, the cookies set by the first command are no longer available; it's just as if you had logged in to page a in one browser session and then tried to access page b in a different one.

What you need to do is save the cookies created by the first command:

curl --user user:pass --cookie-jar ./somefile https://xyz.example/a

and then read them back in when running the second:

curl --cookie ./somefile https://xyz.example/b

Alternatively you can try downloading both files in the same command, which I think will use the same cookies.
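As a sketch of that single-command approach, using the placeholder host from the question (so this won't actually run against a real server): listing both URLs in one invocation makes curl fetch them in order within one session, and the --cookie-jar option also switches on curl's in-memory cookie engine, so cookies set by /a are sent along when /b is requested.

```shell
# Log in and fetch the protected page in one curl invocation.
# --cookie-jar both saves cookies to ./somefile and enables the
# cookie engine for the duration of this run.
curl --user user:pass --cookie-jar ./somefile \
     https://xyz.example/a https://xyz.example/b
```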


This is strange: when I try this it does not work, because the cookie stored in somefile contains a path attribute (/a in this case), so it is not sent with the second call. If I edit the cookie in the file and replace the path with a single slash, it works (the cookie is sent with the second call). Do you know if it's possible to prevent the path from being stored in the cookie file?
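To illustrate the workaround described in this comment (the cookie line below is fabricated for the example): the cookie jar is a plain tab-separated file in Netscape format, and the third column is the cookie's path, so it can be broadened to "/" with a one-liner instead of being edited by hand.

```shell
# Fabricated cookie-jar line in Netscape format; the columns are:
# domain, include-subdomains flag, path, secure, expiry, name, value
printf 'xyz.example\tFALSE\t/a\tTRUE\t0\tSESSIONID\tabc123\n' > somefile

# Rewrite the path column (field 3) to "/" so the cookie matches every path:
awk -F'\t' 'BEGIN{OFS="\t"} {$3="/"} 1' somefile > somefile.any
cat somefile.any
```

The rewritten file can then be passed to the second call with --cookie ./somefile.any.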
user

You might also want to log in via the browser and capture the command with all headers, including cookies:

Open the Network tab of Developer Tools, log in, navigate to the needed page, use "Copy as cURL".

https://i.stack.imgur.com/DePbs.png


This is a deep answer! It does not answer the question directly; rather, it shows how to find the answer.
That is one of the most useful answers. It really lets you observe and understand even multi-step authentication.
I did not know about this feature buried in those awesome tools. Super useful!
That would be just what I need, if only I could find it in my browser(s). Do you have to enable it somehow? It's missing from my context menu. FF 97.0.2, Linux.
Joe Mills

After some googling I found this:

curl -c cookie.txt -d "LoginName=someuser" -d "password=somepass" https://oursite/a
curl -b cookie.txt https://oursite/b

No idea if it works, but it might lead you in the right direction.


This will work if the web site expects a form to be submitted. You will need to look at the page source, find the form, and see what the fields are named and what URL the form posts to. Alternatively, use your browser's debugger to watch for the POST request so you can see exactly what is being sent. That is a little easier.
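As a sketch of what to look for (the form below is invented for illustration): the name attributes of the inputs become the -d keys, and the form's action attribute is the URL to POST to.

```shell
# Hypothetical page source for a login form like the one above:
cat > login.html <<'EOF'
<form action="https://oursite/a" method="post">
  <input type="text" name="LoginName">
  <input type="password" name="password">
</form>
EOF

# Pull out the field names that the -d options must use:
grep -o 'name="[^"]*"' login.html
```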
Timothy C. Quinn

My answer is a mod of some prior answers from @JoeMills and @user.

Get a cURL command that logs into the server:

1. Load the login page for the web site and open the Network pane of the Developer Tools. In Firefox, right-click the page, choose 'Inspect Element (Q)', then click the Network tab.
2. Go to the login form, enter the username and password, and log in.
3. After you have logged in, go back to the Network pane and scroll to the top to find the POST entry. Right-click it and choose Copy -> Copy as cURL.
4. Paste this into a text editor and try it in a command prompt to see if it works. (It's possible that some sites have hardening that blocks this type of login spoofing and would require more steps to bypass.)

Modify the cURL command so it saves the session cookie after login:

5. Remove the -H 'Cookie: ' entry.
6. Add -c login_cookie.txt right after curl at the beginning.
7. Run the updated curl command and you should get a new file 'login_cookie.txt' in the same folder.

Call a new web page that requires you to be logged in, using this new cookie:

curl -b login_cookie.txt
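The header surgery in steps 5 and 6 can be sketched mechanically. The command string below is a made-up stand-in for real "Copy as cURL" output:

```shell
# Fabricated example of a copied command that still carries a stale Cookie header:
orig="curl 'https://oursite/a' -H 'Cookie: old=stale' -d 'LoginName=someuser&password=somepass'"

# Drop the captured Cookie header and add -c so a fresh session cookie is saved:
modified=$(printf '%s' "$orig" \
  | sed -e "s/ -H 'Cookie: [^']*'//" -e "s/^curl/curl -c login_cookie.txt/")
echo "$modified"
```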

I have tried this on Ubuntu 20.04 and it works like a charm.


And if the site requires a captcha too, how should that argument be written?
Not possible with cURL alone. Maybe use something like Puppeteer to navigate the page programmatically.