
How to capture cURL output to a file?

I have a text document that contains a bunch of URLs in this format:

URL = "sitehere.com"

What I'm looking to do is run curl -K myfile.txt and capture the response output cURL returns into a file.

How can I do this?

curl http://{one,two}.example.com -o "file_#1.txt" (see curl.haxx.se/docs/manpage.html)

Alex2php
curl -K myconfig.txt -o output.txt 

Writes the output to the file you specify, overwriting it if one already exists.

curl -K myconfig.txt >> output.txt

Appends all output you receive to the specified file.

Note: The -K is optional.
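As a concrete sketch of the above (file names are hypothetical, and a local file:// URL stands in for a real server so no network is needed):

```shell
#!/bin/sh
# Create some content to "download" via a local file:// URL.
printf 'hello from curl\n' > source.txt

# A minimal curl config file in the format -K expects.
cat > myconfig.txt <<EOF
url = "file://$PWD/source.txt"
EOF

# -o writes output.txt, overwriting any existing file;
# 'curl -s -K myconfig.txt >> output.txt' would append instead.
curl -s -K myconfig.txt -o output.txt
cat output.txt
```

The same config file works with the >> redirection form as well.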


Sorry, maybe I need to clarify: the doc with all my URLs in the format above is called myfile.txt, so I do curl -K myfile.txt and it runs through each one, but I don't get the output into any file.
I use the redirect for my command lines: curl url > destfile.x
When I do either of these the output still displays in the terminal, not in the file
@kris you probably have an ampersand in the URL. Put the URL in double quotes and try again.
It works without the -K. With it, I get "No URL specified."
Greg Bray

For a single file you can use -O instead of -o filename to use the last segment of the URL path as the filename. Example:

curl http://example.com/folder/big-file.iso -O

will save the result to a new file named big-file.iso in the current folder. In this way it works similarly to wget, but lets you specify other curl options that are not available when using wget.


For multiple files, use --remote-name-all (see unix.stackexchange.com/a/265819/171025).
To follow redirects, add the -L option.
Another addition to qwr's comment: the argument needs to be placed before the URLs, like curl --remote-name-all https://example.tld/resource{1,2,3}. See: curl.se/docs/manpage.html#--remote-name-all
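A hedged sketch of the comments above, using curl's own {1,2} URL globbing and local file:// URLs (all names made up) so it runs without a network:

```shell
#!/bin/sh
# Put the sources in a subdirectory so --remote-name-all can save
# copies into the current directory without clobbering the originals.
mkdir -p src
printf 'one\n' > src/resource1
printf 'two\n' > src/resource2

# --remote-name-all goes before the URLs; curl expands {1,2} itself
# because the glob is inside quotes, shielded from the shell.
curl -s --remote-name-all "file://$PWD/src/resource{1,2}"
cat resource1 resource2
```

Each download keeps the last segment of its URL as its filename, just as -O does for a single URL.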
RubenLaguna

There are several options to make curl output to a file

# saves it to myfile.txt
curl http://www.example.com/data.txt -o myfile.txt

# "#1" gets replaced by the match of the first globbing pattern in
# the URL, so the filename records which URL each file came from
curl "http://www.example.com/page{1,2}.txt" -o "file_#1.txt"

# saves to data.txt, the filename extracted from the URL
curl http://www.example.com/data.txt -O

# saves to the filename given by the Content-Disposition header sent
# by the server, falling back to the URL's filename if there is none
curl http://www.example.com/data.txt -O -J
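To see the "#1" substitution in action, here is a sketch against local file:// URLs (the file names are made up); curl expands the {1,2} glob itself and fills #1 with each match:

```shell
#!/bin/sh
mkdir -p src
printf 'first page\n'  > src/page1.txt
printf 'second page\n' > src/page2.txt

# #1 is replaced by whatever the first glob {1,2} matched,
# producing file_1.txt and file_2.txt.
curl -s "file://$PWD/src/page{1,2}.txt" -o "file_#1.txt"
cat file_1.txt file_2.txt
```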

Gabriel Staples

Either curl or wget can be used in this case. All 3 of these commands do the same thing, downloading the file at http://path/to/file.txt and saving it locally into "my_file.txt":

wget http://path/to/file.txt -O my_file.txt  # my favorite--it has a progress bar
curl http://path/to/file.txt -o my_file.txt
curl http://path/to/file.txt > my_file.txt

Notice the first one's -O is the capital letter "O".

The nice thing about the wget command is that it shows a progress bar.

You can verify that the files downloaded by the 3 techniques above are byte-for-byte identical by comparing their sha512 hashes: run sha512sum my_file.txt after each of the commands above and compare the results; all 3 produce the exact same hash.

See also: wget command to download a file and save as a different filename
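The hash comparison can be sketched like this, using a local file:// URL and hypothetical file names (sha512sum is the coreutils tool; on macOS use shasum -a 512 instead):

```shell
#!/bin/sh
printf 'payload\n' > src.txt

# Download the same "remote" file twice, once with -o and once
# with shell redirection, then compare checksums.
curl -s "file://$PWD/src.txt" -o a.txt
curl -s "file://$PWD/src.txt" > b.txt
sha512sum a.txt b.txt
```

Identical sums mean the two files are identical byte-for-byte.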


Jean-François Fabre

For those of you who want to copy the cURL output to the clipboard instead of outputting it to a file, you can pipe it to pbcopy with | after the cURL command.

Example: curl https://www.google.com/robots.txt | pbcopy. This will copy all the content from the given URL to your clipboard.

Linux version: curl https://www.google.com/robots.txt | xclip

Windows version: curl https://www.google.com/robots.txt | clip


pbcopy is only available on macOS; on Linux, xclip can be used in its place (see this question). In most cases, though, I'd prefer curl http://example.com -o example_com.html && cat example_com.html | pbcopy, so you don't need to curl again if you accidentally clear your clipboard.
Also, this should be used with caution if you're unsure of the size of the payload. For example, you probably wouldn't want to paste the result into a text editor, though opening it in vim is no problem: curl http://www.textfiles.com/etext/FICTION/fielding-history-243.txt | pbcopy (maybe don't try this!).
Paolo

Use --trace-ascii output.txt to output the curl details to the file output.txt.


Thanks, the man page mentions that this also outputs the "descriptive information" that -vv displays (SSL info, HTTP verb, headers, ...), which I wanted to store. None of the other answers write that to a file.
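A quick sketch of --trace-ascii (local file:// URL, made-up names): the trace file records the annotated transfer details, separate from the downloaded body itself.

```shell
#!/bin/sh
printf 'some data\n' > data.txt

# out.txt receives the body; trace.txt receives the transfer log.
curl -s --trace-ascii trace.txt "file://$PWD/data.txt" -o out.txt

head trace.txt
```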
AlexPixel

You need to put quotation marks around the URL ("url" -o file_output); otherwise, the shell may mangle the URL (at characters like & or ?) and curl won't see the URL or the output file name you intended.

Format

curl "url" -o filename

Example

curl "https://en.wikipedia.org/wiki/Quotation_mark" -o output_file.txt

Example_2

curl "https://en.wikipedia.org/wiki/Quotation_mark" > output_file.txt  

Just make sure to add quotation marks.
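To see why the quotes matter, here is a sketch with an ampersand in the URL; a local file whose name mimics a query string stands in for a real URL. Unquoted, the shell would cut the command at the & and run the first half in the background.

```shell
#!/bin/sh
# A file whose name contains '&', mimicking a URL query string.
printf 'quoted ok\n' > 'results&page=2'

# Quoted, curl sees the whole URL intact.
curl -s "file://$PWD/results&page=2" -o output_file.txt
cat output_file.txt
```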


Paolo

If you want to save the output to your desktop, the following POST request works in Git Bash (note the backslashes that continue the command across lines). It worked for me.

curl https://localhost:8080 \
    --request POST \
    --header "Content-Type: application/json" \
    -o "C:\Desktop\test.txt"

Paolo

A tad late, but I think the OP was looking for something like:

curl -K myfile.txt --trace-ascii output.txt

Cudox

Regarding the earlier answer's claim that this "writes the first output received in the file you specify (overwrites if an old one exists)":

curl -K myconfig.txt >> output.txt

The >> appends; it does not overwrite. If you would like to overwrite, use a single >.
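A tiny sketch of the difference, using a local file:// URL and hypothetical names:

```shell
#!/bin/sh
printf 'line\n' > s.txt

curl -s "file://$PWD/s.txt" >  out.txt   # > overwrites: out.txt has 1 line
curl -s "file://$PWD/s.txt" >> out.txt   # >> appends:  out.txt has 2 lines
wc -l < out.txt
```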