
What is the quickest way to HTTP GET in Python?

What is the quickest way to HTTP GET in Python if I know the content will be a string? I am searching the documentation for a quick one-liner like:

contents = url.get("http://example.com/foo/bar")

But all I can find using Google are httplib and urllib - and I am unable to find a shortcut in those libraries.

Does standard Python 2.5 have a shortcut in some form as above, or should I write a function url_get?

I would prefer not to capture the output of shelling out to wget or curl.

I thought I would pass this along, as it had me stumped for hours. I tried getting the text that visually appeared in the browser, but instead got snippets of a web app. The solution was to go into the browser Developer Tools, click on the Network tab, and reload the page. In the list of files that came over the network, I could see the text file I wanted. I could right-click on it and "Open in new tab" to verify.

Boris Verkhovskiy

Python 3:

import urllib.request
contents = urllib.request.urlopen("http://example.com/foo/bar").read()

Python 2:

import urllib2
contents = urllib2.urlopen("http://example.com/foo/bar").read()

Documentation for urllib.request and read.


Does everything get cleaned up nicely? It looks like I should call close after your read. Is that necessary?
It is good practice to close it, but if you're looking for a quick one-liner, you could omit it. :-)
The object returned by urlopen will be deleted (and finalized, which closes it) when it falls out of scope. Because CPython is reference-counted, you can rely on that happening immediately after the read. But a with block would be clearer and safer for Jython, etc.
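For example, a with-based version of the Python 3 one-liner (standard library only) that closes the connection deterministically on any Python implementation:

import urllib.request

with urllib.request.urlopen("http://example.com/foo/bar") as f:
    contents = f.read()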
It doesn't work with HTTPS-only websites; requests works fine.
If you're using AWS Lambda and need to GET a URL, the 2.x solution is available and built-in. It seems to work with https as well. It's nothing more than r = urllib2.urlopen("http://blah.com/blah") followed by text = r.read(). The call is synchronous; it just waits for the result in text.
Boris Verkhovskiy

Use the Requests library:

import requests
r = requests.get("http://example.com/foo/bar")

Then you can do stuff like this:

>>> print(r.status_code)
>>> print(r.headers)
>>> print(r.content)  # bytes
>>> print(r.text)     # r.content as str
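If you want the one-liner to fail fast rather than hang or silently return an error page, here is a small sketch using two standard Requests features, a timeout and raise_for_status():

import requests

r = requests.get("http://example.com/foo/bar", timeout=5)  # seconds to wait before giving up
r.raise_for_status()  # raises requests.HTTPError on 4xx/5xx responses
contents = r.text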

Install Requests by running this command:

pip install requests

Almost any Python library can be used in AWS Lambda. For pure-Python libraries, you just need to "vendor" the library (copy it into your module's folder rather than using pip install). For non-pure libraries there's an extra step: pip install the lib on an instance of Amazon Linux (the same OS variant Lambdas run under), then copy those files instead, so you have binary compatibility with Amazon Linux. The only libraries you won't always be able to use in Lambda are those with binary-only distributions, which are thankfully pretty rare.
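As a rough sketch of that vendoring step for a pure-Python library (the directory and archive names are placeholders):

pip install requests -t ./my_lambda_function/    # copy the library next to your handler
cd my_lambda_function && zip -r ../lambda.zip .  # package everything for upload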
@lawphotog this DOES work with python3, but you have to pip install requests.
Even the urllib2 standard library recommends requests
In regards to Lambda: if you wish to use requests in AWS Lambda functions, there is also a copy of requests preinstalled with boto3: from botocore.vendored import requests, used as response = requests.get('...').
@kmjb borrowing requests from botocore has been deprecated (aws.amazon.com/blogs/developer/…) and, IMO, it's a bad idea to rely on indirect dependencies.
Manos Nikolaidis

If you want the httplib2 solution to be a one-liner, consider instantiating an anonymous Http object:

import httplib2
resp, content = httplib2.Http().request("http://example.com/foo/bar")

hennr

Have a look at httplib2, which, alongside a lot of other very useful features, provides exactly what you want.

import httplib2

resp, content = httplib2.Http().request("http://example.com/foo/bar")

Where content is the response body (bytes under Python 3, a string under Python 2), and resp contains the status and response headers.

It doesn't come included with a standard Python install (though it only requires the standard library), but it's definitely worth checking out.
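As a minimal sketch of using both return values (assuming httplib2 was installed with pip install httplib2):

import httplib2

resp, content = httplib2.Http().request("http://example.com/foo/bar")
if resp.status == 200:              # resp carries the status and headers
    print(content.decode("utf-8"))  # content is bytes under Python 3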


Anonymous

It's simple enough with the powerful urllib3 library.

Import it like this:

import urllib3

http = urllib3.PoolManager()

And make a request like this:

response = http.request('GET', 'https://example.com')

print(response.data) # Raw data.
print(response.data.decode('utf-8')) # Text.
print(response.status) # Status code.
print(response.headers['Content-Type']) # Content type.

You can add headers too:

response = http.request('GET', 'https://example.com', headers={
    'key1': 'value1',
    'key2': 'value2'
})

More info can be found in the urllib3 documentation.

urllib3 is much safer and easier to use than the built-in urllib.request or http.client modules, and it is stable.
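If you need more robustness, urllib3 also takes per-request timeout and retry settings; a sketch with illustrative values:

import urllib3

http = urllib3.PoolManager()
response = http.request(
    'GET', 'https://example.com',
    timeout=urllib3.Timeout(connect=2.0, read=5.0),  # fail fast on slow hosts
    retries=urllib3.Retry(total=3),                  # retry transient failures
)
print(response.status)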


Great for the fact that you can so easily provide an HTTP verb.
greatvovan

In Python we can actually read from HTTP responses just like from files; here is an example of reading JSON from an API.

import json
from urllib.request import urlopen

def fetch_json_value(url):  # wrapped in a helper so the original return statement is valid
    with urlopen(url) as f:
        resp = json.load(f)
    return resp['some_key']  # 'some_key' is a placeholder from the original answer

Though we thank you for your answer, it would be better if it provided additional value on top of the other answers. In this case, your answer does not provide additional value, since another user already posted that solution. If a previous answer was helpful to you, you should vote it up instead of repeating the same information.
This is an old question/answer, but I found value in this because it has the elegant with... syntax that I could just grab.
This answer adds value as it uses the with construct, which is much discussed in the comments on the top-voted and accepted answer yet missing from it.
michael_s

Without further imports, this solution works (for me), also with https:

try:
    import urllib2 as urlreq  # Python 2.x
except ImportError:
    import urllib.request as urlreq  # Python 3.x

req = urlreq.Request("http://example.com/foo/bar")
# some servers respond with 403 Forbidden unless a browser-like User-Agent is set
req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36')
contents = urlreq.urlopen(req).read()

I often have difficulty grabbing the content when not specifying a "User-Agent" in the headers; the requests are then usually rejected with something like urllib2.HTTPError: HTTP Error 403: Forbidden or urllib.error.HTTPError: HTTP Error 403: Forbidden.


Unexpectedly, the 'User-Agent' for Microsoft Edge really is something like Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.10136 according to stackoverflow.com/questions/30591706/…. Not sure how to find out the most recent Microsoft Edge UA string, but the answer here rightly hints at the way to solve it.
Ciro Santilli Путлер Капут 六四事

How to also send headers

Python 3:

import urllib.request
contents = urllib.request.urlopen(urllib.request.Request(
    "https://api.github.com/repos/cirosantilli/linux-kernel-module-cheat/releases/latest",
    headers={"Accept" : 'application/vnd.github.full+json"text/html'}
)).read()
print(contents)

Python 2:

import urllib2
contents = urllib2.urlopen(urllib2.Request(
    "https://api.github.com",
    headers={"Accept" : 'application/vnd.github.full+json"text/html'}
)).read()
print(contents)

Xuan

theller's solution for wget is really useful; however, I found it does not print out the progress during the download process. It's perfect if you add one line after the print statement in reporthook.

import sys, urllib

def reporthook(a, b, c):
    # a: blocks transferred so far, b: block size, c: total size in bytes
    print "% 3.1f%% of %d bytes\r" % (min(100, float(a * b) / c * 100), c),
    sys.stdout.flush()

for url in sys.argv[1:]:
    i = url.rfind("/")
    file = url[i+1:]
    print url, "->", file
    urllib.urlretrieve(url, file, reporthook)
print

theller

Here is a wget script in Python:

# From python cookbook, 2nd edition, page 487
import sys, urllib

def reporthook(a, b, c):
    # a: blocks transferred so far, b: block size, c: total size in bytes;
    # the trailing comma suppresses the newline, so \r redraws the same line
    print "% 3.1f%% of %d bytes\r" % (min(100, float(a * b) / c * 100), c),

for url in sys.argv[1:]:
    i = url.rfind("/")
    file = url[i+1:]
    print url, "->", file
    urllib.urlretrieve(url, file, reporthook)
print

Anonymous

If you want a lower-level API:

import http.client

conn = http.client.HTTPSConnection('example.com')
conn.request('GET', '/')

resp = conn.getresponse()
content = resp.read()

conn.close()

text = content.decode('utf-8')

print(text)
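The same classes also accept headers and a timeout if you need them; a sketch with placeholder values:

import http.client

conn = http.client.HTTPSConnection('example.com', timeout=10)
conn.request('GET', '/', headers={'User-Agent': 'my-client/1.0'})

resp = conn.getresponse()
print(resp.status, resp.reason)

conn.close()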

Akshar

Excellent solutions Xuan, Theller.

For it to work with Python 3, make the following changes:

import sys, urllib.request

def reporthook(a, b, c):
    # end='' replaces the Python 2 trailing comma, so \r keeps redrawing one line
    print("% 3.1f%% of %d bytes\r" % (min(100, float(a * b) / c * 100), c), end='')
    sys.stdout.flush()

for url in sys.argv[1:]:
    i = url.rfind("/")
    file = url[i+1:]
    print(url, "->", file)
    urllib.request.urlretrieve(url, file, reporthook)
print()

Also, the URL you enter should be preceded by "http://"; otherwise it returns an unknown url type error.
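A small guard for that, assuming a bare host name should default to http:// (illustrative only):

url = sys.argv[1]
if "://" not in url:
    url = "http://" + url  # avoids the "unknown url type" error from urlretrieve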


Kimmo

If you are working with HTTP APIs specifically, there are also more convenient choices such as Nap.

For example, here's how to get gists from GitHub created since May 1st, 2014:

from nap.url import Url
api = Url('https://api.github.com')

gists = api.join('gists')
response = gists.get(params={'since': '2014-05-01T00:00:00Z'})
print(response.json())

More examples: https://github.com/kimmobrunfeldt/nap#examples


You should mention that you are the author of this library.
Pedro Lobito

For Python >= 3.6, you can use dload:

import dload
t = dload.text(url)

For json:

j = dload.json(url)

Install:
pip install dload


The OP wanted to make a GET request WITHOUT using a library, while this solution requires you to install a package using pip and import the library.
@YılmazAlpaslan OP asked for no such thing; that was an edit someone made to the title of the question, which I have rolled back. The actual problem with this answer is that it recommends some obscure library that no one is using.
As far as I understood, the OP asked for the "quickest way to HTTP GET in Python"; based on that, you can use the dload library, even if not many users use it (popularity is not a requirement for an answer). Just a guess, but I don't think you understood the question properly; reading the other answers may give you a clue, because many different libraries are recommended as well.