Requests is a really nice library. I'd like to use it for downloading big files (>1GB). The problem is it's not possible to keep the whole file in memory; I need to read it in chunks. And this is a problem with the following code:
import requests
def DownloadFile(url)
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    f = open(local_filename, 'wb')
    for chunk in r.iter_content(chunk_size=512 * 1024):
        if chunk: # filter out keep-alive new chunks
            f.write(chunk)
    f.close()
    return
For some reason it doesn't work that way: it still loads the whole response into memory before saving it to a file.
With the following streaming code, the Python memory usage is restricted regardless of the size of the downloaded file:
def download_file(url):
    local_filename = url.split('/')[-1]
    # NOTE the stream=True parameter below
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                # If you have chunk encoded response uncomment if
                # and set chunk_size parameter to None.
                #if chunk:
                f.write(chunk)
    return local_filename
Note that the number of bytes returned using iter_content is not exactly the chunk_size; it is expected to be a random number that is often larger, and is expected to differ on every iteration.
See body-content-workflow and Response.iter_content for further reference.
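A quick way to observe this yourself is a throwaway sketch like the following (the function name is just an illustration; pass whatever URL you are downloading):

import requests

def inspect_chunk_sizes(url, chunk_size=8192):
    # Print the size of each chunk actually yielded; it may differ from chunk_size.
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        for chunk in r.iter_content(chunk_size=chunk_size):
            print(len(chunk))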
It's much easier if you use Response.raw and shutil.copyfileobj():
import requests
import shutil
def download_file(url):
    local_filename = url.split('/')[-1]
    with requests.get(url, stream=True) as r:
        with open(local_filename, 'wb') as f:
            shutil.copyfileobj(r.raw, f)
    return local_filename
This streams the file to disk without using excessive memory, and the code is simple.
Note: according to the documentation, Response.raw will not decode the gzip and deflate transfer-encodings, so you will need to handle that yourself.
One small caveat with .raw is that it does not handle decoding. It is mentioned in the docs here: docs.python-requests.org/en/master/user/quickstart/… If you need the content decoded, you can patch the read method: response.raw.read = functools.partial(response.raw.read, decode_content=True)
Passing a larger buffer to copyfileobj can also reduce the number of read/write calls: shutil.copyfileobj(r.raw, f, length=16*1024*1024)
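A minimal sketch combining the two suggestions above (the decode_content patch and a larger copy buffer); the function name and the 16 MB buffer size are illustrative choices, not part of any answer:

import functools
import shutil
import requests

def download_file_raw(url, local_filename):
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        # Response.raw does not decode gzip/deflate by default, so ask urllib3
        # to decode the content on the fly.
        r.raw.read = functools.partial(r.raw.read, decode_content=True)
        with open(local_filename, 'wb') as f:
            # A larger buffer (16 MB here) reduces the number of read/write calls.
            shutil.copyfileobj(r.raw, f, length=16 * 1024 * 1024)
    return local_filename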
Not exactly what the OP was asking, but... it's really easy to do this with urllib:
from urllib.request import urlretrieve
url = 'http://mirror.pnl.gov/releases/16.04.2/ubuntu-16.04.2-desktop-amd64.iso'
dst = 'ubuntu-16.04.2-desktop-amd64.iso'
urlretrieve(url, dst)
Or this way, if you want to save it to a temporary file:
from urllib.request import urlopen
from shutil import copyfileobj
from tempfile import NamedTemporaryFile
url = 'http://mirror.pnl.gov/releases/16.04.2/ubuntu-16.04.2-desktop-amd64.iso'
with urlopen(url) as fsrc, NamedTemporaryFile(delete=False) as fdst:
    copyfileobj(fsrc, fdst)
I watched the process with:
watch 'ps -p 18647 -o pid,ppid,pmem,rsz,vsz,comm,args; ls -al *.iso'
I saw the file growing, but memory usage stayed at 17 MB. Am I missing something?
(For Python 2.x, that would be from urllib import urlretrieve.)
Your chunk size could be too large; have you tried dropping it, maybe to 1024 bytes at a time? (Also, you could use with to tidy up the syntax.)
def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk: # filter out keep-alive new chunks
                f.write(chunk)
    return
Incidentally, how are you deducing that the response has been loaded into memory?
It sounds as if Python is not flushing the data to the file. Based on other SO questions, you could try f.flush() and os.fsync() to force the file write and free the memory:
with open(local_filename, 'wb') as f:
    for chunk in r.iter_content(chunk_size=1024):
        if chunk: # filter out keep-alive new chunks
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
f.flush(); os.fsync() might force the write and free that memory; specifically, os.fsync(f.fileno()).
You missed a colon (':') after def DownloadFile(url).
Use Python's wget module instead. Here is a snippet:
import wget
wget.download(url)
Based on Roman's most upvoted comment above, here is my implementation, including a "download as" mechanism and retries:
import logging
import os
import time
from urllib.parse import urlparse

import requests

logger = logging.getLogger(__name__)

def download(url: str, file_path='', attempts=2):
    """Downloads a URL content into a file (with large file support by streaming)

    :param url: URL to download
    :param file_path: Local file name to contain the data downloaded
    :param attempts: Number of attempts
    :return: New file path. Empty string if the download failed
    """
    if not file_path:
        file_path = os.path.realpath(os.path.basename(url))
    logger.info(f'Downloading {url} content to {file_path}')
    url_sections = urlparse(url)
    if not url_sections.scheme:
        logger.debug('The given url is missing a scheme. Adding http scheme')
        url = f'http://{url}'
        logger.debug(f'New url: {url}')
    for attempt in range(1, attempts + 1):
        try:
            if attempt > 1:
                time.sleep(10)  # 10 seconds wait time between downloads
            with requests.get(url, stream=True) as response:
                response.raise_for_status()
                with open(file_path, 'wb') as out_file:
                    for chunk in response.iter_content(chunk_size=1024 * 1024):  # 1MB chunks
                        out_file.write(chunk)
                logger.info('Download finished successfully')
                return file_path
        except Exception as ex:
            logger.error(f'Attempt #{attempt} failed with error: {ex}')
    return ''
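A quick usage sketch (the URL is reused from the urllib example above; the output name and attempt count are arbitrary):

import logging
logging.basicConfig(level=logging.INFO)

saved = download('http://mirror.pnl.gov/releases/16.04.2/ubuntu-16.04.2-desktop-amd64.iso',
                 file_path='ubuntu-16.04.2-desktop-amd64.iso', attempts=3)
if not saved:
    print('Download failed')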
requests is good, but what about a socket solution?
def stream_(host):
    import socket
    import ssl
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
        with context.wrap_socket(sock, server_hostname=host) as wrapped_socket:
            wrapped_socket.connect((socket.gethostbyname(host), 443))
            # NOTE: the Host header is hardcoded in this snippet
            wrapped_socket.send(
                "GET / HTTP/1.1\r\nHost:thiscatdoesnotexist.com\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9\r\n\r\n".encode())

            # Read the response headers one byte at a time until the blank line
            resp = b""
            while resp[-4:-1] != b"\r\n\r":
                resp += wrapped_socket.recv(1)
            else:
                resp = resp.decode()
                content_length = int("".join([tag.split(" ")[1] for tag in resp.split("\r\n") if "content-length" in tag.lower()]))
            # Read the body until Content-Length bytes have been received
            image = b""
            while content_length > 0:
                data = wrapped_socket.recv(2048)
                if not data:
                    print("EOF")
                    break
                image += data
                content_length -= len(data)
            with open("image.jpeg", "wb") as file:
                file.write(image)
chunk_size is crucial. By default it is 1 (one byte), which means that for 1MB it will make a million iterations: docs.python-requests.org/en/latest/api/…
f.flush() doesn't flush the data to the physical disk; it hands it to the OS. Usually that is enough, unless there is a power failure.
f.flush() makes the code slower here for no reason. A flush happens anyway when the corresponding file buffer (inside the app) is full. If you need more frequent writes, pass a buffer-size argument to open().
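Roughly what that last comment suggests, as a hedged sketch: control the userspace buffer via open()'s buffering argument instead of calling f.flush() per chunk. The function name and the 1 MB value are illustrative assumptions:

import requests

def download_with_buffer(url, local_filename, buf_size=1024 * 1024):
    # buffering=buf_size sets the size of Python's file buffer (1 MB here is an
    # arbitrary example), so there is no need to call f.flush() after every chunk.
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(local_filename, 'wb', buffering=buf_size) as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)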
Regarding if chunk: # filter out keep-alive new chunks: this is redundant, isn't it? Since iter_content() always yields a string and never None, it looks like premature optimization. I also doubt it could ever yield an empty string (I cannot imagine any reason for that).