
Lazy Method for Reading Big File in Python?

I have a very big file (4 GB), and when I try to read it my computer hangs. So I want to read it piece by piece, and after processing each piece, store the processed piece into another file and read the next piece.

Is there any method to yield these pieces?

I would love to have a lazy method.


Boštjan Mejak

To write a lazy function, just use yield:

def read_in_chunks(file_object, chunk_size=1024):
    """Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1k."""
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data


with open('really_big_file.dat') as f:
    for piece in read_in_chunks(f):
        process_data(piece)

Another option would be to use iter and a helper function:

f = open('really_big_file.dat')
def read1k():
    return f.read(1024)

for piece in iter(read1k, ''):
    process_data(piece)

If the file is line-based, the file object is already a lazy generator of lines:

for line in open('really_big_file.dat'):
    process_data(line)

It's good practice to use open('really_big_file.dat', 'rb') for compatibility with our POSIX-challenged Windows-using colleagues.
Missing rb as @Tal Weiss mentioned, and missing a file.close() statement (could use with open('really_big_file.dat', 'rb') as f: to accomplish the same); see here for another concise implementation.
@cod3monk3y: text and binary files are different things. Both types are useful but in different cases. The default (text) mode may be useful here i.e., 'rb' is not missing.
@j-f-sebastian: true, the OP did not specify whether he was reading textual or binary data. But if he's using python 2.7 on Windows and is reading binary data, it is certainly worth noting that if he forgets the 'b' his data will very likely be corrupted. From the docs - Python on Windows makes a distinction between text and binary files; [...] it’ll corrupt binary data like that in JPEG or EXE files. Be very careful to use binary mode when reading and writing such files.
Here's a generator that returns 1k chunks: buf_iter = (x for x in iter(lambda: buf.read(1024), '')). Then for chunk in buf_iter: to loop through the chunks.
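Pulling those comments together, a minimal sketch of the same pattern with a with block and binary mode, reusing read_in_chunks and the process_data placeholder from the answer above:

# 'rb' avoids newline translation and protects binary data on Windows;
# the with block guarantees the file is closed even if process_data raises
with open('really_big_file.dat', 'rb') as f:
    for piece in read_in_chunks(f):
        process_data(piece)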
nbro

file.readlines() takes an optional sizehint argument: it reads whole lines until the total amount read (in characters) approximately reaches that size, and returns them as a list.

BUF_SIZE = 65536  # approximate number of characters to pull in per batch of lines
bigfile = open('bigfilename', 'r')
tmp_lines = bigfile.readlines(BUF_SIZE)
while tmp_lines:
    process(tmp_lines)
    tmp_lines = bigfile.readlines(BUF_SIZE)

It's a really great idea, especially when combined with defaultdict to split big data into smaller pieces.
I would recommend using .read(), not .readlines(). If the file is binary, it's not going to have line breaks.
What if the file is one huge string?
This solution is buggy. If one of the lines is larger than your BUF_SIZE, you are going to process an incomplete line. @MattSom is correct.
@MyersCarpenter Will that line be repeated twice? tmp_lines = bigfile.readlines(BUF_SIZE)
user48678

There are already many good answers, but if your entire file is on a single line and you still want to process "rows" (as opposed to fixed-size blocks), these answers will not help you.

99% of the time, it is possible to process files line by line. Then, as suggested in this answer, you can use the file object itself as a lazy generator:

with open('big.csv') as f:
    for line in f:
        process(line)

However, one may run into very big files where the row separator is not '\n' (a common case is '|').

Converting '|' to '\n' before processing may not be an option because it can mess up fields which may legitimately contain '\n' (e.g. free text user input).

Using the csv library is also ruled out because, at least in early versions of the library, it is hardcoded to read the input line by line.

For these kinds of situations, I created the following snippet [Updated in May 2021 for Python 3.8+]:

def rows(f, chunksize=1024, sep='|'):
    """
    Lazily read a file whose row separator is '|' by default.

    Usage:

    >>> with open('big.csv') as f:
    >>>     for r in rows(f):
    >>>         process(r)
    """
    row = ''
    while (chunk := f.read(chunksize)) != '':   # loop until end of file
        while (i := chunk.find(sep)) != -1:     # as long as a separator is found in the chunk
            yield row + chunk[:i]
            chunk = chunk[i+1:]
            row = ''
        row += chunk
    yield row

[For older versions of Python]:

def rows(f, chunksize=1024, sep='|'):
    """
    Lazily read a file whose row separator is '|' by default.

    Usage:

    >>> with open('big.csv') as f:
    >>>     for r in rows(f):
    >>>         process(r)
    """
    curr_row = ''
    while True:
        chunk = f.read(chunksize)
        if chunk == '': # End of file
            yield curr_row
            break
        while True:
            i = chunk.find(sep)
            if i == -1:
                break
            yield curr_row + chunk[:i]
            curr_row = ''
            chunk = chunk[i+1:]
        curr_row += chunk

I was able to use it successfully to solve various problems. It has been extensively tested, with various chunk sizes. Here is the test suite I am using, for those who need to convince themselves:

import os

test_file = 'test_file'

def cleanup(func):
    def wrapper(*args, **kwargs):
        func(*args, **kwargs)
        os.unlink(test_file)
    return wrapper

@cleanup
def test_empty(chunksize=1024):
    with open(test_file, 'w') as f:
        f.write('')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1

@cleanup
def test_1_char_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        f.write('|')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2

@cleanup
def test_1_char(chunksize=1024):
    with open(test_file, 'w') as f:
        f.write('a')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1

@cleanup
def test_1025_chars_1_row(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1025):
            f.write('a')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1

@cleanup
def test_1024_chars_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1023):
            f.write('a')
        f.write('|')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2

@cleanup
def test_1025_chars_1026_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1025):
            f.write('|')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1026

@cleanup
def test_2048_chars_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1022):
            f.write('a')
        f.write('|')
        f.write('a')
        # -- end of 1st chunk --
        for i in range(1024):
            f.write('a')
        # -- end of 2nd chunk
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2

@cleanup
def test_2049_chars_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1022):
            f.write('a')
        f.write('|')
        f.write('a')
        # -- end of 1st chunk --
        for i in range(1024):
            f.write('a')
        # -- end of 2nd chunk
        f.write('a')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2

if __name__ == '__main__':
    for chunksize in [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]:
        test_empty(chunksize)
        test_1_char_2_rows(chunksize)
        test_1_char(chunksize)
        test_1025_chars_1_row(chunksize)
        test_1024_chars_2_rows(chunksize)
        test_1025_chars_1026_rows(chunksize)
        test_2048_chars_2_rows(chunksize)
        test_2049_chars_2_rows(chunksize)

Community

If your computer, OS, and Python are 64-bit, then you can use the mmap module to map the contents of the file into memory and access it with indices and slices. Here is an example from the documentation:

import mmap
with open("hello.txt", "r+b") as f:
    # memory-map the file, size 0 means whole file
    mm = mmap.mmap(f.fileno(), 0)
    # read content via standard file methods
    print(mm.readline())  # prints b"Hello Python!\n"
    # read content via slice notation
    print(mm[:5])  # prints b"Hello"
    # update content using slice notation;
    # note that new content must have same size
    mm[6:] = b" world!\n"
    # ... and read again using standard file methods
    mm.seek(0)
    print(mm.readline())  # prints b"Hello  world!\n"
    # close the map
    mm.close()

If your computer, OS, or Python is 32-bit, then mmap-ing large files can reserve large parts of your address space and starve your program of memory.
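For read-only chunked processing, a rough sketch of iterating the mapped file in fixed-size slices; the filename, chunk size, and process_data are placeholders, and on a 64-bit build only the pages you touch are brought into physical memory:

import mmap

CHUNK = 1024 * 1024  # 1 MiB per slice, purely illustrative

with open('really_big_file.dat', 'rb') as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        for offset in range(0, len(mm), CHUNK):
            piece = mm[offset:offset + CHUNK]  # bytes object for this window only
            process_data(piece)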


How is this supposed to work? What if I have a 32GB file? What if I'm on a VM with 256MB RAM? Mmapping such a huge file is really never a good thing.
This answer deserves a -12 vote. This will kill anyone using it for big files.
This can work on a 64-bit Python even for big files. Even though the file is memory-mapped, it's not read to memory, so the amount of physical memory can be much smaller than the file size.
@SavinoSguera: does the size of physical memory matter when mmapping a file?
@V3ss0n: I've tried to mmap a 32GB file on 64-bit Python. It works (I have less than 32GB of RAM): I can access the start, the middle, and the end of the file using both the sequence and file interfaces.
Community
f = ... # file-like object, i.e. supporting read(size) function and 
        # returning empty string '' when there is nothing to read

def chunked(file, chunk_size):
    return iter(lambda: file.read(chunk_size), '')

for data in chunked(f, 65536):
    process_data(data)  # process the chunk here

UPDATE: The approach is best explained in https://stackoverflow.com/a/4566523/38592
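One detail worth noting: if the file is opened in binary mode, read() returns b'' at end of file, so the iter() sentinel has to be b'' rather than ''. A sketch, with the filename and process_data as placeholders:

with open('really_big_file.dat', 'rb') as f:
    # sentinel b'' matches the empty bytes object returned at end of file
    for data in iter(lambda: f.read(65536), b''):
        process_data(data)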


This works well for blobs, but may not be good for line-separated content (like CSV, HTML, etc., where processing needs to be handled line by line).
excuse me. what is the value of f ?
@user1, it can be open('filename')
Boris Verkhovskiy

In Python 3.8+ you can use .read() in a while loop:

with open("somefile.txt") as f:
    while chunk := f.read(8192):
        do_something(chunk)

Of course, you can use any chunk size you want; you don't have to use 8192 (2**13) bytes. Unless your file's size happens to be a multiple of your chunk size, the last chunk will be smaller than your chunk size.
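Since the question also asks about writing each processed piece to another file, here is a rough sketch of that loop built on the same pattern; the filenames and transform() are placeholders, not anything defined above:

with open("somefile.txt", "rb") as f_in, open("processed.dat", "wb") as f_out:
    while chunk := f_in.read(8192):
        f_out.write(transform(chunk))  # transform() stands in for your per-chunk processing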


bruce

Refer to Python's official documentation: https://docs.python.org/3/library/functions.html#iter

Maybe this method is more Pythonic:

"""A file object returned by open() is a iterator with
read method which could specify current read's block size
"""
with open('mydata.db', 'r') as f_in:
    block_read = partial(f_in.read, 1024 * 1024)
    block_iterator = iter(block_read, '')

    for index, block in enumerate(block_iterator, start=1):
        block = process_block(block)  # process your block data

        with open(f'{index}.txt', 'w') as f_out:
            f_out.write(block)

Bruce is correct. I use functools.partial to parse video streams. With PyPy3, I can parse over 1 GB a second: `for pkt in iter(partial(vid.read, PACKET_SIZE), b""):`
TonyCoolZhu

I think we can write it like this:

def read_file(path, block_size=1024): 
    with open(path, 'rb') as f: 
        while True: 
            piece = f.read(block_size) 
            if piece: 
                yield piece 
            else: 
                return

for piece in read_file(path):
    process_piece(piece)

sinzi

I am not allowed to comment due to my low reputation, but SilentGhost's solution should be much easier with file.readlines([sizehint]).

Python file methods

Edit: SilentGhost is right, but this should be better than:

s = "" 
for i in xrange(100): 
   s += file.next()

ok, sorry, you are absolutely right. but maybe this solution will make you happier ;) : s = "" for i in xrange(100): s += file.next()
-1: Terrible solution; this would mean creating a new string in memory for each line and copying all the file data read so far into the new string. The worst performance and memory use.
why would it copy the entire file data into a new string? from the python documentation: In order to make a for loop the most efficient way of looping over the lines of a file (a very common operation), the next() method uses a hidden read-ahead buffer.
@sinzi: "s +=" or concatenating strings makes a new copy of the string each time, since the string is immutable, so you are creating a new string.
@nosklo: these are implementation details; a list comprehension can be used in its place.
Jason Plank

I'm in a somewhat similar situation. It's not clear whether you know the chunk size in bytes; I usually don't, but the number of records (lines) required is known:

def get_line():
    with open('4gb_file') as file:
        for i in file:
            yield i

lines_required = 100
gen = get_line()
chunk = [i for i, j in zip(gen, range(lines_required))]

Update: Thanks nosklo. Here's what I meant. It almost works, except that it loses a line 'between' chunks.

chunk = [next(gen) for i in range(lines_required)]

It does the trick without losing any lines, but it doesn't look very nice.
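A tidier way to take a fixed number of lines at a time is itertools.islice; a minimal sketch, reusing the '4gb_file' name from above and the process_data placeholder used elsewhere on this page:

from itertools import islice

lines_required = 100

with open('4gb_file') as f:
    while True:
        chunk = list(islice(f, lines_required))  # next batch of up to lines_required lines
        if not chunk:
            break
        process_data(chunk)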


Is this pseudocode? It won't work. It is also needlessly confusing; you should make the number of lines an optional parameter of the get_line function.
Shrikant

You can use the following code.

file_obj = open('big_file') 

open() returns a file object

Then use os.stat to get the size:

import os
file_size = os.stat('big_file').st_size

for i in range(file_size // 1024):
    print(file_obj.read(1024))

Wouldn't read the whole file if the size isn't a multiple of 1024.
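A rough sketch of one way to cover that remainder while keeping the fixed block size; the filename is illustrative, and the ceiling division makes the loop include the final partial block:

import os

block_size = 1024
file_size = os.stat('big_file').st_size
num_blocks = (file_size + block_size - 1) // block_size  # round up

with open('big_file', 'rb') as file_obj:
    for _ in range(num_blocks):
        print(file_obj.read(block_size))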