
Convert list of dictionaries to a pandas DataFrame

How can I convert a list of dictionaries into a DataFrame? Given:

[{'points': 50, 'time': '5:00', 'year': 2010}, 
 {'points': 25, 'time': '6:00', 'month': "february"}, 
 {'points':90, 'time': '9:00', 'month': 'january'}, 
 {'points_h1':20, 'month': 'june'}]

I want to turn the above into a DataFrame:

      month  points  points_h1  time  year
0       NaN      50        NaN  5:00  2010
1  february      25        NaN  6:00   NaN
2   january      90        NaN  9:00   NaN
3      june     NaN         20   NaN   NaN

Note: Order of the columns does not matter.


Mateen Ulhaq

If ds is a list of dicts:

df = pd.DataFrame(ds)

Note: this does not work with nested data.
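
Applied to the list from the question, a minimal sketch (assuming pandas is imported as pd) looks like:

import pandas as pd

ds = [{'points': 50, 'time': '5:00', 'year': 2010},
      {'points': 25, 'time': '6:00', 'month': 'february'},
      {'points': 90, 'time': '9:00', 'month': 'january'},
      {'points_h1': 20, 'month': 'june'}]

df = pd.DataFrame(ds)  # missing keys show up as NaN, as in the expected output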


How might one use one of the key/value pairs as the index (e.g. time)?
@CatsLoveJazz You can just do df = df.set_index('time') afterwards
@CatsLoveJazz No, that is not possible when converting from a dict.
As of pandas 0.19.2, there's no mention of this in the documentation, at least not in the docs for pandas.DataFrame.
Mind that for a nested dictionary ('{"":{"...') you should use the json_normalize approach; see the detailed answer by @cs95.
cs95

How do I convert a list of dictionaries to a pandas DataFrame?

The other answers are correct, but not much has been explained in terms of the advantages and limitations of these methods. The aim of this post is to show examples of these methods under different situations, discuss when to use (and when not to use) each one, and suggest alternatives.

DataFrame(), DataFrame.from_records(), and .from_dict()

Depending on the structure and format of your data, there are situations where either all three methods work, or some work better than others, or some don't work at all.

Consider a very contrived example.

import numpy as np
import pandas as pd

np.random.seed(0)
data = pd.DataFrame(
    np.random.choice(10, (3, 4)), columns=list('ABCD')).to_dict('records')

print(data)
[{'A': 5, 'B': 0, 'C': 3, 'D': 3},
 {'A': 7, 'B': 9, 'C': 3, 'D': 5},
 {'A': 2, 'B': 4, 'C': 7, 'D': 6}]

This list consists of "records" with every key present. This is the simplest case you could encounter.

# The following methods all produce the same output.
pd.DataFrame(data)
pd.DataFrame.from_dict(data)
pd.DataFrame.from_records(data)

   A  B  C  D
0  5  0  3  3
1  7  9  3  5
2  2  4  7  6

Word on Dictionary Orientations: orient='index'/'columns'

Before continuing, it is important to make the distinction between the different types of dictionary orientations and how pandas supports them. There are two primary types: "columns" and "index".

orient='columns'
Dictionaries with the "columns" orientation will have their keys correspond to columns in the equivalent DataFrame.

For example, data above is in the "columns" orient.

data_c = [
 {'A': 5, 'B': 0, 'C': 3, 'D': 3},
 {'A': 7, 'B': 9, 'C': 3, 'D': 5},
 {'A': 2, 'B': 4, 'C': 7, 'D': 6}]
pd.DataFrame.from_dict(data_c, orient='columns')

   A  B  C  D
0  5  0  3  3
1  7  9  3  5
2  2  4  7  6

Note: If you are using pd.DataFrame.from_records, the orientation is assumed to be "columns" (you cannot specify otherwise), and the dictionaries will be loaded accordingly.

orient='index'
With this orient, keys are assumed to correspond to index values. This kind of data is best suited for pd.DataFrame.from_dict.

data_i ={
 0: {'A': 5, 'B': 0, 'C': 3, 'D': 3},
 1: {'A': 7, 'B': 9, 'C': 3, 'D': 5},
 2: {'A': 2, 'B': 4, 'C': 7, 'D': 6}}
pd.DataFrame.from_dict(data_i, orient='index')

   A  B  C  D
0  5  0  3  3
1  7  9  3  5
2  2  4  7  6

This case is not considered in the OP, but is still useful to know.
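
If your data starts out as a list of records, a quick sketch of getting it into this orientation is to key each record by its position (data_i_from_records is a hypothetical name):

# a sketch: re-keying a list of records by position gives the "index" orientation
data_i_from_records = {i: record for i, record in enumerate(data_c)}
pd.DataFrame.from_dict(data_i_from_records, orient='index')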

Setting Custom Index

If you need a custom index on the resultant DataFrame, you can set it using the index=... argument.

pd.DataFrame(data, index=['a', 'b', 'c'])
# pd.DataFrame.from_records(data, index=['a', 'b', 'c'])

   A  B  C  D
a  5  0  3  3
b  7  9  3  5
c  2  4  7  6

This is not supported by pd.DataFrame.from_dict.
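
If you do need from_dict here, a small workaround (a sketch, not part of the from_dict API) is to assign the index after construction:

df = pd.DataFrame.from_dict(data)
df.index = ['a', 'b', 'c']  # same result as passing index= to the other two methods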

Dealing with Missing Keys/Columns

All methods work out-of-the-box when handling dictionaries with missing keys/column values. For example,

data2 = [
     {'A': 5, 'C': 3, 'D': 3},
     {'A': 7, 'B': 9, 'F': 5},
     {'B': 4, 'C': 7, 'E': 6}]
# The methods below all produce the same output.
pd.DataFrame(data2)
pd.DataFrame.from_dict(data2)
pd.DataFrame.from_records(data2)

     A    B    C    D    E    F
0  5.0  NaN  3.0  3.0  NaN  NaN
1  7.0  9.0  NaN  NaN  NaN  5.0
2  NaN  4.0  7.0  NaN  6.0  NaN

Reading Subset of Columns

"What if I don't want to read in every single column"? You can easily specify this using the columns=... parameter.

For example, from the example dictionary of data2 above, if you wanted to read only columns "A', 'D', and 'F', you can do so by passing a list:

pd.DataFrame(data2, columns=['A', 'D', 'F'])
# pd.DataFrame.from_records(data2, columns=['A', 'D', 'F'])

     A    D    F
0  5.0  3.0  NaN
1  7.0  NaN  5.0
2  NaN  NaN  NaN

This is not supported by pd.DataFrame.from_dict with the default orient "columns".

pd.DataFrame.from_dict(data2, orient='columns', columns=['A', 'B'])
ValueError: cannot use columns parameter with orient='columns'
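
A workaround (just a sketch) is to build the full frame first and then reindex to the columns you want:

# construct everything, then keep only the desired columns
pd.DataFrame.from_dict(data2).reindex(columns=['A', 'D', 'F'])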

Reading Subset of Rows

Not supported by any of these methods directly. You will have to iterate over your data and perform a reverse delete in-place as you iterate. For example, to extract only the 0th and 2nd rows from data2 above, you can use:

rows_to_select = {0, 2}
for i in reversed(range(len(data2))):
    if i not in rows_to_select:
        del data2[i]

pd.DataFrame(data2)
# pd.DataFrame.from_dict(data2)
# pd.DataFrame.from_records(data2)

     A    B  C    D    E
0  5.0  NaN  3  3.0  NaN
1  NaN  4.0  7  NaN  6.0
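
If you would rather not mutate data2 in place, a non-destructive sketch builds the subset with a comprehension instead:

rows_to_select = {0, 2}
subset = [row for i, row in enumerate(data2) if i in rows_to_select]
pd.DataFrame(subset)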

The Panacea: json_normalize for Nested Data

A strong, robust alternative to the methods outlined above is the json_normalize function which works with lists of dictionaries (records), and in addition can also handle nested dictionaries.

pd.json_normalize(data)

   A  B  C  D
0  5  0  3  3
1  7  9  3  5
2  2  4  7  6
pd.json_normalize(data2)

     A    B  C    D    E
0  5.0  NaN  3  3.0  NaN
1  NaN  4.0  7  NaN  6.0

Again, keep in mind that the data passed to json_normalize needs to be in the list-of-dictionaries (records) format.
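
If you happen to have a single record rather than a list, a quick sketch is to wrap it in a list before passing it in:

pd.json_normalize([{'A': 5, 'B': 0, 'C': 3, 'D': 3}])  # one-row DataFrame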

As mentioned, json_normalize can also handle nested dictionaries. Here's an example taken from the documentation.

data_nested = [
  {'counties': [{'name': 'Dade', 'population': 12345},
                {'name': 'Broward', 'population': 40000},
                {'name': 'Palm Beach', 'population': 60000}],
   'info': {'governor': 'Rick Scott'},
   'shortname': 'FL',
   'state': 'Florida'},
  {'counties': [{'name': 'Summit', 'population': 1234},
                {'name': 'Cuyahoga', 'population': 1337}],
   'info': {'governor': 'John Kasich'},
   'shortname': 'OH',
   'state': 'Ohio'}
]
pd.json_normalize(data_nested,
                  record_path='counties',
                  meta=['state', 'shortname', ['info', 'governor']])

         name  population    state shortname info.governor
0        Dade       12345  Florida        FL    Rick Scott
1     Broward       40000  Florida        FL    Rick Scott
2  Palm Beach       60000  Florida        FL    Rick Scott
3      Summit        1234     Ohio        OH   John Kasich
4    Cuyahoga        1337     Ohio        OH   John Kasich

For more information on the meta and record_path arguments, check out the documentation.

Summarising

Here's a table of all the methods discussed above, along with supported features/functionality.

[Image: summary table of the methods above and the features each supports]
https://i.stack.imgur.com/SRbw9.png

* Use orient='columns' and then transpose to get the same effect as orient='index'.
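
A minimal sketch of that footnote, using data_i from above: loading it column-wise and transposing reproduces the orient='index' result.

# dict of dicts loaded as columns, then transposed
pd.DataFrame(data_i).T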


Woah! Okay, this along with the Merging SO post belongs in the API documentation. You should contribute to the pandas documentation if you haven't already done so. Ted Petrou just posted a LinkedIn article about the popularity of pandas on Stack Overflow and mentions that lack of good documentation contributes to the volume of questions here.
@ScottBoston You're absolutely right, I've heard that enough times now that I know it is something I should give more serious thought to. I think the documentation can be a great way of helping users, more so than posting on questions that would only reach a fraction of the same audience.
It's particularly problematic because the details of which methods are good for which cases often change, and so having very lengthy, deep dive answers on SO is not only not as useful as having it in the pandas official documentation, but often is even harmful or misleading because some change to the function internals can suddenly make the answer incorrect or factually wrong and it's not clearly linked to the actual source repo to flag for documentation updating.
It is a nice answer. I think it is time for us to revisit these common questions under the most current pandas version :-)
@ely: that's never a reason not to write answers here, anyway. Any answer can become outdated, that's what we have voting for, and different perspectives and different goals exist here, and it is always valuable to have different ways of explaining the same thing.
Asclepius

In pandas 0.16.2, I had to do pd.DataFrame.from_records(d) to get this to work.


The good thing about this approach is that it also works with a deque.
Works fine with pandas 0.17.1 with @joris' solution.
Using 0.14.1, @joris' solution didn't work, but this did.
In 0.18.1, one must use from_records if the dictionaries do not all have the same keys.
shivsn

You can also use pd.DataFrame.from_dict(d), as follows:

In [8]: d = [{'points': 50, 'time': '5:00', 'year': 2010}, 
   ...: {'points': 25, 'time': '6:00', 'month': "february"}, 
   ...: {'points':90, 'time': '9:00', 'month': 'january'}, 
   ...: {'points_h1':20, 'month': 'june'}]

In [12]: pd.DataFrame.from_dict(d)
Out[12]: 
      month  points  points_h1  time    year
0       NaN    50.0        NaN  5:00  2010.0
1  february    25.0        NaN  6:00     NaN
2   january    90.0        NaN  9:00     NaN
3      june     NaN       20.0   NaN     NaN

The question is about constructing a data frame from a list of dicts, not from a single dict as you assumed in your answer.
@a_guest Check the updated answer. I'm not assuming.
Community

Python 3: Most of the solutions listed previously work. However, there are instances when the row number of the DataFrame is not required and each row (record) has to be written individually. The following method is useful in that case.

import csv

myfile = r'C:\Users\John\Desktop\export_dataframe.csv'

records_to_save = data2  # used as in the thread

colnames = list(records_to_save[0].keys())
# colnames holds the keys of the first record. Values are written in that
# order, and "None" is written in case of a missing value.

with open(myfile, 'w', newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(colnames)
    for d in records_to_save:
        writer.writerow([d.get(r, "None") for r in colnames])
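
A slightly more idiomatic variant of the same idea (a sketch, not the original author's code) uses csv.DictWriter: restval fills in missing keys and extrasaction='ignore' drops keys that are not in colnames, matching the loop above.

import csv

with open(myfile, 'w', newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=colnames, restval="None",
                            extrasaction='ignore')
    writer.writeheader()
    writer.writerows(records_to_save)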

cs95

The easiest way I have found to do it is like this:

dict_count = len(dict_list)
df = pd.DataFrame(dict_list[0], index=[0])
for i in range(1, dict_count):  # range(1, dict_count-1) would skip the last dict
    df = df.append(dict_list[i], ignore_index=True)
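
For reference, a loop-free sketch that produces the same frame is simply the list-of-dicts constructor from the accepted answer:

df = pd.DataFrame(dict_list)  # one call instead of repeated append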

Avoid looping when using pandas; looping kills the whole purpose of pandas.
I didn't downvote, but while this will technically work its performance is quite poor. See this for more information.
vloubes

I have the following list of dicts with datetime keys and int values:

import datetime

data_list = [{datetime.date(2022, 2, 10): 7}, {datetime.date(2022, 2, 11): 1}, {datetime.date(2022, 2, 11): 1}]

I had a problem converting it to a DataFrame with the methods above, as they created a DataFrame with the dates as columns...

My solution:

df = pd.DataFrame()
for d in data_list:
    temp_df = pd.DataFrame.from_dict(d, orient='index')
    df = df.append(temp_df)  # note: DataFrame.append was removed in pandas 2.0
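
Since DataFrame.append was removed in pandas 2.0, a concat-based sketch of the same loop would be:

df = pd.concat(
    pd.DataFrame.from_dict(d, orient='index') for d in data_list
)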

You are changing the orientation of the DataFrame. The selected answer will also give you a DataFrame in column/vertical orientation.