
How to apply a function to two columns of Pandas dataframe

Suppose I have a df with columns 'ID', 'col_1' and 'col_2', and I define a function:

f = lambda x, y : my_function_expression.

Now I want to apply f to the df's two columns 'col_1' and 'col_2' to calculate a new column 'col_3' element-wise, somewhat like:

df['col_3'] = df[['col_1','col_2']].apply(f)  
# Pandas gives: TypeError: <lambda>() takes exactly 2 arguments (1 given)

How do I do this?

*** Added a detailed sample below ***

import pandas as pd

df = pd.DataFrame({'ID':['1','2','3'], 'col_1': [0,2,3], 'col_2':[1,4,5]})
mylist = ['a','b','c','d','e','f']

def get_sublist(sta,end):
    return mylist[sta:end+1]

#df['col_3'] = df[['col_1','col_2']].apply(get_sublist,axis=1)
# expect above to output df as below 

  ID  col_1  col_2            col_3
0  1      0      1       ['a', 'b']
1  2      2      4  ['c', 'd', 'e']
2  3      3      5  ['d', 'e', 'f']
I found a related Q&A at the URL below, but my issue is calculating a new column from two existing columns, not two columns from one. stackoverflow.com/questions/12356501/…

ajrwhite

There is a clean, one-line way of doing this in Pandas:

df['col_3'] = df.apply(lambda x: f(x.col_1, x.col_2), axis=1)

This allows f to be a user-defined function with multiple input values, and uses (safe) column names rather than (unsafe) numeric indices to access the columns.

Example with data (based on original question):

import pandas as pd

df = pd.DataFrame({'ID':['1', '2', '3'], 'col_1': [0, 2, 3], 'col_2':[1, 4, 5]})
mylist = ['a', 'b', 'c', 'd', 'e', 'f']

def get_sublist(sta,end):
    return mylist[sta:end+1]

df['col_3'] = df.apply(lambda x: get_sublist(x.col_1, x.col_2), axis=1)

Output of print(df):

  ID  col_1  col_2      col_3
0  1      0      1     [a, b]
1  2      2      4  [c, d, e]
2  3      3      5  [d, e, f]

If your column names contain spaces or share a name with an existing dataframe attribute, you can index with square brackets:

df['col_3'] = df.apply(lambda x: f(x['col 1'], x['col 2']), axis=1)

Note: if using axis=1 and your column is called name, it will not actually return your column data but the row's index, similar to getting the name in a groupby(). I solved this by renaming my column (see the sketch after these comments).
THIS IS IT! I just didn't realize you could insert user-defined functions with multiple input parameters into lambdas. It's important to note (I think) that you're using DF.apply() rather than Series.apply(). This lets you index the df using the two columns you want, and pass the entire column into the function, but because you're using apply(), it applies the function in an element-wise fashion down the whole column. Brilliant! Thank you for posting!
FINALLY! You saved my day!
I believe the suggested way to do this is df.loc[:, 'new col'] = df.apply.....
@Mez13 you can also use f(x['col 1'], x['col 2']) style indexing if necessary (e.g. if your column names have spaces or protected names).
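A minimal sketch of the name-column gotcha mentioned in the comments above, using a made-up toy frame: with axis=1 each row is passed as a Series, so row.name collides with the Series' .name attribute (which holds the row's index label); bracket indexing, or renaming the column, avoids it.

import pandas as pd

df = pd.DataFrame({'name': ['x', 'y'], 'value': [1, 2]})

# row.name is the row's index label (0, 1, ...), not the 'name' column
print(df.apply(lambda row: row.name, axis=1))

# bracket indexing (or renaming the column) returns the actual column data
print(df.apply(lambda row: row['name'], axis=1))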
Aman

Here's an example using apply on the dataframe, which I am calling with axis = 1.

Note the difference is that instead of trying to pass two values to the function f, rewrite the function to accept a pandas Series object, and then index the Series to get the values needed.

In [49]: df
Out[49]: 
          0         1
0  1.000000  0.000000
1 -0.494375  0.570994
2  1.000000  0.000000
3  1.876360 -0.229738
4  1.000000  0.000000

In [50]: def f(x):    
   ....:  return x[0] + x[1]  
   ....:  

In [51]: df.apply(f, axis=1) #passes a Series object, row-wise
Out[51]: 
0    1.000000
1    0.076619
2    1.000000
3    1.646622
4    1.000000

Depending on your use case, it is sometimes helpful to create a pandas group object, and then use apply on the group.
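For illustration only (this sketch is not from the original answer, and the repeated IDs are made up): with a group object, apply receives a sub-DataFrame per group rather than a row.

import pandas as pd

df = pd.DataFrame({'ID': ['1', '1', '2'], 'col_1': [0, 2, 3], 'col_2': [1, 4, 5]})

# each group is a sub-DataFrame; sum col_1 + col_2 within each ID
per_id = df.groupby('ID').apply(lambda g: (g['col_1'] + g['col_2']).sum())
print(per_id)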


Yes, I tried to use apply, but couldn't find the valid syntax. And if each row of df is unique, should I still use groupby?
Added an example to my answer, hope this does what you're looking for. If not, please provide a more specific example function since sum is solved successfully by any of the methods suggested so far.
Would you please paste your code? I rewrote the function as def get_sublist(x): return mylist[x[1]:x[2] + 1] and df['col_3'] = df.apply(get_sublist, axis=1) gives 'ValueError: operands could not be broadcast together with shapes (2) (3)'
@Aman: with Pandas version 0.14.1 (and possibly earlier), you can use a lambda expression as well. Given the df object you defined, another approach (with equivalent results) is df.apply(lambda x: x[0] + x[1], axis = 1).
@CanCeylan you can just use the column names in the function instead of indexes then you don't need to worry about order changing, or get the index by name e.g. see stackoverflow.com/questions/13021654/…
sjm

A simple solution is:

df['col_3'] = df[['col_1','col_2']].apply(lambda x: f(*x), axis=1)

How is this answer different from the approach in the question, df['col_3'] = df[['col_1','col_2']].apply(f)? Just to confirm: the approach in the question didn't work because the poster did not specify axis=1, and the default is axis=0?
This answer is comparable to @Aman's answer but a bit slicker. It constructs an anonymous function which takes an iterable and unpacks it before passing it to function f.
This method is twice as fast in my case, with 100k rows (compared to df.apply(lambda x: f(x.col_1, x.col_2), axis=1)).
most elegant answer here.
@sjm Nice! But what if the arguments of x are a mixture of args and kwargs etc.?
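One possible way to handle that comment (a sketch, with a hypothetical f that mixes positional and keyword arguments): unpack the row positionally inside the lambda and pass the keywords explicitly.

def f(a, b, scale=1):
    # hypothetical function taking two positional args plus a keyword arg
    return (a + b) * scale

df['col_3'] = df[['col_1', 'col_2']].apply(lambda x: f(*x, scale=2), axis=1)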
佚名

An interesting question! My answer is below:

import pandas as pd

def sublst(row):
    return lst[row['J1']:row['J2']]

df = pd.DataFrame({'ID':['1','2','3'], 'J1': [0,2,3], 'J2':[1,4,5]})
print(df)
lst = ['a','b','c','d','e','f']

df['J3'] = df.apply(sublst,axis=1)
print(df)

Output:

  ID  J1  J2
0  1   0   1
1  2   2   4
2  3   3   5
  ID  J1  J2      J3
0  1   0   1     [a]
1  2   2   4  [c, d]
2  3   3   5  [d, e]

I changed the column names to ID, J1, J2, J3 to ensure ID < J1 < J2 < J3, so the columns display in the right order.

A more concise version:

import pandas as pd

df = pd.DataFrame({'ID':['1','2','3'], 'J1': [0,2,3], 'J2':[1,4,5]})
print(df)
lst = ['a','b','c','d','e','f']

df['J3'] = df.apply(lambda row:lst[row['J1']:row['J2']],axis=1)
print(df)

axis=1 is what I was after, thanks.
JoeCondron

The method you are looking for is Series.combine. However, it seems some care has to be taken around datatypes. In your example, you would (as I did when testing the answer) naively call

df['col_3'] = df.col_1.combine(df.col_2, func=get_sublist)

However, this throws the error:

ValueError: setting an array element with a sequence.

My best guess is that it seems to expect the result to be of the same type as the series calling the method (df.col_1 here). However, the following works:

df['col_3'] = df.col_1.astype(object).combine(df.col_2, func=get_sublist)

df

  ID  col_1  col_2      col_3
0  1      0      1     [a, b]
1  2      2      4  [c, d, e]
2  3      3      5  [d, e, f]

Ted Petrou

Returning a list from apply is a dangerous operation as the resulting object is not guaranteed to be either a Series or a DataFrame. And exceptions might be raised in certain cases. Let's walk through a simple example:

import numpy as np
import pandas as pd

df = pd.DataFrame(data=np.random.randint(0, 5, (5,3)),
                  columns=['a', 'b', 'c'])
df
   a  b  c
0  4  0  0
1  2  0  1
2  2  2  2
3  1  2  2
4  3  0  0

There are three possible outcomes when returning a list from apply:

1) If the length of the returned list is not equal to the number of columns, then a Series of lists is returned.

df.apply(lambda x: list(range(2)), axis=1)  # returns a Series
0    [0, 1]
1    [0, 1]
2    [0, 1]
3    [0, 1]
4    [0, 1]
dtype: object

2) When the length of the returned list is equal to the number of columns then a DataFrame is returned and each column gets the corresponding value in the list.

df.apply(lambda x: list(range(3)), axis=1) # returns a DataFrame
   a  b  c
0  0  1  2
1  0  1  2
2  0  1  2
3  0  1  2
4  0  1  2

3) If the length of the returned list equals the number of columns for the first row, but at least one row returns a list with a different number of elements than the number of columns, a ValueError is raised.

i = 0
def f(x):
    global i
    if i == 0:
        i += 1
        return list(range(3))
    return list(range(4))

df.apply(f, axis=1) 
ValueError: Shape of passed values is (5, 4), indices imply (5, 3)

Answering the problem without apply

Using apply with axis=1 is very slow. It is possible to get much better performance (especially on larger datasets) with basic iterative methods.

Create larger dataframe

df1 = df.sample(100000, replace=True).reset_index(drop=True)

Timings

# apply is slow with axis=1
%timeit df1.apply(lambda x: mylist[x['col_1']: x['col_2']+1], axis=1)
2.59 s ± 76.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# zip - similar to @Thomas
%timeit [mylist[v1:v2+1] for v1, v2 in zip(df1.col_1, df1.col_2)]  
29.5 ms ± 534 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

@Thomas answer

%timeit list(map(get_sublist, df1['col_1'],df1['col_2']))
34 ms ± 459 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

It's nice to see such detailed answers that one can learn from.
For the latest pandas version (1.3.1), the returned list is preserved and all three examples above work fine; all the results are pd.Series with dtype='object'. BUT pd.apply(f, axis=0) works similarly to the above. It's strange that pd.DataFrame.apply breaks the symmetry, meaning df.T.apply(f, axis=0).T is not always the same as df.apply(f, axis=1). For example, when f = lambda x: list(range(2)), df.T.apply(f, axis=0).T and df.apply(f, axis=1) are not the same.
Trae Wallace

I'm going to put in a vote for np.vectorize. It allows you to just shoot over x number of columns and not deal with the dataframe in the function, so it's great for functions you don't control or doing something like sending 2 columns and a constant into a function (i.e. col_1, col_2, 'foo').

import numpy as np
import pandas as pd

df = pd.DataFrame({'ID':['1','2','3'], 'col_1': [0,2,3], 'col_2':[1,4,5]})
mylist = ['a','b','c','d','e','f']

def get_sublist(sta,end):
    return mylist[sta:end+1]

#df['col_3'] = df[['col_1','col_2']].apply(get_sublist,axis=1)
# expect above to output df as below 

df.loc[:,'col_3'] = np.vectorize(get_sublist, otypes=["O"])(df['col_1'], df['col_2'])


df

  ID  col_1  col_2      col_3
0  1      0      1     [a, b]
1  2      2      4  [c, d, e]
2  3      3      5  [d, e, f]
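The answer above also mentions sending two columns and a constant into the function; here is a hedged sketch of that case (get_sublist_tagged and col_4 are made up for illustration; np.vectorize broadcasts the scalar 'foo' across the array arguments):

def get_sublist_tagged(sta, end, tag):
    # same slicing as get_sublist, plus an extra constant argument
    return [tag] + mylist[sta:end+1]

df['col_4'] = np.vectorize(get_sublist_tagged, otypes=["O"])(df['col_1'], df['col_2'], 'foo')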

This doesn't really answer the question using pandas.
The question is "How to apply a function to two columns of Pandas dataframe" not "How to apply a function to two columns of Pandas dataframe using only Pandas methods" and numpy is a dependency of Pandas so you have to have it installed anyway, so this seems like a strange objection.
Nitin

The way you have written f, it needs two inputs. If you look at the error message, it says you are not providing two inputs to f, just one. The error message is correct. The mismatch is because df[['col_1','col_2']] returns a single dataframe with two columns, not two separate columns.

You need to change f so that it takes a single input, keep the above dataframe as that input, then break it up into x and y inside the function body. Then do whatever you need and return a single value.

You need this function signature because the syntax is .apply(f), so f needs to take a single thing (the dataframe row) rather than the two things your current f expects.

Since you haven't provided the body of f, I can't help in any more detail, but this should provide the way out without fundamentally changing your code or resorting to methods other than apply.
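A sketch of the refactor described above, using the question's get_sublist as a stand-in for the unknown f:

import pandas as pd

mylist = ['a', 'b', 'c', 'd', 'e', 'f']
df = pd.DataFrame({'ID': ['1', '2', '3'], 'col_1': [0, 2, 3], 'col_2': [1, 4, 5]})

def f(row):
    # single input: the row Series; break it into x and y inside the body
    x, y = row['col_1'], row['col_2']
    return mylist[x:y + 1]

df['col_3'] = df[['col_1', 'col_2']].apply(f, axis=1)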


Thomas

I'm sure this isn't as fast as the solutions using Pandas or Numpy operations, but if you don't want to rewrite your function you can use map. Using the original example data -

import pandas as pd

df = pd.DataFrame({'ID':['1','2','3'], 'col_1': [0,2,3], 'col_2':[1,4,5]})
mylist = ['a','b','c','d','e','f']

def get_sublist(sta,end):
    return mylist[sta:end+1]

df['col_3'] = list(map(get_sublist,df['col_1'],df['col_2']))
# In Python 2, map already returns a list, so don't wrap it in list()

We could pass as many arguments as we wanted into the function this way. The output is what we wanted:

ID  col_1  col_2      col_3
0  1      0      1     [a, b]
1  2      2      4  [c, d, e]
2  3      3      5  [d, e, f]

This is actually much faster than the answers that use apply with axis=1.
This is 4 years later, but such a fast idiom compared to apply! Thanks from the future.
Rivers

Here is a faster solution:

def func_1(a,b):
    return a + b

df["C"] = func_1(df["A"].to_numpy(),df["B"].to_numpy())

This is 380 times faster than df.apply(f, axis=1) from @Aman and 310 times faster than df['col_3'] = df.apply(lambda x: f(x.col_1, x.col_2), axis=1) from @ajrwhite.

I added some benchmarks too:

Results:

  FUNCTIONS   TIMINGS   GAIN
apply lambda    0.7     x 1
apply           0.56    x 1.25
map             0.3     x 2.3
np.vectorize    0.01    x 70
f3 on Series    0.0026  x 270
f3 on np arrays 0.0018  x 380
f3 numba        0.0018  x 380

In short:

Using apply is slow. We can speed up things very simply, just by using a function that will operate directly on Pandas Series (or better on numpy arrays). And because we will operate on Pandas Series or numpy arrays, we will be able to vectorize the operations. The function will return a Pandas Series or numpy array that we will assign as a new column.

And here is the benchmark code:

import timeit

timeit_setup = """
import pandas as pd
import numpy as np
import numba

np.random.seed(0)

# Create a DataFrame of 10000 rows with 2 columns "A" and "B" 
# containing integers between 0 and 9
df = pd.DataFrame(np.random.randint(0,10,size=(10000, 2)), columns=["A", "B"])

def f1(a,b):
    # Here a and b are the values of column A and B for a specific row: integers
    return a + b

def f2(x):
    # Here, x is pandas Series, and corresponds to a specific row of the DataFrame
    # 0 and 1 are the indexes of columns A and B
    return x[0] + x[1]  

def f3(a,b):
    # Same as f1 but we will pass parameters that will allow vectorization
    # Here, A and B will be Pandas Series or numpy arrays
    # with df["C"] = f3(df["A"],df["B"]): Pandas Series
    # with df["C"] = f3(df["A"].to_numpy(),df["B"].to_numpy()): numpy arrays
    return a + b

@numba.njit('int64[:](int64[:], int64[:])')
def f3_numba_vectorize(a,b):
    # Here a and b are 2 numpy arrays with dtype int64
    # This function must return a numpy array with dtype int64
    return a + b

"""

test_functions = [
'df["C"] = df.apply(lambda row: f1(row["A"], row["B"]), axis=1)',
'df["C"] = df.apply(f2, axis=1)',
'df["C"] = list(map(f3,df["A"],df["B"]))',
'df["C"] = np.vectorize(f3) (df["A"].to_numpy(),df["B"].to_numpy())',
'df["C"] = f3(df["A"],df["B"])',
'df["C"] = f3(df["A"].to_numpy(),df["B"].to_numpy())',
'df["C"] = f3_numba_vectorize(df["A"].to_numpy(),df["B"].to_numpy())'
]


for test_function in test_functions:
    print(min(timeit.repeat(setup=timeit_setup, stmt=test_function, repeat=7, number=10)))

Output:

0.7
0.56
0.3
0.01
0.0026
0.0018
0.0018

Final note: things could be optimized with Cython and other numba tricks too.


Casimir

Another option is df.itertuples() (generally faster and recommended over df.iterrows() by docs and user testing):

import pandas as pd

df = pd.DataFrame([range(4) for _ in range(4)], columns=list("abcd"))

df
    a   b   c   d
0   0   1   2   3
1   0   1   2   3
2   0   1   2   3
3   0   1   2   3


df["e"] = [sum(row) for row in df[["b", "d"]].itertuples(index=False)]

df
    a   b   c   d   e
0   0   1   2   3   4
1   0   1   2   3   4
2   0   1   2   3   4
3   0   1   2   3   4

Since itertuples returns an Iterable of namedtuples, you can access tuple elements both as attributes by column name (aka dot notation) and by index:

b, d = row
b = row.b
d = row[1]

From my experience, itertuples is sometimes much faster than df.apply(..., axis=1). For large tables I have seen the time going from around 3 minutes (using apply) down to 10 seconds (using itertuples). Personally I also think itertuples is sometimes more readable; it reads like pseudocode. Note that elements of the tuples can be accessed either by name or position (i.e., in the answer above where index=False, row.b is equivalent to row[0]).
Qing Liu

My example for your question:

def get_sublist(row, col1, col2):
    return mylist[row[col1]:row[col2]+1]
df.apply(get_sublist, axis=1, col1='col_1', col2='col_2')

allenyllee

I suppose you don't want to change the get_sublist function, and just want to use the DataFrame's apply method to do the job. To get the result you want, I've written two helper functions: get_sublist_list and unlist. As the function names suggest, the first gets the list of sublists, and the second extracts the sublist from that list. Finally, we call apply to apply those two functions to the df[['col_1','col_2']] DataFrame in sequence.

import pandas as pd

df = pd.DataFrame({'ID':['1','2','3'], 'col_1': [0,2,3], 'col_2':[1,4,5]})
mylist = ['a','b','c','d','e','f']

def get_sublist(sta,end):
    return mylist[sta:end+1]

def get_sublist_list(cols):
    return [get_sublist(cols[0],cols[1])]

def unlist(list_of_lists):
    return list_of_lists[0]

df['col_3'] = df[['col_1','col_2']].apply(get_sublist_list,axis=1).apply(unlist)

df

If you don't use [] to enclose the get_sublist function, then get_sublist_list will return a plain list, and apply will raise ValueError: could not broadcast input array from shape (3) into shape (2), as @Ted Petrou mentioned.


durjoy

If you have a huge dataset, you can use an easy but faster (in execution time) way of doing this using swifter:

import pandas as pd
import swifter

def fnc(m,x,c):
    return m*x+c

df = pd.DataFrame({"m": [1,2,3,4,5,6], "c": [1,1,1,1,1,1], "x":[5,3,6,2,6,1]})
df["y"] = df.swifter.apply(lambda x: fnc(x.m, x.x, x.c), axis=1)

Rahul

It can be done in two simple ways. Let's say we want the sum of col_1 and col_2 in an output column named col_sum.

Method 1

f = lambda x: x.col_1 + x.col_2
df['col_sum'] = df.apply(f, axis=1)

Method 2

def f(x):
    x['col_sum'] = x.col_1 + x.col_2
    return x
df = df.apply(f, axis=1)

Method 2 should be used when some complex function has to be applied to the dataframe. Method 2 can also be used when output in multiple columns is required.
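A sketch of that multi-column case (the extra column names here are made up for illustration): the row comes in as a Series, so you can attach as many new fields as needed before returning it.

def f(x):
    # add several derived fields to the row Series and return it
    x['col_sum'] = x.col_1 + x.col_2
    x['col_diff'] = x.col_1 - x.col_2
    return x

df = df.apply(f, axis=1)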