
Pandas 'count(distinct)' equivalent

I am using Pandas as a database substitute, since I have multiple databases (Oracle, SQL Server, etc.) and I am unable to write one sequence of commands with a single SQL equivalent.

I have a table loaded in a DataFrame with some columns:

YEARMONTH, CLIENTCODE, SIZE, etc., etc.

In SQL, counting the number of distinct clients per YEARMONTH would be:

SELECT count(distinct CLIENTCODE) FROM table GROUP BY YEARMONTH;

And the result would be

201301    5000
201302    13245

How can I do that in Pandas?

I have done table.groupby(['YEARMONTH'])['CLIENTCODE'].unique() and came up with a series indexed by YEARMONTH, holding all the unique values per group. How do I count the number of values in each entry of that series?
For some, value_counts might be the answer you are looking for: pandas.pydata.org/pandas-docs/stable/generated/…

LondonRob

I believe this is what you want:

table.groupby('YEARMONTH').CLIENTCODE.nunique()

Example:

In [2]: table
Out[2]: 
   CLIENTCODE  YEARMONTH
0           1     201301
1           1     201301
2           2     201301
3           1     201302
4           2     201302
5           2     201302
6           3     201302

In [3]: table.groupby('YEARMONTH').CLIENTCODE.nunique()
Out[3]: 
YEARMONTH
201301       2
201302       3

What if I have multiple columns that I want to be unique together, like in .drop_duplicates(subset=['col1','col2'])?
How do I access this unique count, given that the result has no column name?
Thanks a lot. I used this style on the output of resample: df_watch_record.resample('M').user.nunique() counts the number of unique users who watched a movie per month.
and sort them with table.groupby('YEARMONTH').CLIENTCODE.nunique().sort_values(ascending=False)
Is it possible to apply this for multiple columns? Right now in the example, only one column is selected.
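For the multiple-columns questions above, a minimal sketch (SIZE is a column name from the question; the data is made up): selecting a list of columns gives a per-column distinct count, while drop_duplicates counts distinct combinations.

```python
import pandas as pd

df = pd.DataFrame({
    'YEARMONTH':  [201301, 201301, 201302, 201302, 201302],
    'CLIENTCODE': [1, 1, 1, 2, 2],
    'SIZE':       ['S', 'M', 'S', 'S', 'M'],
})

# Distinct count per column, within each YEARMONTH group
per_col = df.groupby('YEARMONTH')[['CLIENTCODE', 'SIZE']].nunique()

# Distinct (CLIENTCODE, SIZE) combinations per YEARMONTH, similar to
# COUNT(DISTINCT col1, col2) in SQL dialects that support it
combos = (df.drop_duplicates(subset=['YEARMONTH', 'CLIENTCODE', 'SIZE'])
            .groupby('YEARMONTH').size())
```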
Peter Mortensen

Here is another method and it is much simpler. Let’s say your dataframe name is daat and the column name is YEARMONTH:

daat.YEARMONTH.value_counts()

I like this answer. How can I use this method if my column name has a '.' in it (e.g. 'ck.Class')? Thanks
daat['ck.Class'].value_counts()
This does not address the question asked.
This counts the number of observations within each group, not the number of unique values a given column has in each group.
This is the incorrect answer; it does not reflect the DISTINCT requirement from the question! Moreover, it does not include counts of NaN!
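The difference the last comments describe can be made concrete; a small sketch using the question's sample data:

```python
import pandas as pd

table = pd.DataFrame({
    'CLIENTCODE': [1, 1, 2, 1, 2, 2, 3],
    'YEARMONTH':  [201301, 201301, 201301, 201302, 201302, 201302, 201302],
})

# value_counts gives the number of rows per YEARMONTH (group size)...
sizes = table['YEARMONTH'].value_counts()

# ...whereas the question asks for the number of distinct clients per YEARMONTH
distinct = table.groupby('YEARMONTH')['CLIENTCODE'].nunique()
```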
jezrael

Interestingly enough, very often len(unique()) is a few times (3x-15x) faster than nunique().


You mean this? .CLIENTCODE.apply(lambda x: len(x.unique())), from here
@user32185 you'd have to drop it into an apply call with a lambda. For instance, df.groupby('YEARMONTH')['CLIENTCODE'].apply(lambda x: x.unique().shape[0]).
The syntax isn't completely clear. I used len(df['column'].unique()); no need for a lambda function.
I got TypeError: object of type 'method' has no len() from Chen's comment, 3novak's worked for me.
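For reference, the apply-based variant discussed above can be written out as a runnable sketch; on the question's sample data it produces the same counts as nunique (the relative speed depends on dtype and data size):

```python
import pandas as pd

table = pd.DataFrame({
    'CLIENTCODE': [1, 1, 2, 1, 2, 2, 3],
    'YEARMONTH':  [201301, 201301, 201301, 201302, 201302, 201302, 201302],
})

# len(unique()) wrapped in apply, per group
via_apply = table.groupby('YEARMONTH')['CLIENTCODE'].apply(lambda x: len(x.unique()))

# the built-in groupby aggregation
via_nunique = table.groupby('YEARMONTH')['CLIENTCODE'].nunique()
```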
Gangaraju

I also use nunique, but this approach is very helpful if you have to use an aggregate function like 'min', 'max', 'count' or 'mean'. Note that transform returns a value for every row, aligned with the original DataFrame, rather than one value per group.

df.groupby('YEARMONTH')['CLIENTCODE'].transform('nunique') #count(distinct)
df.groupby('YEARMONTH')['CLIENTCODE'].transform('min')     #min
df.groupby('YEARMONTH')['CLIENTCODE'].transform('max')     #max
df.groupby('YEARMONTH')['CLIENTCODE'].transform('mean')    #average
df.groupby('YEARMONTH')['CLIENTCODE'].transform('count')   #count
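
To make the transform-vs-agg distinction concrete, a minimal sketch with made-up data: transform broadcasts the group result back to every original row, while agg collapses to one row per group.

```python
import pandas as pd

df = pd.DataFrame({
    'YEARMONTH':  [201301, 201301, 201302, 201302, 201302],
    'CLIENTCODE': [1, 1, 1, 2, 3],
})

# One value per group (the usual count-distinct result)
per_group = df.groupby('YEARMONTH')['CLIENTCODE'].agg('nunique')

# One value per original row, aligned with df (useful as a new column)
per_row = df.groupby('YEARMONTH')['CLIENTCODE'].transform('nunique')
```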

Vivek Payasi

Distinct of column along with aggregations on other columns

To get the distinct number of values for any column (CLIENTCODE in your case), we can use nunique. We can pass the input as a dictionary in agg function, along with aggregations on other columns:

grp_df = df.groupby('YEARMONTH').agg({'CLIENTCODE': ['nunique'],
                                      'other_col_1': ['sum', 'count']})

# to flatten the multi-level columns
grp_df.columns = ["_".join(col).strip() for col in grp_df.columns.values]

# if you wish to reset the index
grp_df.reset_index(inplace=True)

I think this answer is the best since it is closer to the way you would use the count distinct in SQL. If you use the most recent syntax for Pandas agg you can even skip the flatten step. grp_df = df.groupby('YEARMONTH').agg(CLIENTCODE_UNIQ_CNT = ('CLIENTCODE', 'nunique'), other_col_1_sum = ('other_col_1', 'sum'), other_col_1_cnt = ('other_col_1', 'count'))
Oh nice, I wasn't aware of this new syntax. Thanks for commenting :)
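The named-aggregation syntax from the comment (available since Pandas 0.25) can be sketched as follows; other_col_1 is a hypothetical column, as in the answer above:

```python
import pandas as pd

df = pd.DataFrame({
    'YEARMONTH':   [201301, 201301, 201302, 201302],
    'CLIENTCODE':  [1, 1, 2, 3],
    'other_col_1': [10, 20, 30, 40],
})

# Named aggregation: output column names are given directly,
# so there is no MultiIndex to flatten afterwards.
grp_df = df.groupby('YEARMONTH').agg(
    CLIENTCODE_UNIQ_CNT=('CLIENTCODE', 'nunique'),
    other_col_1_sum=('other_col_1', 'sum'),
)
```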
Peter Mortensen

Using crosstab, this will return more information than groupby nunique:

pd.crosstab(df.YEARMONTH, df.CLIENTCODE)
Out[196]:
CLIENTCODE  1  2  3
YEARMONTH
201301      2  1  0
201302      1  2  1

After a little bit of modification, it yields the result:

pd.crosstab(df.YEARMONTH, df.CLIENTCODE).ne(0).sum(1)
Out[197]:
YEARMONTH
201301    2
201302    3
dtype: int64

How can I export this as two columns, YEARMONTH and count? Also, can I sort the count in descending order?
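One possible answer to the comment above (a sketch, not part of the original answer): name the resulting series, reset the index to get two columns, then sort:

```python
import pandas as pd

df = pd.DataFrame({
    'CLIENTCODE': [1, 1, 2, 1, 2, 2, 3],
    'YEARMONTH':  [201301, 201301, 201301, 201302, 201302, 201302, 201302],
})

# Count non-zero cells per row of the crosstab (= distinct clients),
# then turn the series into a two-column frame sorted descending
counts = (pd.crosstab(df.YEARMONTH, df.CLIENTCODE)
            .ne(0).sum(axis=1)
            .rename('count')
            .reset_index()
            .sort_values('count', ascending=False))
```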
Peter Mortensen

Here is an approach to have count distinct over multiple columns. Let's have some data:

data = {'CLIENT_CODE':[1,1,2,1,2,2,3],
        'YEAR_MONTH':[201301,201301,201301,201302,201302,201302,201302],
        'PRODUCT_CODE': [100,150,220,400,50,80,100]
       }
table = pd.DataFrame(data)
table

CLIENT_CODE YEAR_MONTH  PRODUCT_CODE
0   1       201301      100
1   1       201301      150
2   2       201301      220
3   1       201302      400
4   2       201302      50
5   2       201302      80
6   3       201302      100

Now, list the columns of interest and use groupby in a slightly modified syntax:

columns = ['YEAR_MONTH', 'PRODUCT_CODE']
table[columns].groupby(table['CLIENT_CODE']).nunique()

We obtain:

             YEAR_MONTH  PRODUCT_CODE
CLIENT_CODE
1                     2             3
2                     2             3
3                     1             1

Peter Mortensen

With named aggregation (available since Pandas 0.25), it is easy to get the result as a data frame:

unique_count = df.groupby('YEARMONTH', as_index=False).agg(
    uniq_CLIENTCODE=('CLIENTCODE', pd.Series.nunique))

What is the version number? Please respond by editing (changing) your answer, not here in comments (without "Edit:", "Update:", or similar - the answer should appear as if it was written today).
Peter Mortensen

Create a pivot table and use the nunique series function:

ID = [ 123, 123, 123, 456, 456, 456, 456, 789, 789]
domain = ['vk.com', 'vk.com', 'twitter.com', 'vk.com', 'facebook.com',
          'vk.com', 'google.com', 'twitter.com', 'vk.com']
df = pd.DataFrame({'id':ID, 'domain':domain})
fp = pd.pivot_table(data=df, index='domain', aggfunc=pd.Series.nunique)
print(fp)

Output:

               id
domain
facebook.com   1
google.com     1
twitter.com    2
vk.com         3

But the sample data does not match the question (YEARMONTH, CLIENTCODE, and SIZE). The accepted answer and most of the other answers do. This answer (in its current state) would be a better match for question Count unique values with Pandas per groups.
The pivot table does the aggregation using the supplied function.
Peter Mortensen

Now you are also able to use dplyr syntax in Python to do it:

>>> from datar.all import f, tibble, group_by, summarise, n_distinct
>>>
>>> data = tibble(
...     CLIENT_CODE=[1,1,2,1,2,2,3],
...     YEAR_MONTH=[201301,201301,201301,201302,201302,201302,201302]
... )
>>>
>>> data >> group_by(f.YEAR_MONTH) >> summarise(n=n_distinct(f.CLIENT_CODE))
   YEAR_MONTH       n
      <int64> <int64>
0      201301       2
1      201302       3

What is "dplyr syntax"? Can you add an (authoritative) reference to it (for context)? (But without "Edit:", "Update:", or similar - the answer should appear as if it was written today.)