
Show distinct column values in pyspark dataframe

With a PySpark DataFrame, how do you do the equivalent of Pandas df['col'].unique()?

I want to list out all the unique values in a pyspark dataframe column.

Not the SQL-type way (registerTempTable and then a SQL query for the distinct values).

Also, I don't need groupby followed by countDistinct; instead, I want to check the distinct VALUES in that column.


MichaelChirico

This should help to get distinct values of a column:

df.select('column1').distinct().collect()

Note that .collect() doesn't have any built-in limit on how many values it can return, so this might be slow -- use .show() instead, or add .limit(20) before .collect() to manage this.
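
For example (a minimal sketch, assuming a DataFrame df with a column named column1):

# print the first 20 distinct values without pulling everything to the driver
df.select('column1').distinct().show()

# or cap how much is collected to the driver and build a plain Python list
rows = df.select('column1').distinct().limit(20).collect()
values = [row['column1'] for row in rows]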


This code returns data that's not iterable, i.e. I see the distinct data but am not able to iterate over it in code. Is there any other way that enables me to do that? I tried using toPandas() to convert it into a Pandas df and then get the iterable with unique values. However, I'm running into a 'Pandas not found' error message.
@Abhi: in place of .show(), do a .collect(); that way you will get an iterable of all the distinct values of that particular column. But make sure your master node has enough memory to hold those unique values, because collect will push all the requested data (in this case the unique values of the column) to the master node :)
@Satya I've edited your comment into the answer, thanks
To obtain a Python list of actual values and not Row objects, you need to use a list comprehension like in this answer: stackoverflow.com/a/60896261/7465462
eddies

Let's assume we're working with the following representation of data (two columns, k and v, where k contains three entries, two of them unique):

+---+---+
|  k|  v|
+---+---+
|foo|  1|
|bar|  2|
|foo|  3|
+---+---+

With a Pandas dataframe:

import pandas as pd
p_df = pd.DataFrame([("foo", 1), ("bar", 2), ("foo", 3)], columns=("k", "v"))
p_df['k'].unique()

This returns an ndarray, i.e. array(['foo', 'bar'], dtype=object)

You asked for a "pyspark dataframe alternative for pandas df['col'].unique()". Now, given the following Spark dataframe:

s_df = sqlContext.createDataFrame([("foo", 1), ("bar", 2), ("foo", 3)], ('k', 'v'))

If you want the same result from Spark, i.e. an ndarray, use toPandas():

s_df.toPandas()['k'].unique()

Alternatively, if you don't need an ndarray specifically and just want a list of the unique values of column k:

s_df.select('k').distinct().rdd.map(lambda r: r[0]).collect()

Finally, you can also use a list comprehension as follows:

[i.k for i in s_df.select('k').distinct().collect()]

Hi eddies, the last code line distinct().map() didn't work for me. Error: AttributeError: 'DataFrame' object has no attribute 'map'. I am on Spark 2.0. And as for the toPandas thing, I wouldn't say it is an alternative; it converts the Spark dataframe to a Pandas dataframe first and then does the Pandas operation on it.
Hi satya. Just updated the answer by adding a .rdd call after distinct(). It worked without that in Spark 1.6.2, but I just confirmed that the edited answer works in Spark 2.0.0 as well.
Why try to avoid Spark dataframe operations by converting to a Pandas dataframe (which hurts if it's gigantic) or by using rdd operations, when Spark dataframes are perfectly capable of doing this? See the answer by @Pabbati below.
@Laurens There were three solutions in the answer above, depending on what the poster really wanted. In all cases, the poster wanted some form of a list/array of the distinct values (c.f. poster's response to seufagner's answer). The third solution above does use Spark's dataframe api just as Pabbati's answer but actually returns a list, as per the poster's requirements.
Yes, the question title includes the word "show". But the poster specifically clarified that SEEing the results wasn't adequate and wanted a list. As mentioned above, see the poster's comment to seufagner's answer.
seufagner

You can use df.dropDuplicates(['col1', 'col2']) to get only the rows that are distinct with respect to the columns in the list.
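
For example (a small sketch, assuming a DataFrame df that has columns col1 and col2):

# keep one row per distinct (col1, col2) combination
df.dropDuplicates(['col1', 'col2']).show()

# keep one row per distinct value of col1, then show just that column
df.dropDuplicates(['col1']).select('col1').show()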


@seufagner - yes, I can do a df.dropDuplicates(['col1']) to see (mark, SEE) the unique values, but without a collect (to_rdd or to a Pandas DF and then df['col'].unique()), I can't get the unique values as a list. Thanks for the suggestion.
The user did not ask how to display non-duplicate values; he just wanted to get a list of all unique/distinct values, including values that also occur more than once.
This is correct, because df.select().collect() is an expensive operation which may lead to a stage failure error.
Jaroslav Bezděk

If you want to see the distinct values of a specific column in your dataframe, you just need to write the following code. It will show 100 distinct values (if 100 distinct values are available) for the colname column in the df dataframe.

df.select('colname').distinct().show(100, False)

If you want to do something fancy on the distinct values, you can save them into a DataFrame:

a = df.select('colname').distinct()
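
If you then need those values as a plain Python list (a minimal sketch, reusing the a DataFrame from above):

# collect() returns Row objects; pull the column value out of each one
distinct_values = [row['colname'] for row in a.collect()]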

Mykola Zotko

collect_set can help to get unique values from a given column of pyspark.sql.DataFrame:

from pyspark.sql import functions as F

df.select(F.collect_set("column").alias("column")).first()["column"]

I agree on using collect_set, but I think the following would be cleaner: df.agg(F.collect_set("column")).collect()[0][0]
muon

You could do:

distinct_column = 'somecol' 

distinct_column_vals = df.select(distinct_column).distinct().collect()
distinct_column_vals = [v[distinct_column] for v in distinct_column_vals]

ansev

In addition to the dropDuplicates option, there is the method named as we know it from pandas, drop_duplicates:

drop_duplicates() is an alias for dropDuplicates().

Example

s_df = sqlContext.createDataFrame([("foo", 1),
                                   ("foo", 1),
                                   ("bar", 2),
                                   ("foo", 3)], ('k', 'v'))
s_df.show()

+---+---+
|  k|  v|
+---+---+
|foo|  1|
|foo|  1|
|bar|  2|
|foo|  3|
+---+---+

Drop by subset

s_df.drop_duplicates(subset = ['k']).show()

+---+---+
|  k|  v|
+---+---+
|bar|  2|
|foo|  1|
+---+---+
Drop over all columns

s_df.drop_duplicates().show()


+---+---+
|  k|  v|
+---+---+
|bar|  2|
|foo|  3|
|foo|  1|
+---+---+

Kapil Sharma

If you want to select the distinct rows across ALL columns of a DataFrame (df), then:

df.select('*').distinct().show(10,truncate=False)
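
If you need the distinct rows back in Python rather than just displayed (a minimal sketch, using the same df):

distinct_rows = df.select('*').distinct().collect()   # list of Row objects
distinct_count = df.select('*').distinct().count()    # number of distinct rows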


Joseph Jacob

Run this first

df.createOrReplaceTempView('df')

Then run

spark.sql("""
    SELECT distinct
        column name
    FROM
        df
    """).show()

Marioanzas

Let us suppose that your original DataFrame is called df. Then, you can use:

from pyspark.sql import functions as F

df1 = df.groupBy('column_1').agg(F.count('column_1').alias('trip_count'))
df1.sort(df1.trip_count.desc()).show()
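
If you only need the distinct values themselves rather than their counts (a minimal sketch, reusing the column_1 name from above):

distinct_values = [row['column_1'] for row in df.select('column_1').distinct().collect()]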