
How to find the size or shape of a DataFrame in PySpark?

I am trying to find out the size/shape of a DataFrame in PySpark. I do not see a single function that can do this.

In Python, I can do this:

data.shape()

Is there a similar function in PySpark? This is my current solution, but I am looking for an elegant one:

row_number = data.count()
column_number = len(data.dtypes)

The computation of the number of columns is not ideal...

Put this in a function?
You mean data.shape for NumPy and Pandas? shape is not a function.
What is not ideal? I am not sure what else you would like to accomplish than what you already have (except for replacing data.dtypes with data.columns, but it makes little difference).

yatu

You can get its shape with:

print((df.count(), len(df.columns)))

Will this work fine for larger datasets spread across nodes?
Why doesn't Pyspark Dataframe simply store the shape values like pandas dataframe does with .shape? Having to call count seems incredibly resource-intensive for such a common and simple operation.
Tshilidzi Mudau

Use df.count() to get the number of rows.
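
To get a pandas-like (rows, columns) pair, this can be combined with len(df.columns) (a minimal sketch, assuming a DataFrame named df):

rows = df.count()       # triggers a job that scans the data
cols = len(df.columns)  # read from the schema, no job triggered
print((rows, cols))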


Louis Yang

Add this to your code:

import pyspark
# Attach a pandas-style shape() method to the PySpark DataFrame class
def spark_shape(self):
    return (self.count(), len(self.columns))
pyspark.sql.dataframe.DataFrame.shape = spark_shape

Then you can do

>>> df.shape()
(10000, 10)

But just a reminder that .count() can be very slow for a very large table that has not been persisted.
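
If you need the row count more than once, persisting the DataFrame first can avoid rescanning the source on every call (a minimal sketch, assuming the data fits in the cluster's available cache memory):

df.cache()          # or df.persist(); mark the DataFrame for caching
print(df.count())   # the first action populates the cache
print(df.count())   # later actions read from the cached data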


Venzu251720
print((df.count(), len(df.columns)))

is easier for smaller datasets.

However, if the dataset is huge, an alternative approach would be to use pandas and Arrow to convert the DataFrame to a pandas DataFrame and call shape:

spark.conf.set("spark.sql.execution.arrow.enabled", "true")
spark.conf.set("spark.sql.crossJoin.enabled", "true")
print(df.toPandas().shape)

Isn't .toPandas an action? Meaning: isn't this going to collect the data to your master, and then call shape on it? If so, it would be inadvisable to do that, unless you're sure it's going to fit in master's memory.
If the dataset is huge, collecting to Pandas is exactly what you do NOT want to do. Btw: Why do you enable cross join for this? And does the arrow configuration help collecting to pandas?
YungChun

I think there is no function similar to data.shape in Spark. But I would use len(data.columns) rather than len(data.dtypes).
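
For example (a minimal sketch of the comparison, assuming a DataFrame named data):

len(data.columns)  # column count, read directly from the schema
len(data.dtypes)   # same count, but via the list of (name, type) pairs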


That just gives you the number of columns. What about the number of rows?
Sahaj Raj Malla

I have solved this problem using the following code block. Please try it; it works.

import pyspark
def sparkShape(dataFrame):
    return (dataFrame.count(), len(dataFrame.columns))
pyspark.sql.dataframe.DataFrame.shape = sparkShape

print(<name of the DataFrame whose shape you want>.shape())

this is exactly what @Louis Yang wrote 3 years back