
PySpark: display a Spark DataFrame in a table format

I am using pyspark to read a parquet file like below:

my_df = sqlContext.read.parquet('hdfs://myPath/myDB.db/myTable/**')

Then when I do my_df.take(5), it shows [Row(...)] instead of a table format like we get with a pandas DataFrame.

Is it possible to display the data frame in a table format like a pandas DataFrame? Thanks!

try this: my_df.take(5).show()
I got error: in () ----> my_df.take(5).show() AttributeError: 'list' object has no attribute 'show'
it should be my_df.show().take(5)
@MaxU how is .take(5).show() different from just .show(5)? Is it faster?
my_df.show(5) # 5 is the number of rows to display
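
To summarize the thread: take(n) returns a plain Python list of Row objects, so chaining .show() onto it fails, while show(n) prints a formatted table directly and returns None. A minimal sketch, rebuilding my_df as in the question:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
my_df = spark.read.parquet('hdfs://myPath/myDB.db/myTable/**')  # path from the question

# take(5) returns a list of Row objects -- a list has no .show() method:
rows = my_df.take(5)
print(rows)  # [Row(...), Row(...), ...]

# show(5) prints the first 5 rows as a formatted table and returns None:
my_df.show(5)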

eddies

The show method does what you're looking for.

For example, given the following dataframe of 3 rows, I can print just the first two rows like this:

df = sqlContext.createDataFrame([("foo", 1), ("bar", 2), ("baz", 3)], ('k', 'v'))
df.show(n=2)

which yields:

+---+---+
|  k|  v|
+---+---+
|foo|  1|
|bar|  2|
+---+---+
only showing top 2 rows

It is very primitive compared to pandas: e.g., it wraps long output and does not allow horizontal scrolling.
Thank you for the answer! But, the link seems to be broken.
Thanks for the heads up. Updated the link to point to the new docs location
Louis Yang

As mentioned by @Brent in a comment on @maxymoo's answer, you can try

df.limit(10).toPandas()

to get a prettier table in Jupyter. But this can take some time to run if you are not caching the Spark dataframe. Also, .limit() will not keep the order of the original Spark dataframe.
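
A minimal sketch of both points, where "id" is a hypothetical column used only to illustrate explicit ordering:

df.cache()  # avoid re-reading the source data on every preview

# limit() gives no ordering guarantee, so sort explicitly if order matters;
# "id" is a hypothetical column name here.
df.orderBy("id").limit(10).toPandas()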


If you are using toPandas() consider enabling PyArrow optimisations: medium.com/@giorgosmyrianthous/…
Giorgos Myrianthous

Let's say we have the following Spark DataFrame:

df = sqlContext.createDataFrame(
    [
        (1, "Mark", "Brown"), 
        (2, "Tom", "Anderson"), 
        (3, "Joshua", "Peterson")
    ], 
    ('id', 'firstName', 'lastName')
)

There are typically three different ways to print the contents of a dataframe:

Print Spark DataFrame

The most common way is to use the show() function:

>>> df.show()
+---+---------+--------+
| id|firstName|lastName|
+---+---------+--------+
|  1|     Mark|   Brown|
|  2|      Tom|Anderson|
|  3|   Joshua|Peterson|
+---+---------+--------+

Print Spark DataFrame vertically

Say that you have a fairly large number of columns and your dataframe doesn't fit on the screen. You can print the rows vertically. For example, the following command will print the top two rows, vertically, without any truncation.

>>> df.show(n=2, truncate=False, vertical=True)
-RECORD 0-------------
 id        | 1        
 firstName | Mark     
 lastName  | Brown    
-RECORD 1-------------
 id        | 2        
 firstName | Tom      
 lastName  | Anderson 
only showing top 2 rows

Convert to Pandas and print Pandas DataFrame

Alternatively, you can convert your Spark DataFrame into a Pandas DataFrame using .toPandas() and finally print() it.

>>> df_pd = df.toPandas()
>>> print(df_pd)
   id firstName  lastName
0   1      Mark     Brown
1   2       Tom  Anderson
2   3    Joshua  Peterson

Note that this is not recommended when you have to deal with fairly large dataframes, as Pandas needs to load all the data into memory. If this is the case, the following configuration will help when converting a large Spark dataframe to a pandas one:

spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")

For more details, you can refer to my blog post Speeding up the conversion between PySpark and Pandas DataFrames.
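
Putting the two steps together, a minimal sketch (on Spark 2.x the equivalent key is spark.sql.execution.arrow.enabled):

# Enable Arrow-backed columnar transfer before converting (Spark 3.x key shown).
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")

df_pd = df.toPandas()  # the conversion now goes through Arrow and avoids row-by-row pickling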


maxymoo

Yes: call the toPandas method on your dataframe and you'll get an actual pandas dataframe!


I tried to do: my_df.toPandas().head(). But got the error: Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 301 in stage 2.0 failed 1 times, most recent failure: Lost task 301.0 in stage 2.0 (TID 1871, localhost): java.lang.OutOfMemoryError: Java heap space
This is dangerous as this will collect the whole data frame into a single node.
It should be emphasized that this will quickly cap out memory in traditional Spark RDD scenarios.
It should be used with a limit, like this df.limit(10).toPandas() to protect from OOMs
Using .toPandas(), I am getting the following error: An error occurred while calling o86.get. : java.util.NoSuchElementException: spark.sql.execution.pandas.respectSessionTimeZone How do I deal with this?
Hubert

If you are using Jupyter, this is what worked for me:

[1] df = spark.read.parquet("s3://df/*")

[2] dsp = df

[3] %%display dsp

This shows a well-formatted HTML table, and you can also draw some simple charts on it straight away. For more documentation of %%display, type %%help.


bhargav3vedi

By default, the show() function prints 20 records of the DataFrame. You can define the number of rows to print by passing an argument to show(). Since you may not know in advance how many rows the DataFrame has, you can pass df.count() as the argument, which will print all records of the DataFrame.

df.show()           --> prints 20 records by default
df.show(30)         --> prints 30 records according to argument
df.show(df.count()) --> get total row count and pass it as argument to show
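
One caveat worth a sketch: show(df.count()) triggers a full count and brings every row to the driver for printing, so it only makes sense for small dataframes. Adding truncate=False (optional) keeps long cell values from being clipped:

# count() returns a plain Python int, so it can be passed straight to show();
# truncate=False prints full cell values instead of clipping them at 20 characters.
df.show(df.count(), truncate=False)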

Marc88

Maybe something like this is a tad more elegant:

df.display()
# OR
df.select('column1').display()

display is not a function; PySpark provides functions like head, tail, and show to display a data frame.
Please re-read the question. The answer serves it well.
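
For context on this disagreement: display(df) and df.display() are features of Databricks notebooks, not of open-source PySpark, so both comments are right in their own environment. A minimal sketch, assuming a Databricks notebook:

# Works only inside a Databricks notebook, where display() is a built-in
# and DataFrame.display() exists in recent runtimes; plain PySpark has neither.
display(df)
df.display()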
