
Pyspark replace strings in Spark dataframe column

I'd like to perform some basic stemming on a Spark Dataframe column by replacing substrings. What's the quickest way to do this?

In my current use case, I have a list of addresses that I want to normalize. For example this dataframe:

id     address
1       2 foo lane
2       10 bar lane
3       24 pants ln

Would become

id     address
1       2 foo ln
2       10 bar ln
3       24 pants ln
What's your Spark version?

Daniel de Paula

For Spark 1.5 or later, you can use the functions package:

from pyspark.sql.functions import regexp_replace
newDf = df.withColumn('address', regexp_replace('address', 'lane', 'ln'))

Quick explanation:

The function withColumn is called to add (or replace, if the name already exists) a column in the data frame.

The function regexp_replace generates a new column by replacing all substrings that match the pattern.


Just remember that the first parameter of regexp_replace refers to the column being changed, the second is the regular expression to find, and the last is the replacement string.
Can I use regexp_replace inside a pipeline? Thanks
Can we change more than one item in this code?
@elham you can change any value that matches a regular expression for one column using this function: spark.apache.org/docs/2.2.0/api/R/regexp_replace.html
How does it work for subtracting two string columns within a single dataframe in PySpark?
loneStar

For Scala:

import org.apache.spark.sql.functions.{col, regexp_replace}

// Remove every literal "*" character (the backslashes escape the regex metacharacter)
data.withColumn("addr_new", regexp_replace(col("addr_line"), "\\*", ""))