A Complete Guide to Dropping Columns in PySpark

Introduction

PySpark, the Python API for Apache Spark, is a powerful tool for data processing and analysis, especially when working with big data. One common operation when working with data is dropping unnecessary columns from a DataFrame. This blog post provides a detailed guide to the various ways of dropping columns from a PySpark DataFrame using the drop() function.

Creating a Sample DataFrame

Before diving into the various ways to drop columns, let's first create a sample PySpark DataFrame:

from pyspark.sql import SparkSession 
    
# Create a Spark session 
spark = SparkSession.builder.appName('Dropping Columns in PySpark').getOrCreate() 

# Create a sample DataFrame 
data = [("John", "Doe", 29, "Male"), 
        ("Jane", "Doe", 34, "Female"), 
        ("Sam", "Smith", 23, "Male")] 
        
columns = ["First Name", "Last Name", "Age", "Gender"] 
df = spark.createDataFrame(data, schema=columns) 
df.show() 
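
If everything is set up correctly, df.show() should print a table roughly like the following (exact column widths and alignment may vary):

+----------+---------+---+------+
|First Name|Last Name|Age|Gender|
+----------+---------+---+------+
|      John|      Doe| 29|  Male|
|      Jane|      Doe| 34|Female|
|       Sam|    Smith| 23|  Male|
+----------+---------+---+------+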

Dropping Columns Using the drop() Function

The drop() function in PySpark is versatile and can be used in several ways to remove one or more columns from a DataFrame. Like other DataFrame transformations, it returns a new DataFrame and leaves the original unchanged.
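
A quick sketch of that behaviour (in recent Spark versions, drop() also silently ignores string column names that are not in the schema, rather than raising an error):

# drop() returns a new DataFrame; df itself keeps all of its columns.
# Dropping a name that does not exist in the schema is simply a no-op.
df.drop("Nonexistent Column").show()  # same output as df.show()
print(df.columns)                     # the original columns are still intact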

Dropping a Single Column

You can drop a single column by passing the column name as a string argument to the drop() function.

df_dropped = df.drop("Age")
df_dropped.show() 
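
You can confirm that the column is gone by inspecting the remaining column names:

# The dropped column no longer appears in the DataFrame's schema.
print(df_dropped.columns)  # ['First Name', 'Last Name', 'Gender']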

Dropping Multiple Columns

You can drop multiple columns by passing several column names as separate arguments to the drop() function.

df_dropped = df.drop("Age", "Gender") 
df_dropped.show() 

Dropping Columns Using a List

If you have a list of columns that you want to drop, you can unpack the list into the drop() function using the * operator.

columns_to_drop = ["Age", "Gender"] 
df_dropped = df.drop(*columns_to_drop) 
df_dropped.show() 

Note: The * operator unpacks the list into individual arguments, since drop() expects column names to be passed separately rather than as a single list.
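
For example, the unpacked call is equivalent to listing the names explicitly:

# These two calls produce the same result.
df.drop(*["Age", "Gender"]).show()
df.drop("Age", "Gender").show()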

Dropping Columns Conditionally

You can use a condition to decide which columns to drop. For example, you can drop all columns whose names start with a certain prefix or end with a certain suffix.

prefix = "First" 
columns_to_drop = [col for col in df.columns if col.startswith(prefix)] 
df_dropped = df.drop(*columns_to_drop) 
df_dropped.show() 
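
An alternative approach is to keep the columns you want rather than dropping the ones you don't, using select(). A minimal sketch (columns_to_keep and df_kept are just illustrative names):

# Keep only the columns whose names do not start with the prefix;
# this produces the same result as the drop() call above.
columns_to_keep = [c for c in df.columns if not c.startswith(prefix)]
df_kept = df.select(*columns_to_keep)
df_kept.show()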

Dropping Columns with Null Values

You can also drop columns in which more than a certain percentage of the values are null.

threshold = 0.5 
columns_with_nulls = [col for col in df.columns if df.filter(df[col].isNull()).count() / df.count() > threshold] 
df_dropped = df.drop(*columns_with_nulls) 
df_dropped.show() 

Note: In the example above, the threshold is set to 0.5, which means that any column with more than 50% null values will be dropped.
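
The list comprehension above triggers a separate filter-and-count job for every column, plus a count() of the whole DataFrame each time. As a rough sketch of a more scalable variant, you can compute all of the null counts in a single aggregation (total and null_counts are just illustrative names):

from pyspark.sql import functions as F

# Count the nulls in every column in one pass, then compare each
# column's null fraction against the threshold.
total = df.count()
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
).collect()[0].asDict()
columns_with_nulls = [c for c, n in null_counts.items() if n / total > threshold]
df_dropped = df.drop(*columns_with_nulls)
df_dropped.show()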

Dropping Columns with Low Variance

Columns with low variance may not be very informative for certain analyses. You can drop columns whose variance falls below a certain threshold. Since variance is only defined for numeric columns, the example below restricts the check to those.

from pyspark.sql.functions import var_samp
from pyspark.sql.types import NumericType

threshold = 0.1
# Variance is only defined for numeric columns, so restrict the check to them.
numeric_columns = [f.name for f in df.schema.fields if isinstance(f.dataType, NumericType)]
columns_with_low_variance = [c for c in numeric_columns
                             if df.select(var_samp(df[c])).collect()[0][0] < threshold]
df_dropped = df.drop(*columns_with_low_variance)
df_dropped.show()

Note: In the example above, the threshold is set to 0.1, which means that any numeric column with a sample variance below 0.1 will be dropped; non-numeric columns are left untouched.
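
As with the null-count example, this can be collapsed into a single aggregation pass. A minimal sketch, reusing the numeric_columns list defined above (variances is just an illustrative name):

# Compute the sample variance of every numeric column in one pass.
variances = df.select(
    [var_samp(c).alias(c) for c in numeric_columns]
).collect()[0].asDict()
columns_with_low_variance = [c for c, v in variances.items()
                             if v is not None and v < threshold]
df_dropped = df.drop(*columns_with_low_variance)
df_dropped.show()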

Conclusion

In this blog post, we explored the various ways to use the drop() function in PySpark to remove columns from a DataFrame: dropping a single column, dropping multiple columns, dropping columns from a list, dropping columns conditionally, dropping columns with many null values, and dropping columns with low variance. Using drop() effectively is an important part of data cleaning and preparation, and we hope this guide helps you in your PySpark data processing tasks. Happy coding!