How to Use the CASE Statement for Conditional Operations in PySpark or Spark

The CASE statement in Spark lets us express conditional logic when transforming data, which makes it a useful tool for data preprocessing, cleaning, and feature engineering. In this blog post, we will explore how to use the CASE statement in Spark.

The CASE statement is similar to the switch statement in other programming languages. It evaluates a series of conditions and returns a value based on the first condition that is satisfied. The CASE statement has the following syntax:

CASE WHEN {condition} THEN {value}
    [WHEN {condition} THEN {value}]
    [ELSE {value}]
END

The CASE statement evaluates each condition in order and returns the value of the first condition that is true. If none of the conditions are true, it returns the value of the ELSE clause (if specified) or NULL.
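
For example, the same syntax can be used directly in a Spark SQL query. The sketch below assumes a SparkSession named spark and a temporary view called people with name and age columns; those names are hypothetical and only serve to illustrate the CASE syntax:

// Assumes a SparkSession named `spark` and a temporary view `people`
// with columns `name` and `age` (hypothetical names, for illustration only)
val labeled = spark.sql("""
  SELECT name,
         CASE WHEN age < 30 THEN 'Under 30'
              WHEN age >= 30 AND age <= 40 THEN 'Between 30 and 40'
              ELSE 'Over 40'
         END AS age_group
  FROM people
""")
labeled.show()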


Let's take a look at an example of how to use the CASE statement in Spark:

import org.apache.spark.sql.functions.{col, when}
import spark.implicits._  // needed for toDF; assumes a SparkSession named `spark`

// Sample DataFrame with two columns: Name and Age
val df = Seq(
  ("Alice", 25),
  ("Bob", 30),
  ("Charlie", 35)
).toDF("Name", "Age")

// Derive an "Age Group" column from the Age column
val transformedDF = df.withColumn("Age Group",
  when(col("Age") < 30, "Under 30")
    .when(col("Age") >= 30 && col("Age") <= 40, "Between 30 and 40")
    .otherwise("Over 40"))

transformedDF.show()
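
With the sample data above, the output of show() should look roughly like this:

+-------+---+-----------------+
|   Name|Age|        Age Group|
+-------+---+-----------------+
|  Alice| 25|         Under 30|
|    Bob| 30|Between 30 and 40|
|Charlie| 35|Between 30 and 40|
+-------+---+-----------------+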

In this example, we have a DataFrame with two columns: Name and Age. We want to create a new column called Age Group, which will categorize people based on their age. We use the when() function to specify the conditions and the values we want to return.

The when() function takes two arguments: the first argument is the condition, and the second argument is the value to return if the condition is true. We can chain multiple when() functions together to evaluate multiple conditions.

In this example, we use the col() function to reference the Age column in our DataFrame. We then specify two conditions: people under 30 and people between 30 and 40 (inclusive). Finally, we use the otherwise() function to specify the value to return when neither condition is true, which here labels everyone older than 40 as "Over 40".

The resulting DataFrame will have a new column called Age Group, which categorizes people based on their age. We can use this technique to perform a wide range of data transformations in Spark, including data cleaning, feature engineering, and more.
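
As a side note, the same column can also be written with the SQL CASE syntax embedded in the DataFrame API through the expr() function. This is a minimal sketch that reuses the df defined above; the result should match the when()/otherwise() version:

import org.apache.spark.sql.functions.expr

// Equivalent logic expressed as a SQL CASE expression inside the DataFrame API
val transformedWithExpr = df.withColumn("Age Group",
  expr("CASE WHEN Age < 30 THEN 'Under 30' " +
       "WHEN Age >= 30 AND Age <= 40 THEN 'Between 30 and 40' " +
       "ELSE 'Over 40' END"))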


Conclusion

The CASE statement is a powerful tool for data transformation in Spark. By understanding how to use it in conjunction with other Spark functions and APIs, we can build complex data processing pipelines that handle a wide variety of use cases.