spark-daria

Spark helper methods to maximize developer productivity.


Setup

Option 1: Maven

Fetch the JAR file from Maven.

```scala
resolvers += "Spark Packages Repo" at "http://dl.bintray.com/spark-packages/maven"

// Scala 2.11
libraryDependencies += "mrpowers" % "spark-daria" % "0.35.0-s_2.11"

// Scala 2.12, Spark 2.4+
libraryDependencies += "mrpowers" % "spark-daria" % "0.35.0-s_2.12"
```

Option 2: JitPack

Update your `build.sbt` file as follows.

```scala
resolvers += "jitpack" at "https://jitpack.io"

libraryDependencies += "com.github.mrpowers" % "spark-daria" % "v0.35.0"
```

Accessing spark-daria versions for different Spark versions

Different spark-daria versions are compatible with different Spark versions. As a rule, the latest spark-daria version is compatible with the latest Spark version.

| Spark | 0.35.0             | 0.34.0             | 0.33.2             |
|-------|--------------------|--------------------|--------------------|
| 2.0.0 | :x:                | :x:                | :x:                |
| 2.1.0 | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| 2.2.2 | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| 2.3.0 | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| 2.3.1 | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| 2.4.0 | :white_check_mark: | :white_check_mark: | :white_check_mark: |

Email me if you need a custom spark-daria version and I'll help you out :wink:

Writing Beautiful Spark Code

Reading *Writing Beautiful Spark Code* is the best way to learn how to build Spark projects and leverage spark-daria.

spark-daria will make you a more productive Spark programmer. Studying the spark-daria codebase is a great way to learn how you should structure your own Spark projects.

PySpark

Use quinn to access all these same functions in PySpark.

Usage

spark-daria provides different types of functions that will make your life as a Spark developer easier:

  1. Core extensions
  2. Column functions / UDFs
  3. Custom transformations
  4. Helper methods
  5. DataFrame validators

The following overview will give you an idea of the types of functions that are provided by spark-daria, but you'll need to dig into the docs to learn about all the methods.

Core extensions

The core extensions add methods to existing Spark classes that will help you write beautiful code.

The native Spark API forces you to write code like this.

col("is_nice_person").isNull && col("likes_peanut_butter") === false

When you import the spark-daria `ColumnExt` class, you can write idiomatic Scala code like this:

```scala
import com.github.mrpowers.spark.daria.sql.ColumnExt._

col("is_nice_person").isNull && col("likes_peanut_butter").isFalse
```

This blog post describes how to use the spark-daria `createDF()` method that's much better than the `toDF()` and `createDataFrame()` methods provided by Spark.
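
Here's a minimal sketch of `createDF()`, assuming a `SparkSession` named `spark` is in scope (the column names and values below are made up for illustration):

```scala
import org.apache.spark.sql.types.{IntegerType, StringType}
import com.github.mrpowers.spark.daria.sql.SparkSessionExt._

// createDF takes the rows and a schema as (name, type, nullable) tuples,
// so you control nullability without building a StructType by hand
val someDF = spark.createDF(
  List(
    ("bob", 45),
    ("liz", 25)
  ),
  List(
    ("name", StringType, true),
    ("age", IntegerType, true)
  )
)
```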

See the `ColumnExt`, `DataFrameExt`, and `SparkSessionExt` objects for all the core extensions offered by spark-daria.

Column functions

Column functions can be used in addition to the functions in `org.apache.spark.sql.functions`.

Here is how to remove all whitespace from a string with the native Spark API:

```scala
import org.apache.spark.sql.functions._

// note the double backslash: the regex must be escaped in a Scala string
regexp_replace(col("first_name"), "\\s+", "")
```

The spark-daria `removeAllWhitespace()` function lets you express this logic with code that's more readable.

```scala
import com.github.mrpowers.spark.daria.sql.functions._

removeAllWhitespace(col("first_name"))
```
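
These column functions compose with the rest of the DataFrame API like any built-in column expression. For example (the DataFrame and column names here are hypothetical):

```scala
// use the daria function inside withColumn, just like a native function
val cleanedDF = df.withColumn(
  "clean_first_name",
  removeAllWhitespace(col("first_name"))
)
```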

Custom transformations

Custom transformations have the following method signature so they can be passed as arguments to the Spark `DataFrame#transform()` method.

```scala
def someCustomTransformation(arg1: String)(df: DataFrame): DataFrame = {
  // code that returns a DataFrame
}
```
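
For example, here's a minimal sketch of a custom transformation with that signature (the function and column names are made up for illustration):

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.lit

// hypothetical custom transformation: appends a constant greeting column
def withGreeting(greeting: String)(df: DataFrame): DataFrame = {
  df.withColumn("greeting", lit(greeting))
}

// the curried signature lets it chain cleanly with transform()
val greetedDF = df.transform(withGreeting("hello"))
```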

The spark-daria `snakeCaseColumns()` custom transformation snake_cases all of the column names in a DataFrame.

```scala
import com.github.mrpowers.spark.daria.sql.transformations._

val betterDF = df.transform(snakeCaseColumns())
```

Pro tip: You'll always want to deal with snake_case column names in Spark - use this function if your column names contain spaces or uppercase letters.

Helper methods

The DataFrame helper methods make it easy to convert DataFrame columns into Arrays or Maps. Here's how to convert a column to an Array.

```scala
import com.github.mrpowers.spark.daria.sql.DataFrameHelpers._

// assumes sourceDF has an integer "num" column; the names are illustrative
val arr = columnToArray[Int](sourceDF, "num")
```
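
There's a similar helper for building a Scala Map from a pair of columns. A minimal sketch, assuming spark-daria's `twoColumnsToMap` helper takes the DataFrame, a key column, and a value column (the column names below are hypothetical):

```scala
// assumed helper: builds a Map from the key column to the value column;
// "island" and "fun_level" are hypothetical column names
val islandMap = twoColumnsToMap[String, Int](sourceDF, "island", "fun_level")
```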

DataFrame validators

DataFrame validators check that DataFrames contain certain columns or a specific schema. They throw descriptive error messages if the DataFrame schema is not as expected. DataFrame validators are a great way to make sure your application gives descriptive error messages.

Let's look at a method that makes sure a DataFrame contains the expected columns.

```scala
val sourceDF = Seq(
  ("jets", "football"),
  ("nacional", "soccer")
).toDF("team", "sport")

val requiredColNames = Seq("team", "sport", "country", "city")

validatePresenceOfColumns(sourceDF, requiredColNames)

// throws this error message:
// com.github.mrpowers.spark.daria.sql.MissingDataFrameColumnsException:
// The [country, city] columns are not included in the DataFrame with the following columns [team, sport]
```
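
Validators can also check a full schema. A hedged sketch, assuming spark-daria's `validateSchema` takes the DataFrame and a required `StructType`:

```scala
import org.apache.spark.sql.types._

// assumed API: throws a descriptive error if the DataFrame is missing
// any of the required StructFields
val requiredSchema = StructType(
  List(
    StructField("team", StringType, true),
    StructField("sport", StringType, true)
  )
)

validateSchema(sourceDF, requiredSchema)
```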

Documentation

Here is the latest spark-daria documentation.

Studying these docs will make you a better Spark developer!

:two_women_holding_hands: :two_men_holding_hands: :couple: Contribution Criteria

We are actively looking for contributors to add functionality that fills in the gaps of the Spark source code.

To get started, fork the project and submit a pull request. Please write tests!

After submitting a couple of good pull requests, you'll be added as a contributor to the project.

Continued excellence will be rewarded with push access to the master branch.

Publishing

Build the JAR / POM files with `sbt +spDist` as described in this GitHub issue.

Manually upload the zip files to Spark Packages.

Make a GitHub release so the code is available via JitPack.
