
Version: 0.6.0 | Build: Passing | License: MPL 2.0 | PRs Welcome

Check out the RemixAutoML website or reference manual

RemixAutoML Webpage

RemixAutoML Reference Manual

Table of Contents

  • Background
  • Highlights
  • Installation
  • RemixAutoML Blogs
  • Documentation + Code Examples
    • Feature Engineering
    • Supervised Learning
    • Model Scoring
    • Model Evaluation
    • Panel Data Forecasting
    • Intermittent Demand Forecasting
    • Funnel Forecasting
    • Time Series Forecasting
    • Recommender Systems
    • Database Management
    • Utilities EDA and Misc

Background

Expand to view content

> Automated Machine Learning - In my view, AutoML should consist of functions to help make professional model development and operationalization more efficient. The functions in this package are there to help no matter which part of the ML lifecycle you are working on. The functions in this package have been tested across a variety of industries and have consistently outperformed competing methods. RemixAutoML has enabled me to deliver exceptionally high quality solutions in minimal time and as a result I can focus more time on the other aspects of projects that are critical for generating a high ROI and high adoption rate.

Package Details

> Supervised Learning - Currently, I'm utilizing CatBoost, LightGBM, XGBoost, and H2O for all of the automated machine learning functions. GPUs can be utilized with CatBoost, LightGBM, and XGBoost, while those and the H2O models can all utilize 100% of CPU. Multi-armed bandit grid tuning is available for the CatBoost, LightGBM, and XGBoost models; it utilizes randomized probability matching, as detailed in the R package "bandit". My choice of ML algorithms for the package is based on their previous success against other algorithms on real-world use cases, the additional utilities they offer aside from accurate predictions, their ability to work on big data, and the fact that they're available in both R and Python, which makes managing multiple languages a little more seamless in a professional setting.
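
For intuition, here's a minimal, self-contained sketch of randomized probability matching (Thompson sampling with Beta posteriors), the idea behind the bandit-based grid tuning described above. This is illustrative only, not the package's internal implementation, and the arm payoff probabilities are made up:

```r
# Randomized probability matching (Thompson sampling) over 5 hypothetical
# hyperparameter "arms". Reward = 1 means "this configuration improved the score".
set.seed(42)
arms <- 5L
wins <- rep(0, arms)    # successes per arm
trials <- rep(0, arms)  # attempts per arm
true_payoff <- c(0.2, 0.5, 0.3, 0.6, 0.4)  # unknown in practice; made up here
for (i in seq_len(50)) {
  draws <- rbeta(arms, 1 + wins, 1 + (trials - wins))  # posterior draw per arm
  a <- which.max(draws)                                # play the most promising arm
  reward <- rbinom(1, 1, true_payoff[a])
  wins[a] <- wins[a] + reward
  trials[a] <- trials[a] + 1
}
trials  # better-performing arms get tried more often
```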

> Forecasting - Automated functions for single series, panel data, vector autoregression, intermittent demand, and funnel panel data. The panel data models utilize the machine learning algos from above and the feature engineering functions below. They are extremely feature rich, and the combination of possible feature settings is huge. The models for individual series are fully optimized versions of those in the R package "forecast". I take the multi-armed bandit grid tuning algo used in the supervised learning models and apply it to the SARIMA and NNETAR models from the forecast package. I also measure performance on holdout data (and training data, or a blend of the two).

> Feature Engineering - Some of the feature engineering functions can only be found in this package. I believe feature engineering is your best bet for improving model performance. I have functions that cover all feature types except image data. There are feature engineering functions for numeric data, categorical data, text data, and date data. They are all designed to generate features for training and scoring pipelines and they run extremely fast with low memory utilization. The package takes advantage of data.table for all feature engineering and data wrangling related functions which means I only have to go to big data tools if absolutely necessary.

> Documentation - Each exported function in the package has a help file and can be viewed in your RStudio session, e.g. ?RemixAutoML::ModelDataPrep. Many of them come with examples coded up in the help files (at the bottom) that you can run to get a feel for how to set the parameters. There's also a listing of exported functions by category with code examples at the bottom of this readme. You can also jump into the R folder here to dig into the source code.

> Overall process: Typically, I go to the warehouse to get all of my base features and then run through all the relevant feature engineering functions in this package. Personally, I set up templates for feature engineering, model training optimization, and model scoring (including feature engineering for scoring). I collect all relevant metadata in a list that is shared across templates and, as a result, I never have to touch the model scoring template, which makes operationalization and maintenance a breeze. I can simply list out the columns of interest and the feature engineering functions I want to utilize, then kick off some command line scripts, and everything else is automatically managed.

Highlights

Model Insights Reports for Supervised Learning Functions

Machine Learning and Panel Forecasting Models

Feature Engineering

Installation

The DESCRIPTION file is designed to require only the minimum number of packages to install RemixAutoML. However, in order to utilize most of the functions in the package, you'll have to install additional libraries. I set it up this way on purpose: you don't need to install every possible dependency if you are only interested in using a few of the functions. For example, if you only want to use CatBoost, then install the catboost package and forget about the h2o, xgboost, and lightgbm packages. This is one of the primary benefits of not hosting an R package on CRAN, which requires dependencies to be part of the Imports section of the DESCRIPTION file and subsequently requires users to have all dependencies installed in order to install the package.

The minimal set of packages that need to be installed is below. The full list can be found by expanding the section (Expand to view content).

  • arules
  • bit64
  • data.table
  • doParallel
  • foreach
  • lubridate
  • timeDate

Expand to view content

Install ALL R package dependencies for all functions:

XGBoost and LightGBM can be used with GPU. However, their installation is much more involved than CatBoost's, which comes with GPU capabilities simply by installing the package. The installation instructions below are for the CPU versions only. Refer to each package's home page for GPU installation instructions.

# Install Dependencies----
if(!("devtools" %in% rownames(installed.packages()))) install.packages("devtools"); print("devtools")
if(!("arules" %in% rownames(installed.packages()))) install.packages("arules"); print("arules")
if(!("bit64" %in% rownames(installed.packages()))) install.packages("bit64"); print("bit64")
if(!("combinat" %in% rownames(install.packages()))) install.packages("combinat"); print("combinat")
if(!("data.table" %in% rownames(installed.packages()))) install.packages("data.table"); print("data.table")
if(!("doParallel" %in% rownames(installed.packages()))) install.packages("doParallel"); print("doParallel")
if(!("e1071" %in% rownames(installed.packages()))) install.packages("e1071"); print("e1071")
if(!("fBasics" %in% rownames(installed.packages()))) install.packages("fBasics"); print("fBasics")
if(!("foreach" %in% rownames(installed.packages()))) install.packages("foreach"); print("foreach")
if(!("forecast" %in% rownames(installed.packages()))) install.packages("forecast"); print("forecast")
if(!("fpp" %in% rownames(installed.packages()))) install.packages("fpp"); print("fpp")
if(!("ggplot2" %in% rownames(installed.packages()))) install.packages("ggplot2"); print("ggplot2")
if(!("gridExtra" %in% rownames(installed.packages()))) install.packages("gridExtra"); print("gridExtra")
if(!("itertools" %in% rownames(installed.packages()))) install.packages("itertools"); print("itertools")
if(!("lubridate" %in% rownames(installed.packages()))) install.packages("lubridate"); print("lubridate")
if(!("MLmetrics" %in% rownames(installed.packages()))) install.packages("MLmetrics"); print("MLmetrics")
if(!("nortest" %in% rownames(installed.packages()))) install.packages("nortest"); print("nortest")
if(!("RColorBrewer" %in% rownames(installed.packages()))) install.packages("RColorBrewer"); print("RColorBrewer")
if(!("recommenderlab" %in% rownames(installed.packages()))) install.packages("recommenderlab"); print("recommenderlab")
if(!("pROC" %in% rownames(installed.packages()))) install.packages("pROC"); print("pROC")
if(!("Rfast" %in% rownames(installed.packages()))) install.packages("Rfast"); print("Rfast")
if(!("scatterplot3d" %in% rownames(installed.packages()))) install.packages("scatterplot3d"); print("scatterplot3d")
if(!("stringr" %in% rownames(installed.packages()))) install.packages("stringr"); print("stringr")
if(!("timeDate" %in% rownames(installed.packages()))) install.packages("timeDate"); print("timeDate")
if(!("tsoutliers" %in% rownames(installed.packages()))) install.packages("tsoutliers"); print("tsoutliers")
if(!("xgboost" %in% rownames(installed.packages()))) install.packages("xgboost"); print("xgboost")
if(!("lightgbm" %in% rownames(installed.packages()))) install.packages("lightgbm"); print("lightgbm")
for (pkg in c("RCurl","jsonlite")) if (! (pkg %in% rownames(installed.packages()))) { install.packages(pkg) }
install.packages("h2o", type = "source", repos = (c("http://h2o-release.s3.amazonaws.com/h2o/latest_stable_R")))
devtools::install_github('catboost/catboost', subdir = 'catboost/R-package')
devtools::install_github('AdrianAntico/RemixAutoML', upgrade = FALSE, dependencies = FALSE, force = TRUE)

Installation Troubleshooting

The most common issue users run into when installing RemixAutoML is the installation of the catboost package dependency. Since catboost is not on CRAN, it can only be installed through GitHub. To install catboost without error (and consequently install RemixAutoML without error), try running the code below first, then restart your R session, then re-run the 2-step installation process above. If you're still having trouble, submit an issue and I'll work with you to get it installed.

# Method for on premise servers
options(devtools.install.args = c("--no-multiarch", "--no-test-load"))
install.packages("https://github.com/catboost/catboost/releases/download/v0.26.1/catboost-R-Windows-0.26.1.tgz", repos = NULL, type = "source", INSTALL_opts = c("--no-multiarch", "--no-test-load"))

# Method for azure machine learning Designer pipelines

## catboost
install.packages("https://github.com/catboost/catboost/releases/download/v0.26.1/catboost-R-Windows-0.26.1.tgz", repos = NULL, type = "source", INSTALL_opts = c("--no-multiarch", "--no-test-load"))

## RemixAutoML
install.packages("https://github.com/AdrianAntico/RemixAutoML/archive/refs/tags/V0.5.9.tar.gz", repos = NULL, type = "source", INSTALL_opts = c("--no-multiarch", "--no-test-load"))

If you're still having trouble installing, see if the issue below helps:

Issue #19

RemixAutoML Blogs

Expand to view content

Sales Funnel Forecasting with ML using RemixAutoML

The Most Feature Rich ML Forecasting Methods Available

AutoML Frameworks in R & Python

AI for Small to Medium Size Businesses: A Management Take On The Challenges...

Why Machine Learning is more Practical than Econometrics in the Real World

Build Thousands of Automated Demand Forecasts in 15 Minutes Using AutoCatBoostCARMA in R

Automate Your KPI Forecasts With Only 1 Line of R Code Using AutoTS

Feature Engineering

Expand to view content

AutoLagRollStats() and AutoLagRollStatsScoring()

Code Example

```r
# Create fake Panel Data ----
# Full runnable example: see ?RemixAutoML::AutoLagRollStats
```

Function Description

AutoLagRollStats() builds lags and rolling statistics by grouping variables and their interactions, along with multiple different time aggregations if selected. Rolling stats include mean, sd, skewness, kurtosis, and the 5th - 95th percentiles. This function was inspired by the distributed lag modeling framework, but I wanted to use it for time series analysis as well and generalize it as much as possible. The intuition comes from analyzing whether a baseball player will get a base hit or more in his next at bat. One easy way to get a better idea of the likelihood is to look at his batting average and his career batting average. However, players go into hot streaks and slumps. How do we account for that? That's where these functions come in: you look at the batting average over the last N to N+x at bats, for various N and x. It goes further, though. I want the same windows for calculating the player's standard deviation, skewness, kurtosis, and various quantiles over those time windows. I also want to look at all those measures using weekly data - as in, over the last N weeks, pull in those stats too.

AutoLagRollStatsScoring() builds the above features for a partial set of records in a data set. The function is extremely useful as it can compute these feature vectors at a significantly faster rate than the non-scoring version, which comes in handy for scoring ML models. If you can find a way to make it faster, let me know.
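
As a conceptual illustration of grouped lags and rolling windows (not the function's actual API), here's a minimal data.table sketch in the batting-average spirit of the description above:

```r
# Grouped lag and trailing rolling stats with data.table
library(data.table)
set.seed(1)
dt <- data.table(
  Player = rep(c("A", "B"), each = 10),
  Date   = rep(seq.Date(as.Date("2021-01-01"), by = "day", length.out = 10), 2),
  Hits   = rbinom(20, 1, 0.3)
)
setorder(dt, Player, Date)
dt[, Hits_Lag1  := shift(Hits, 1L), by = Player]
dt[, Hits_Mean5 := frollmean(shift(Hits, 1L), 5L), by = Player]      # trailing 5-at-bat average
dt[, Hits_SD5   := frollapply(shift(Hits, 1L), 5L, sd), by = Player] # trailing 5-at-bat sd
```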

AutoLagRollMode()

Code Example

```r
# NO GROUPING CASE: Create fake Panel Data ----
# Full runnable example: see ?RemixAutoML::AutoLagRollMode
```

Function Description

AutoLagRollMode() generates lags and rolling modes for categorical variables.

AutoDiffLagN()

Code Example

```r
##############################
# Current minus lag1
##############################

# Create fake data ----
# Full runnable example: see ?RemixAutoML::AutoDiffLagN
```

Function Description

AutoDiffLagN() generates differences for numeric columns, date columns, and categorical columns, by groups. You can specify NLag1 and NLag2 to generate the diffs based on any two time periods.
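
A minimal data.table sketch of the idea (the NLag1/NLag2 references in the comments mirror the description above; the code is illustrative, not the function's implementation):

```r
# Differences by group: current minus lag 1, and lag 1 minus lag 2
library(data.table)
set.seed(1)
dt <- data.table(Group = rep(c("A", "B"), each = 5), Value = cumsum(runif(10)))
dt[, Value_Diff_1 := Value - shift(Value, 1L), by = Group]            # like NLag1 = 0, NLag2 = 1
dt[, Value_Diff_2 := shift(Value, 1L) - shift(Value, 2L), by = Group] # like NLag1 = 1, NLag2 = 2
```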

AutoInteraction()

Code Example

```r
#########################################
# Feature Engineering for Model Training
#########################################

# Create fake data ----
# Full runnable example: see ?RemixAutoML::AutoInteraction
```

Function Description

AutoInteraction() will build out any number of interactions you want for numeric variables. You supply a character vector of numeric or integer column names, along with the names of any numeric columns you want to skip (including the interaction column names), and the interactions will be automatically created for you. For example, if you want 4th-degree interactions from 10 numeric columns, you will have 10 C 2, 10 C 3, and 10 C 4 columns created. Now, let's say you build all those features and decide you don't want all 10 features to be included: remove the feature names from the NumericVars character vector. Or say you modeled all of the interaction features and want to remove the ones with the lowest scores on the variable importance list: grab the names and run the interaction function again, except this time supply those poor-performing interaction column names to the SkipCols argument and they will be ignored. If you want to interact a categorical variable with a numeric variable, you'll have to dummify the categorical variable first and then include the level-specific dummy variable column names in the NumericVars character vector argument. If you set Center and Scale to TRUE, the interaction multiplication won't create huge numbers.
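
Here's a small sketch of what degree-2 numeric interactions look like mechanically, using base R's combn() with data.table; it's illustrative only and doesn't reproduce AutoInteraction()'s arguments beyond the NumericVars idea:

```r
# Pairwise interactions of centered/scaled numeric columns
library(data.table)
set.seed(1)
dt <- data.table(x1 = rnorm(100), x2 = rnorm(100), x3 = rnorm(100))
NumericVars <- c("x1", "x2", "x3")
# Center and scale first so the products stay small
dt[, (NumericVars) := lapply(.SD, function(x) as.numeric(scale(x))), .SDcols = NumericVars]
for (pair in combn(NumericVars, 2L, simplify = FALSE)) {
  dt[, (paste(pair, collapse = "_x_")) := get(pair[1L]) * get(pair[2L])]
}
names(dt)  # x1, x2, x3, x1_x_x2, x1_x_x3, x2_x_x3
```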

AutoWord2VecModeler() and AutoWord2VecScoring()

Code Example

```r
# Create fake data ----
# Full runnable example: see ?RemixAutoML::AutoWord2VecModeler
```

Function Description

AutoWord2VecModeler() generates a specified number of vectors (word2vec) for each column of text data in your data set that you specify, and it will save the models, if you specify a path, for re-creating them later in a model scoring process. You can choose to build individual models for each column or one model for all your columns. If you need to run several models for groups of text variables, you can run the function several times.

AutoWord2VecScoring() generates word2vec vectors for model scoring situations. The function will load the model, create the transformations, and merge them onto the source data.table just like the training version does.

CategoricalEncoding()

Code Example

```r
# Create fake data with 10 categorical columns ----
# Full runnable example: see ?RemixAutoML::CategoricalEncoding
```

Function Description

CategoricalEncoding() enables you to convert your categorical variables into numeric variables in seven different ways. You can choose from m_estimator, credibility (a.k.a. James-Stein), weight of evidence, target encoding, poly encoding, backward difference encoding, and helmert encoding. You can run the function for training data and for scoring situations (on demand or batch). For scoring, you can choose to supply an imputation value for new levels that may show up, or you can manage them somewhere else in the pipeline. For scoring you have two options: during the training run, either save the metadata to file by supplying a path to SavePath, or have the metadata returned by setting ReturnFactorLevelList to TRUE. Then, in scoring, either have the files pulled from file using the SavePath argument, or supply the returned metadata to the SupplyFactorLevelList argument, and the function will take care of the rest.
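
A minimal sketch of one of the listed methods, target encoding, to show the mechanics and why the level-to-value metadata must be saved for scoring (illustrative; not the function's API):

```r
# Target encoding: replace each level with the mean target for that level
library(data.table)
set.seed(1)
dt <- data.table(Cat = sample(c("a", "b", "c"), 100, TRUE), Target = rnorm(100))
FactorLevelList <- dt[, .(Cat_Encoded = mean(Target)), by = Cat]  # metadata to keep for scoring
dt <- merge(dt, FactorLevelList, by = "Cat")
# At scoring time, merge FactorLevelList onto new data; unseen levels come back
# NA and can be imputed, as the function's arguments describe.
```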

H2OAutoencoder() and H2OAutoencoderScoring()

Code Example

```r
############################
# Training
############################

# Create simulated data ----
# Full runnable example: see ?RemixAutoML::H2OAutoencoder
```

Function Description

H2OAutoencoder() Use for dimension reduction and anomaly detection

H2OAutoencoderScoring() Use for dimension reduction and anomaly detection scoring

H2OIsolationForest() and H2OIsolationForestScoring()

Code Example

```r
# Create simulated data ----
# Full runnable example: see ?RemixAutoML::H2OIsolationForest
```

Function Description

H2OIsolationForest() Anomaly detection and feature engineering using H2O Isolation Forest. A model is built, your training data is scored, and the model is saved to file for later use in scoring environments with H2OIsolationForestScoring()

H2OIsolationForestScoring() Scoring function

AutoClustering() and AutoClusteringScoring()

Code Example

```r
#########################
# Training Setup
#########################

# Create fake data ----
# Full runnable example: see ?RemixAutoML::AutoClustering
```

Function Description

AutoClustering() Generates a single column and merges it onto your data. You can have an autoencoder run to reduce the dimension size before running the KMeans grid tuning operation. If you provide a directory path, the models will be saved and can be used later in scoring environments. I find that I utilize clustering more often for feature engineering than for unsupervised learning, which is why I put the code example and description here. The function utilizes H2O under the hood, with their KMeans algo for the clustering and their deep learning algo for the dimensionality reduction.

CreateCalendarVariables()

Code Example

```r
# Create fake data with a Date column ----
# Full runnable example: see ?RemixAutoML::CreateCalendarVariables
```

Function Description

CreateCalendarVariables() This function creates numerical columns based on the date columns you supply, such as second, minute, hour, week day, day of month, day of year, week, isoweek, wom, month, quarter, and year.
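
A rough equivalent using data.table's own date helpers, to show the kind of output (illustrative only):

```r
# Derive calendar features from a Date column with data.table
library(data.table)
dt <- data.table(Date = seq.Date(as.Date("2021-01-01"), by = "day", length.out = 10))
dt[, `:=`(
  WeekDay    = wday(Date),
  DayOfMonth = mday(Date),
  DayOfYear  = yday(Date),
  Week       = week(Date),
  IsoWeek    = isoweek(Date),
  Month      = month(Date),
  Quarter    = quarter(Date),
  Year       = year(Date)
)]
```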

CreateHolidayVariable()

Code Example

```r
# Create fake data with a Date ----
# Full runnable example: see ?RemixAutoML::CreateHolidayVariable
```

Function Description

CreateHolidayVariable() This function counts up the number of specified holidays between the current record time stamp and the previous record time stamp, by group as well if specified.

DummifyDT()

Code Example

```r
# Create fake data with 10 categorical columns ----
# Full runnable example: see ?RemixAutoML::DummifyDT
```

Function Description

DummifyDT() This function is used in the AutoXGBoost__() suite of modeling functions to manage categorical variables in your training, validation, and test sets. It rapidly dichotomizes categorical columns in a data.table (N+1 columns for N levels using one hot encoding or N columns for N levels otherwise). Several other arguments exist for outputting and saving factor levels. This is useful in model training, validation, and scoring processes.
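
A minimal sketch of the underlying one-hot idea using data.table's dcast() (the real function also persists factor levels for consistent scoring):

```r
# One-hot encode a categorical column; each level becomes a 0/1 column
library(data.table)
dt <- data.table(ID = 1:6, Color = c("red", "blue", "red", "green", "blue", "red"))
onehot <- dcast(dt, ID ~ Color, fun.aggregate = length, value.var = "Color")
print(onehot)  # columns: ID, blue, green, red
```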

AutoDataPartition()

Code Example

```r
# Create fake data ----
# Full runnable example: see ?RemixAutoML::AutoDataPartition
```

Function Description

AutoDataPartition() is designed to achieve a few things that standard data partitioning processes or functions don't handle. First, you can choose to build any number of partitioned data sets beyond the standard train, validate, and test data sets. Second, you can choose between random sampling and time-based partitioning to split your data. Third, for the random partitioning, you can specify stratification columns in your data to stratify by, in order to ensure a proper split amongst your categorical features (e.g. MultiClass targets). Lastly, it's 100% data.table, so it will run fast and with low memory overhead.
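
A bare-bones sketch of the time-based partitioning idea (illustrative; the real function exposes ratios, stratification, and more):

```r
# Time-based train / validation / test split: oldest rows train, newest test
library(data.table)
set.seed(1)
dt <- data.table(Date = seq.Date(as.Date("2020-01-01"), by = "day", length.out = 100),
                 Target = rnorm(100))
setorder(dt, Date)
n <- nrow(dt)
TrainData      <- dt[1:floor(0.7 * n)]
ValidationData <- dt[(floor(0.7 * n) + 1):floor(0.85 * n)]
TestData       <- dt[(floor(0.85 * n) + 1):n]
```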

ModelDataPrep()

Code Example

```r
# Create fake data ----
# Full runnable example: see ?RemixAutoML::ModelDataPrep
```

Function Description

ModelDataPrep() This function will loop through every column in your data and apply a variety of functions based on argument settings. For all columns not ignored, these tasks include:

  • Character type to Factor type conversion
  • Factor type to Character type conversion
  • Constant value imputation for numeric and categorical columns
  • Integer type to Numeric type conversion
  • Date type to Character type conversion
  • Remove date columns
  • Ignore specified columns

AutoTransformationCreate() and AutoTransformationScore()

Function Description

AutoTransformationCreate() is a function for automatically identifying the optimal transformations for numeric features and transforming them once identified. This function will loop through your selected transformation options (YeoJohnson, BoxCox, Asinh, Log, LogPlus1, Sqrt, along with Asin and Logit for proportion data) and find the one that produces the best fit to a normal distribution. It then generates the transformation and collects the metadata information for use in the AutoTransformationScore() function, either by returning the objects or saving them to file.

AutoTransformationScore() is the complement function to AutoTransformationCreate(). It automatically applies or inverses the transformations you identified in AutoTransformationCreate() on other data sets. This is useful for applying transformations to your validation and test data sets for modeling, which is done automatically for you if you specify.
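
A conceptual sketch of the selection loop, using a Shapiro-Wilk statistic as a stand-in for the normality test and a subset of the transformation options (illustrative only, not the function's implementation):

```r
# Pick the transformation whose result looks most Gaussian
set.seed(7)
x <- rexp(500)  # right-skewed example data
candidates <- list(
  Identity = function(z) z,
  Log      = function(z) log(z),
  Sqrt     = function(z) sqrt(z),
  Asinh    = function(z) asinh(z)
)
fits <- sapply(candidates, function(f) unname(shapiro.test(f(x))$statistic))
names(which.max(fits))  # e.g. "Log" for exponential data
```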

AutoHierarchicalFourier()

Function Description

AutoHierarchicalFourier() turns time series data into Fourier series. This function can generate any number of Fourier pairs the user wants (provided the data can support them), and you can run it with grouped time series data. In the grouping case, Fourier pairs can be created for each categorical variable along with the full interactions between specified categoricals. The process is parallelized as well to run as fast as possible.
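
The building block is the standard Fourier pair sin(2*pi*k*t/m) and cos(2*pi*k*t/m) for frequency k and period m; a minimal sketch:

```r
# Generate two Fourier pairs for a weekly-period time index
library(data.table)
dt <- data.table(t = 1:104)  # e.g. two years of weekly data
period <- 52
for (k in 1:2) {
  dt[, (paste0("FourierSin_", k)) := sin(2 * pi * k * t / period)]
  dt[, (paste0("FourierCos_", k)) := cos(2 * pi * k * t / period)]
}
```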

Supervised Learning

Expand to view content

Regression

click to expand

Regression Description

The Auto_Regression() models handle a multitude of tasks. In order:

  1. Convert your data to data.table format for faster processing
  2. Transform your target variable using the best normalization method based on the AutoTransformationCreate() function
  3. Create train, validation, and test data, utilizing the AutoDataPartition() function, if you didn't supply those directly to the function
  4. Consolidate columns that are used for modeling and what metadata you want returned in your test data with predictions
  5. Dichotomize categorical variables (for AutoXGBoostRegression()) and save the factor levels for scoring in a way that guarantees consistency across training, validation, and test data sets, utilizing the DummifyDT() function
  6. Save the final modeling column names for reference
  7. Handle the data conversion to the appropriate modeling type, such as CatBoost, H2O, and XGBoost
  8. Multi-armed bandit hyperparameter tuning using randomized probability matching, if you choose to grid tune
  9. Loop through the grid-tuning process, building N models
  10. Collect the evaluation metrics for each grid tune run
  11. Identify the best model of the set of models built in the grid tuning search
  12. Save the hyperparameters from the winning grid tuned model
  13. Build the final model based on the best model from the grid tuning model search (I remove each model after evaluation metrics are generated in the grid tune to avoid memory overflow)
  14. Back-transform your predictions based on the best transformation used earlier in the process
  15. Collect evaluation metrics based on performance on test data (based on back-transformed data)
  16. Store the final predictions with the associated test data and other columns you want included in that set
  17. Save your transformation metadata for recreating them in a scoring process
  18. Build out and save an Evaluation Calibration Line Plot and Evaluation Calibration Box-Plot, using the EvalPlot() function
  19. Generate and save Variable Importance
  20. Generate and save Partial Dependence Calibration Line Plots and Partial Dependence Calibration Box-Plots, using the ParDepPlots() function
  21. Return all the objects generated in a named list for immediate use and evaluation

CatBoost Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoCatBoostRegression
```

XGBoost Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoXGBoostRegression
```

LightGBM Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoLightGBMRegression
```

H2O-GBM Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoH2oGBMRegression
```

H2O-DRF Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoH2oDRFRegression
```

H2O-GLM Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoH2oGLMRegression
```

H2O-AutoML Example

```r
# Create some dummy correlated data with numeric and categorical features ----
# Full runnable example: see ?RemixAutoML::AutoH2oAutoMLRegression
```

H2O-GAM Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoH2oGAMRegression
```

Binary Classification

click to expand

Classification Description

The Auto_Classifier() models handle a multitude of tasks. In order:

  1. Convert your data to data.table format for faster processing
  2. Create train, validation, and test data if you didn't supply those directly to the function
  3. Consolidate columns that are used for modeling and what is to be kept for data returned
  4. Dichotomize categorical variables (for AutoXGBoostClassifier()) and save the factor levels for scoring in a way that guarantees consistency across training, validation, and test data sets
  5. Save the final column names for modeling to a csv for later reference
  6. Handle the data conversion to the appropriate type, based on model type (CatBoost, H2O, and XGBoost)
  7. Multi-armed bandit hyperparameter tuning using randomized probability matching, if you choose to grid tune
  8. Build the grid tuned models
  9. Collect the evaluation metrics for each grid tune run
  10. Identify the best model of the set of models built in the grid tuning setup
  11. Save the hyperparameters from the winning grid tuned model
  12. Build the final model based on the best model from the grid tuning model search
  13. Collect evaluation metrics based on performance on test data
  14. Store the final predictions with the associated test data and other columns you want included in that set
  15. Build out and save an Evaluation Calibration Line Plot
  16. Build out and save an ROC plot with the top 5 models used in grid-tuning (includes the winning model)
  17. Generate and save Variable Importance data
  18. Generate and save Partial Dependence Calibration Line Plots
  19. Return all the objects generated in a named list for immediate use

CatBoost Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoCatBoostClassifier
```

XGBoost Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoXGBoostClassifier
```

LightGBM Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoLightGBMClassifier
```

H2O-GBM Example

```r
# Create some dummy correlated data ----
# Full runnable example: see the matching AutoH2o*Classifier() help file
```

H2O-DRF Example

```r
# Create some dummy correlated data ----
# Full runnable example: see the matching AutoH2o*Classifier() help file
```

H2O-GLM Example

```r
# Create some dummy correlated data with numeric and categorical features ----
# Full runnable example: see the matching AutoH2o*Classifier() help file
```

H2O-AutoML Example

```r
# Create some dummy correlated data with numeric and categorical features ----
# Full runnable example: see the matching AutoH2o*Classifier() help file
```

H2O-GAM Example

```r
# Create some dummy correlated data ----
# Full runnable example: see the matching AutoH2o*Classifier() help file
```

MultiClass Classification

click to expand

MultiClass Description

The Auto_MultiClass() models handle a multitude of tasks. In order:

  1. Convert your data to data.table format for faster processing
  2. Create train, validation, and test data if you didn't supply those directly to the function
  3. Consolidate columns that are used for modeling and what is to be kept for data returned
  4. Dichotomize categorical variables (for AutoXGBoostMultiClass()) and save the factor levels for scoring in a way that guarantees consistency across training, validation, and test data sets
  5. Save the final column names for modeling to a csv for later reference
  6. Ensure the target levels are consistent across train, validate, and test sets and save the levels to file
  7. Handle the data conversion to the appropriate type, based on model type (CatBoost, H2O, and XGBoost)
  8. Multi-armed bandit hyperparameter tuning using randomized probability matching, if you choose to grid tune
  9. Build the grid tuned models
  10. Collect the evaluation metrics for each grid tune run
  11. Identify the best model of the set of models built in the grid tuning setup
  12. Save the hyperparameters from the winning grid tuned model
  13. Build the final model based on the best model from the grid tuning model search
  14. Collect evaluation metrics based on performance on test data
  15. Store the final predictions with the associated test data and other columns you want included in that set
  16. Generate and save Variable Importance data
  17. Return all the objects generated in a named list for immediate use

CatBoost Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoCatBoostMultiClass
```

XGBoost Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoXGBoostMultiClass
```

LightGBM Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoLightGBMMultiClass
```

H2O-GBM Example

```r
# Create some dummy correlated data ----
# Full runnable example: see the matching AutoH2o*MultiClass() help file
```

H2O-DRF Example

```r
# Create some dummy correlated data ----
# Full runnable example: see the matching AutoH2o*MultiClass() help file
```

H2O-GLM Example

```r
# Create some dummy correlated data with numeric and categorical features ----
# Full runnable example: see the matching AutoH2o*MultiClass() help file
```

H2O-AutoML Example

```r
# Create some dummy correlated data with numeric and categorical features ----
# Full runnable example: see the matching AutoH2o*MultiClass() help file
```

H2O-GAM Example

```r
# Create some dummy correlated data with numeric and categorical features ----
# Full runnable example: see the matching AutoH2o*MultiClass() help file
```

Generalized Hurdle Models

click to expand

Hurdle Model Description

The first step is to build either a binary classification model (in the case of a single bucket value, such as zero) or a multiclass model (in the case of multiple bucket values, such as zero and 10). The next step is to subset the data for the cases of: less than the first split value, in between the first and second split values, the second and third split values, ..., the second-to-last and last split values, along with greater than the last split value. For each data subset, a regression model is built for predicting values in that split value range. The final prediction multiplies the probability of being in each group by the regression prediction for that group; a small numeric sketch follows the formulas below.

Single Partition

  • E(y|xi) = Pr(X = 0) * 0 + Pr(X > 0) * E(X | X > 0)
  • E(y|xi) = Pr(X < x1) * E(X | X < x1) + Pr(X >= x1) * E(X | X >= x1)

Multiple Partitions

  • E(y|xi) = Pr(X = 0) * 0 + Pr(0 < X < x2) * E(X | 0 < X < x2) + ... + Pr(xn-1 <= X < xn) * E(X | xn-1 <= X < xn) + Pr(X >= xn) * E(X | X >= xn)
  • E(y|xi) = Pr(X < x1) * E(X | X < x1) + Pr(x1 <= X < x2) * E(X | x1 <= X < x2) + ... + Pr(xn-1 <= X < xn) * E(X | xn-1 <= X < xn) + Pr(X >= xn) * E(X | X >= xn)
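
A tiny numeric sketch of the single-partition formula, with placeholder values standing in for the classifier and regression outputs:

```r
# Compile a hurdle prediction from the two model outputs (values made up)
p_nonzero <- 0.35   # Pr(X > 0) from the classifier
e_nonzero <- 12.8   # E(X | X > 0) from the regression model
e_y <- (1 - p_nonzero) * 0 + p_nonzero * e_nonzero
e_y  # 4.48 = expected value combining both models
```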

AutoCatBoostHurdleModel() utilizes the CatBoost algorithm on the backend.

AutoXGBoostHurdleModel() utilizes the XGBoost algorithm on the backend.

AutoLightGBMHurdleModel() utilizes the LightGBM algorithm on the backend.

CatBoost Example

```r
# Test data.table ----
# Full runnable example: see ?RemixAutoML::AutoCatBoostHurdleModel
```

XGBoost Example

```r
# Test data.table ----
# Full runnable example: see ?RemixAutoML::AutoXGBoostHurdleModel
```

LightGBM Example

```r
# Test data.table ----
# Full runnable example: see ?RemixAutoML::AutoLightGBMHurdleModel
```

Model Scoring

Expand to view content

Scoring Description

AutoCatBoostScoring() is an automated scoring function that complements the AutoCatBoost__() model training functions. This function requires you to supply features for scoring. It will run ModelDataPrep() to prepare your features for catboost data conversion and scoring. It will also handle any transformations and back-transformations if you utilized that feature in the regression training case.

AutoXGBoostScoring() is an automated scoring function that complements the AutoXGBoost__() model training functions. This function requires you to supply features for scoring. It will run ModelDataPrep() and the CategoricalEncoding() functions to prepare your features for xgboost data conversion and scoring. It will also handle any transformations and back-transformations if you utilized that feature in the regression training case.

AutoLightGBMScoring() is an automated scoring function that complements the AutoLightGBM__() model training functions. This function requires you to supply features for scoring. It will run ModelDataPrep() and the CategoricalEncoding() functions to prepare your features for lightgbm data conversion and scoring. It will also handle any transformations and back-transformations if you utilized that feature in the regression training case.

AutoH2OMLScoring() is an automated scoring function that complements the AutoH2oGBM__() and AutoH2oDRF__() model training functions. This function requires you to supply features for scoring. It will run ModelDataPrep() to prepare your features for H2O data conversion and scoring. It will also handle transformations and back-transformations if you utilized that feature in the regression training case and didn't do it yourself beforehand.

AutoCatBoostHurdleModelScoring() for scoring models developed with AutoCatBoostHurdleModel()

AutoLightGBMHurdleModelScoring() for scoring models developed with AutoLightGBMHurdleModel()

AutoXGBoostHurdleModelScoring() for scoring models developed with AutoXGBoostHurdleModel()

AutoCatBoost__() Examples

AutoCatBoostRegression() Scoring Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::AutoCatBoostScoring
```

AutoCatBoostClassifier() Scoring Example

```r
# Refresh data ----
# Full runnable example: see ?RemixAutoML::AutoCatBoostScoring
```

AutoCatBoostMultiClass() Scoring Example

```r
# Refresh data ----
# Full runnable example: see ?RemixAutoML::AutoCatBoostScoring
```

AutoCatBoostHurdleModel() Scoring Example

```r
# Classify or Multiclass example ----
# Full runnable example: see ?RemixAutoML::AutoCatBoostHurdleModelScoring
```

AutoLightGBM__() Examples

AutoLightGBMRegression() Scoring Example

```r
# Refresh data ----
# Full runnable example: see ?RemixAutoML::AutoLightGBMScoring
```

AutoLightGBMClassifier() Scoring Example

```r
# Refresh data ----
# Full runnable example: see ?RemixAutoML::AutoLightGBMScoring
```

AutoLightGBMMultiClass() Scoring Example

```r
# Refresh data ----
# Full runnable example: see ?RemixAutoML::AutoLightGBMScoring
```

AutoLightGBMHurdleModel() Scoring Example

```r
# Classify ----
# Full runnable example: see ?RemixAutoML::AutoLightGBMHurdleModelScoring
```

AutoXGBoost__() Examples

AutoXGBoostRegression() Scoring Example

```r
# Refresh data ----
# Full runnable example: see ?RemixAutoML::AutoXGBoostScoring
```

AutoXGBoostClassifier() Scoring Example

```r
# Refresh data ----
# Full runnable example: see ?RemixAutoML::AutoXGBoostScoring
```

AutoXGBoostMultiClass() Scoring Example

```r
# Refresh data ----
# Full runnable example: see ?RemixAutoML::AutoXGBoostScoring
```

AutoXGBoostHurdleModel() Scoring Example

```r
# Classify ----
# Full runnable example: see ?RemixAutoML::AutoXGBoostHurdleModelScoring
```

Model Evaluation

Expand to view content

Regression ModelInsightsReport() Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::ModelInsightsReport
```

Classification ModelInsightsReport() Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::ModelInsightsReport
```

MultiClass ModelInsightsReport() Example

```r
# Create some dummy correlated data ----
# Full runnable example: see ?RemixAutoML::ModelInsightsReport
```

AutoShapeShap() Example

```r
# ShapTable ----
# Full runnable example: see ?RemixAutoML::AutoShapeShap
```

SingleRowShapeShap() Example

```r
# ShapTable ----
# Full runnable example: see ?RemixAutoML::SingleRowShapeShap
```

Functions Description

AutoShapeShap() will take your CatBoost or XGBoost TestData or TrainData returned from one of the supervised learning functions and build out a table. The table will have columns 'Date', 'EntityID', 'Variable', 'AbsShapValue', 'ShapValue', 'CurrentValue', 'DiffValue', and 'PreviousValue', which can be used to analyze variable importance for any slice of the data. A few interesting areas to investigate include variable importance by records associated with a classifier label of 1 vs 0, for different grouping levels, and for different time slices. Further, if you analyze the ShapValue column, you will get a directional variable importance instead of a magnitude importance. You can analyze the data further by looking at the standard deviation of ShapValues or any other aggregation method of interest. The function runs row by row, so I set it up to run in parallel. You can specify the number of cores to utilize in case you have other processes running on your computer. The function calls the SingleRowShapeShap() function repeatedly. If you want to return information from the Shap table for a single record, you can run SingleRowShapeShap().

SingleRowShapeShap() generates a table of shap values, current values, and diff values (if diff variables are present) for a single predicted value.

RemixClassificationMetrics() will return all confusion matrix metrics across all possible thresholds (seq(0.01, 0.99, 0.01)) for any Remix Auto_Classification() model. Cost-sensitive thresholds are also returned.

ParDepCalPlots() is for visualizing the relationships of features and the reliability of the model in predicting those effects. Build a partial dependence calibration line plot, box plot or bar plot for the case of categorical variables. For numeric variables, if you have a date variable you can supply the name of that column and a time aggregation to return a heatmap as well, which captures the relationship over time.

EvalPlot() has two plot versions: a calibration line plot of predicted and actual values across the range of predicted values, and a calibration boxplot for seeing the accuracy and variability of predictions against actuals.

ROCPlot() For classification evaluation

CumGainsChart() Create cumulative gains and lift plots

ResidualPlots() Plots to evaluate regression residuals

VI_Plot() For creating variable importance plots

threshOptim() is great for situations with asymmetric costs across the confusion matrix. Generate a cost-sensitive optimized threshold for classification models. Just supply the costs for false positives and false negatives (can supply costs for all four outcomes too) and the function will return the optimal threshold for maximizing "utility".
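
A minimal sketch of the cost-sensitive threshold sweep (simulated data; illustrative, not threshOptim()'s implementation):

```r
# Sweep thresholds and pick the one maximizing utility under
# asymmetric false positive / false negative costs
set.seed(1)
actual <- rbinom(1000, 1, 0.3)
pred   <- plogis(qlogis(0.3) + actual * 1.5 + rnorm(1000))  # fake model scores
CostFP <- 1; CostFN <- 5                                    # made-up costs
thresholds <- seq(0.01, 0.99, 0.01)
utility <- sapply(thresholds, function(t) {
  fp <- sum(pred >= t & actual == 0)
  fn <- sum(pred <  t & actual == 1)
  -(CostFP * fp + CostFN * fn)
})
thresholds[which.max(utility)]  # cost-optimal threshold
```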

RedYellowGreen() computes optimal thresholds for binary classification models where "don't classify" is an option. Consider a health care binary classification model that predicts whether or not a disease is present. This is certainly a case for threshOptim since the costs of false positives and false negatives can vary by a large margin. However, there is always the potential to run further analysis. The RedYellowGreen() function can compute two thresholds if you can supply a cost of "further analysis". Predicted values < the lower threshold are confidently classified as a negative case and predicted values > the upper threshold are confidently classified as a positive case. Predicted values in between the lower and upper thresholds are cases that should require further analysis.

Panel Data Forecasting

Expand to view content

Panel Models Description

The CARMA suite utilizes several features to ensure proper models are built to generate the best possible out-of-sample forecasts.

Feature engineering: I use a time trend, calendar variables, holiday counts, and differencing, along with Fourier pairs and lags / rolling statistics (mean, sd, skewness, kurtosis, quantiles), and they can be generated by categorical variables and their interactions and for various time-based aggregations. Internally, the CARMA functions utilize several RemixAutoML functions, all written using data.table for fast and memory efficient processing:

  • AutoLagRollStats() - creates lags and moving average features (also creates lags and moving averages off of time between records)
  • AutoLagRollStatsScoring() - creates lags and moving average features for a single record (along with the time between vars)
  • CreateCalendarVariables() - creates numeric features identifying various time units based on date columns
  • CreateHolidayVariables() - creates count features based on the specified holiday groups you want to track and the date columns you supply
  • AutoHierarchicalFourier() - creates Fourier pairs, by group, in parallel, for group variables and their interactions
  • DifferenceData() - differencing of the target variable for single series and panel data

Optimal transformations: the target variable, along with the associated lags and moving average features, can be transformed. This is really useful for regression models with categorical features that have associated target values that significantly differ from each other. The transformation options that are tested (using a Pearson test for normality) include:

  • YeoJohnson
  • BoxCox
  • Log
  • LogPlus1
  • Sqrt
  • arcsinh
  • Identity
  • arcsin(sqrt(x)): proportion data only
  • logit(x): proportion data only

The functions used to create these and generate them for scoring models come from RemixAutoML:

  • AutoTransformationCreate()
  • AutoTransformationScore()

Models: there are eight CARMA functions and each uses a different algorithm for the model fitting. The models used to fit the time series data come from RemixAutoML and include:

  • AutoCatBoostRegression()
  • AutoXGBoostRegression()
  • AutoLightGBMRegression()
  • AutoH2oDRFRegression()
  • AutoH2oGBMRegression()
  • AutoH2oGLMRegression()
  • AutoH2oGAMRegression()
  • AutoH2oAutoMLRegression()

GPU: With the CatBoost and XGBoost functions, you can build the models utilizing GPU (I run them with a GeForce 1080ti) which results in an average 10x speedup in model training time (compared to running on CPU with 8 threads).

Data partitioning: for creating the training, validation, and test data, the CARMA functions utilize the AutoDataPartition() function with the "timeseries" option for the PartitionType argument, which ensures that the train data reflects the furthest points back in time, followed by the validation data, and then the test data, which is the most recent in time.

Forecasting: Once the regression model is built, the forecast process replicates the ARIMA process. After a single step-ahead forecast is made, the lags and moving average features are updated based on the predicted values from scoring the model. Next, the remaining features are updated. Then the next forecast step is made; rinse and repeat for the remaining forecasting steps. This process utilizes the RemixAutoML functions:

  • AutoCatBoostScoring()
  • AutoXGBoostScoring()
  • AutoLightGBMScoring()
  • AutoH2oMLScoring()

TimeSeriesFill() is a function that will zero pad (currently only zero pad) a time series data set (not transactional data). There are four ways to use this function; choose from:

  • maxmax - Fill from the absolute min date to the absolute max date (single series and panel data)
  • minmax - Fill from the max date of the min set to the absolute max date (panel data)
  • maxmin - Fill from the absolute min date to the min of the max dates (panel data)
  • minmin - Fill from the max date of the min dates to the min date of the max dates (panel data)
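
A minimal data.table sketch of the "maxmax" fill for panel data (illustrative only, not the function's implementation):

```r
# Build the full Group x Date grid, join, and fill missing demand with 0
library(data.table)
dt <- data.table(Group  = c("A", "A", "B"),
                 Date   = as.Date(c("2021-01-01", "2021-01-03", "2021-01-02")),
                 Demand = c(5, 3, 7))
grid <- CJ(Group = unique(dt$Group),
           Date  = seq(min(dt$Date), max(dt$Date), by = "day"))
filled <- dt[grid, on = c("Group", "Date")]  # right join keeps every grid row
filled[is.na(Demand), Demand := 0]
```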

Code Example: AutoCatBoostCARMA()

```r
#########################################################################
# Out-of-Sample Feature + Grid Tuning of RemixAutoML::AutoCatBoostCARMA()
#########################################################################

# Set up your output file path for saving results as a .csv ----
# Full runnable example: see ?RemixAutoML::AutoCatBoostCARMA
```

Code Example: AutoCatBoostVectorCARMA()

```r
###############################################
# ML-Based Vector AutoRegression CARMA ----
###############################################

# Load Walmart Data from Dropbox ----
# Full runnable example: see ?RemixAutoML::AutoCatBoostVectorCARMA
```

Code Example: AutoXGBoostCARMA()

```r
###############################
# XGBoost Version ----
###############################

# Load data ----
# Full runnable example: see ?RemixAutoML::AutoXGBoostCARMA
```

Code Example: AutoLightGBMCARMA()

```r
###############################
# LightGBM Version ----
###############################

# Load data ----
# Full runnable example: see ?RemixAutoML::AutoLightGBMCARMA
```

Code Example: AutoH2OCARMA()

```r
###############################
# H2O Version ----
###############################

# Load data ----
# Full runnable example: see ?RemixAutoML::AutoH2OCARMA
```

Intermittent Demand Forecasting

Expand to view content

Intermittent Demand Forecasting Description

Background Intermittent demand forecasting is the process of forecasting entities that do not have positive demand in every period of your data. This is a pretty broad field: some entities could simply be missing demand for a few periods out of thousands, while others could have only a few demand instances over a three-year window. The functions in RemixAutoML allow you to forecast these types of processes for single series and grouped series. The available functions utilize CatBoost, LightGBM, and XGBoost. I don't think the functions here can handle every possible case of intermittent demand forecasting, but I always have them available to test out for these types of projects. What's unique about these functions is that they deploy two machine learning models in an autoregressive forecasting style process. Under the hood, both models are encapsulated by the Auto__HurdleModel() and Auto__HurdleModelScoring() functions. They deploy a classifier and a regression model. Both are combined to generate predictions that are hurdle in nature (as opposed to zero-inflated). They are appropriate because the classifier will determine whether a period has zero demand, and the regression is used to determine the size of the demand instance assuming one occurs.

Feature engineering Feature engineering matches that of the CARMA suite (more detail in that section of the README).

Data structure Data structure matches that of the CARMA suite. Both the regular Panel CARMA and Hurdle CARMA models will zero-fill any periods that are missing due to zero demand, but I expect there to be many more missing periods for Hurdle CARMA. The Panel CARMA models can also handle periods with no demand, and you should feel free to also test out those models for your intermittent demand forecasting project.

API The API is near identical to the Panel CARMA models except that ML Args require a list() to be supplied that holds both the argument for the classifier and the regression model. E.g. list('classifier' = 100, 'regression' = 100). The list does need to be named and it is case sensitive. Both 'classifier' and 'regression' are lower case.

ML parameters Similarly to the other ML functions, most ML args are exposed with the functions so you can tune them in a ton of ways. You can also run them with a GPU if you've installed the GPU versions of the packages (relevant for XGBoost and LightGBM).

Usage for business Most of the intermittent demand forecasting I've done professionally was for supply chain use cases. These use cases typically involved some sort of optimization engine to tell procurement specialists when to buy and how much to buy. In order to build a robust optimization engine, you will have to generate prediction intervals for the forecast. The CARMA suite does not provide those off the shelf, but they are actually pretty easy to obtain. When you run the function with TrainOnFull = FALSE, you will have test data returned with predictions, and you can use that data to understand the variability in predictions and proceed from there in generating your prediction intervals.

Code Example: AutoCatBoostHurdleCARMA()

```r
# Load Walmart Data from Dropbox ----
# Full runnable example: see ?RemixAutoML::AutoCatBoostHurdleCARMA
```

Code Example: AutoXGBoostHurdleCARMA()

```r
# Load Walmart Data from Dropbox ----
# Full runnable example: see ?RemixAutoML::AutoXGBoostHurdleCARMA
```

Code Example: AutoLightGBMHurdleCARMA()

```r
# Load Walmart Data from Dropbox ----
# Full runnable example: see ?RemixAutoML::AutoLightGBMHurdleCARMA
```

Funnel Forecasting

Expand to view content

Funnel Forecasting Description

Background Funnel forecasting is the process of forecasting the periods out from cohort start dates and across calendar time. The functions in RemixAutoML allow you to forecast these types of processes for single series and grouped series. The available functions utilize CatBoost, LightGBM, and XGBoost. Typically these forecasting projects are centered around the sales funnel but could be applied to any cohort-style data structure. There are two primary reasons to utilize the functions in this package over some alternatives out there. One, they utilize machine learning algorithms, whereas the alternative methods utilize GLMs at best, and simple averaging more commonly. Two, there are tons of feature engineering opportunities with this kind of data structure that are altogether ignored by other methods.

Feature engineering The feature engineering that goes into these functions includes calendar and cohort date features (e.g. day of week, week of month, month of year, etc.), holiday features for both calendar and cohort dates, and time series features that cover both calendar and cohort dates (lags and rolling stats). The lags and rolling stats across cohort dates are what make these functions really unique. In the Panel CARMA functions in RemixAutoML, lags and rolling stats are generated for calendar time; here, I also take advantage of cohort time. There are also automatic categorical encoding methods for LightGBM and XGBoost for categorical variables (CatBoost handles categorical variables internally), and automatic transformations that can be utilized, with the functions managing the conversion and back-transform for you automatically. XREGS (exogenous variables) are also permitted, and they must be attached to the base funnel data. The XREGS need to span the entire forecast horizon.

Data structure Typical data sets begin with some sort of base funnel measure, such as leads. The conversion measures of interest typically include sales or intermediate steps between leads and sales. What the functions do internally is predict the conversion rates across cohort time and calendar time. Once all periods are forecasted, the conversion measure is also computed. Model insights are saved to file so you can inspect the driving factors to the cohort process and the model performance measures.

The data structure the functions expect will have columns such as 'CalendarDateColumn', 'CohortDateColumn', 'CohortPeriodsOut', 'Leads', and 'Appointments'. If you have group variables, they would also be columns. The data should be in long format: for every 'CalendarDateColumn' value there will be many corresponding 'CohortDateColumn' values, which makes sense since for each cohort there will be many periods out where conversion measures are generated. The CohortPeriodsVariable values represent the number of numeric units from the cohort date base value. Example: if a single cohort is for the calendar date '2020-01-01' and the corresponding cohort date is '2020-01-10', then the CohortPeriodsVariable will have a value of 10 (numeric or integer).
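
For concreteness, here's an illustrative long-format table of that shape; the column names follow the description above, the values are made up, and CohortPeriodsOut is counted from 0 here purely as an example convention:

```r
# One cohort (base date 2020-01-01) tracked over its first four periods out
library(data.table)
dt <- data.table(
  CalendarDateColumn = as.Date("2020-01-01"),        # the cohort's base date
  CohortDateColumn   = as.Date("2020-01-01") + 0:3,  # follow-up dates for the cohort
  CohortPeriodsOut   = 0:3,
  Leads              = 100,
  Appointments       = c(9, 6, 4, 2)
)
```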

API For this forecasting use case, I split the training and forecasting processes into two separate functions for each ML method. Auto__FunnelCARMA() (for model training) and Auto__FunnelCARMAScoring() (for forecasting) are the two methods to be aware of.

ML parameters Similarly to the other ML functions, most ML args are exposed with the functions so you can tune them in a ton of ways. You can also run them with a GPU if you've installed the GPU versions of the packages (relevant for XGBoost and LightGBM).

Usage for business There are several additional benefits to forecasting with the Funnel models versus converting the data to standard panel data structures. Business groups are often interested in individual cohorts, and they utilize that information not only for planning but also to adjust strategies and identify issues with existing strategies. Anomaly detection can also be conducted by comparing forecasts to actuals when new data is made available, which is another way to help the business get ahead of issues before they become significant.

CatBoost Funnel Example

AutoCatBoostFunnelCARMA() and AutoCatBoostFunnelCARMAScoring()

```r
# Create Fake Data ----
# Full runnable example: see ?RemixAutoML::AutoCatBoostFunnelCARMA
```

LightGBM Funnel Example

AutoLightGBMFunnelCARMA() and AutoLightGBMFunnelCARMAScoring()

```r
# Create Fake Data ----
# Full runnable example: see ?RemixAutoML::AutoLightGBMFunnelCARMA
```

XGBoost Funnel Example

AutoXGBoostFunnelCARMA() and AutoXGBoostFunnelCARMAScoring()

```r
# Create Fake Data ----
# Full runnable example: see ?RemixAutoML::AutoXGBoostFunnelCARMA
```

Time Series Forecasting

Expand to view content

Time Series Forecasting Description

There are three sets of functions for single series traditional time series model forecasting. The first set includes the AutoBanditSarima() and AutoBanditNNet() functions. These two offer the most robust fitting strategies: they utilize a multi-armed bandit to help narrow the search space of available parameter settings. The next batch includes the AutoTBATS(), AutoETS(), and AutoArfima() functions. These don't utilize the bandit framework; rather, they run through a near-exhaustive search of all their possible settings. Both the bandit set and the non-bandit set utilize parallelism to burn through as many models as possible in a fixed amount of time. The third set includes the AutoTS() function. This function will run through ARIMA, ARFIMA, NNet, ETS, TBATS, TSLM, and DSHW models with a few variations of parameter settings. The best possible model of the set will likely come from the AutoBanditSarima() function, but it's never a bad idea to start with a model search using AutoTS() and then run through the others. This way you can speak to different algorithms' performance and the benefits of the selected model.

  • Bandit: AutoBanditSarima
  • Bandit: AutoBanditNNet
  • Exhaustive: AutoTBATS
  • Exhaustive: AutoETS
  • Exhaustive: AutoArfima
  • Best of forecast package: AutoTS

AutoBanditSarima() Example

```r
# Build model ----
# Full runnable example: see ?RemixAutoML::AutoBanditSarima
```

AutoBanditNNet() Example

```r
# Build model ----
# Full runnable example: see ?RemixAutoML::AutoBanditNNet
```

AutoTBATS() Example

```r
# Build model ----
# Full runnable example: see ?RemixAutoML::AutoTBATS
```

AutoETS() Example

```r
# Build model ----
# Full runnable example: see ?RemixAutoML::AutoETS
```

AutoArfima() Example

```r
# Build model ----
# Full runnable example: see ?RemixAutoML::AutoArfima
```

AutoTS() Example

```r
# Build model ----
# Full runnable example: see ?RemixAutoML::AutoTS
```

Recommender Systems

Expand to view content

Functions Description

AutoRecommenderDataCreate() automatically creates your binary ratings matrix from transaction data

AutoRecommenderTrain() automated collaborative filtering modeling where each model below competes against the others for top performance:

  • RandomItems
  • PopularItems
  • UserBasedCF
  • ItemBasedCF
  • AssociationRules

AutoRecommenderScore() automatically scores a recommender model from AutoRecommenderTrain()

AutoMarketBasketModel() is a function that runs a market basket analysis automatically. It will convert your data, run the algorithm, and generate the recommended items. On top of that, it includes additional significance values not provided by the source package.

AutoRecommenderDataCreate() Example

```r
# Data comes in transactional form as a data.table ----
# Full runnable example: see ?RemixAutoML::AutoRecommenderDataCreate
```

AutoRecommenderTrain() Example

```r
# Build Model ----
# Full runnable example: see ?RemixAutoML::AutoRecommenderTrain
```

AutoRecommenderScore() Example

```r
# Score new data ----
# Full runnable example: see ?RemixAutoML::AutoRecommenderScore
```

AutoMarketBasketModel() Example

```r
# Build rules set ----
# Full runnable example: see ?RemixAutoML::AutoMarketBasketModel
```

Database Management

Expand to view content

AutoDataDictionary() will pull back data dictionary information from a SQL Server data warehouse. There are several data dictionary types that can be returned, such as every table that exists along with every column and its metadata. Another good one pulls back all tables and their counterparts that can be used in joins, along with the joining SQL.

SQL_Server_DBConnection() Creates a connection to SQL Server

SQL_Query_Push() Pushes data to a SQL Server warehouse

SQL_Query() Queries a SQL Server table

SQL_ClearTable() Deletes all rows of a SQL Server table

SQL_DropTable() Removes a SQL Server table

SQL_SaveTable() Writes a SQL Server table

Utilities EDA and Misc

Expand to view content

UserBaseEvolution() Create an output table of accumulated unique user counts, active unique user counts, new unique user counts, retained unique user counts, churned unique user counts, and reactivated unique user counts. You can specify the aggregation window (Month, Week, and Day) and you can specify the Churn Window.

EDA_Histograms() Create histograms and bar plots for all columns from a data.table, or a subset provided by the user to the PlotColumns arg.

ScatterCopula() Creates a copula-based scatterplot between two numeric variables. Additionally, a gam fit is added to the plot.

PlotGUI() Spins up the esquisse shiny app for generating quick plots. You need to have that package installed to utilize it. I did not include it as a dependency for the package.

AutoWordFreq() creates a word frequency data.table and a word cloud

RemixTheme() is a specific font, set of colors, and style for plots.

ChartTheme() is a specific font, set of colors, and style for plots.

multiplot() is useful for displaying multiple plots in a single pane. I've never had luck using grid so I just use this instead.

FakeDataGenerator() I use this to create fake data for the examples in the function help files

DeleteFile() If you are on a Windows machine, you can use this function to delete a file such that you don't have to delete it again from the recycle bin.

PrintToPDF() You can have plots or tables printed to pdf with this function. It will manage the grDevices for you.
