tpot by EpistasisLab

7.3K stars · 1.3K forks · 2.3K commits · 24 releases · Last release: about 1 month ago (v0.11.5) · GNU Lesser General Public License v3.0



Package information: Python 3.7 · License: LGPL v3 · Available on PyPI

TPOT stands for Tree-based Pipeline Optimization Tool. Consider TPOT your Data Science Assistant. TPOT is a Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.

TPOT Demo

TPOT will automate the most tedious part of machine learning by intelligently exploring thousands of possible pipelines to find the best one for your data.

An example Machine Learning pipeline

Once TPOT is finished searching (or you get tired of waiting), it provides you with the Python code for the best pipeline it found so you can tinker with the pipeline from there.

An example TPOT pipeline

TPOT is built on top of scikit-learn, so all of the code it generates should look familiar... if you're familiar with scikit-learn, anyway.

TPOT is still under active development and we encourage you to check back on this repository regularly for updates.

For further information about TPOT, please see the project documentation.

License

Please see the repository license for the licensing and usage information for TPOT.

Generally, we have licensed TPOT to make it as widely usable as possible.

Installation

We maintain the TPOT installation instructions in the documentation. TPOT requires a working installation of Python.
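As a quick sketch (see the documentation for the authoritative instructions, including optional dependencies), TPOT is distributed on PyPI and can typically be installed with pip:

```shell
# Install TPOT and its core dependencies from PyPI
pip install tpot
```

The documentation also describes how to install TPOT's optional dependencies for features such as Dask-based parallelization.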

Usage

TPOT can be used on the command line or with Python code.

See the TPOT documentation for more information on both the command-line interface and the Python API.
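As a rough sketch of the command-line interface (the exact flags are documented in the TPOT docs; the data file path here is a placeholder), a classification run might look like:

```shell
# Run TPOT on a CSV file whose outcome column is named 'class':
#   -is    input separator
#   -target  name of the outcome column
#   -g / -p  number of generations / population size
#   -s / -v  random seed / verbosity
#   -o     file to export the best pipeline's Python code to
tpot data.csv -is , -target class -g 5 -p 50 -s 42 -v 2 -o tpot_exported_pipeline.py
```

This mirrors the Python examples below: the search parameters map directly onto the `TPOTClassifier` constructor arguments.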

Examples

Classification

Below is a minimal working example with the optical recognition of handwritten digits dataset.

```python
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
                                                    train_size=0.75, test_size=0.25, random_state=42)

tpot = TPOTClassifier(generations=5, population_size=50, verbosity=2, random_state=42)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_digits_pipeline.py')
```

Running this code should discover a pipeline that achieves about 98% testing accuracy, and the corresponding Python code should be exported to the `tpot_digits_pipeline.py` file and look similar to the following:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import PolynomialFeatures
from tpot.builtins import StackingEstimator
from tpot.export_utils import set_param_recursive

# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = \
    train_test_split(features, tpot_data['target'], random_state=42)

# Average CV score on the training set was: 0.9799428471757372
exported_pipeline = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False, interaction_only=False),
    StackingEstimator(estimator=LogisticRegression(C=0.1, dual=False, penalty="l1")),
    RandomForestClassifier(bootstrap=True, criterion="entropy", max_features=0.35000000000000003,
                           min_samples_leaf=20, min_samples_split=19, n_estimators=100)
)
# Fix random state for all the steps in exported pipeline
set_param_recursive(exported_pipeline.steps, 'random_state', 42)

exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)
```

Regression

Similarly, TPOT can optimize pipelines for regression problems. Below is a minimal working example with the practice Boston housing prices data set.

```python
from tpot import TPOTRegressor
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split

housing = load_boston()
X_train, X_test, y_train, y_test = train_test_split(housing.data, housing.target,
                                                    train_size=0.75, test_size=0.25, random_state=42)

tpot = TPOTRegressor(generations=5, population_size=50, verbosity=2, random_state=42)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_boston_pipeline.py')
```

which should result in a pipeline that achieves about 12.77 mean squared error (MSE), and the Python code in `tpot_boston_pipeline.py` should look similar to:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from tpot.export_utils import set_param_recursive

# NOTE: Make sure that the outcome column is labeled 'target' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = \
    train_test_split(features, tpot_data['target'], random_state=42)

# Average CV score on the training set was: -10.812040755234403
exported_pipeline = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False, interaction_only=False),
    ExtraTreesRegressor(bootstrap=False, max_features=0.5, min_samples_leaf=2,
                        min_samples_split=3, n_estimators=100)
)
# Fix random state for all the steps in exported pipeline
set_param_recursive(exported_pipeline.steps, 'random_state', 42)

exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)
```

Check the documentation for more examples and tutorials.

Contributing to TPOT

We welcome you to check the existing issues for bugs or enhancements to work on. If you have an idea for an extension to TPOT, please file a new issue so we can discuss it.

Before submitting any contributions, please review our contribution guidelines.

Having problems or have questions about TPOT?

Please check the existing open and closed issues to see if your issue has already been addressed. If it hasn't, file a new issue on this repository so we can review it.

Citing TPOT

If you use TPOT in a scientific publication, please consider citing at least one of the following papers:

Trang T. Le, Weixuan Fu and Jason H. Moore (2020). Scaling tree-based automated machine learning to biomedical big data with a feature set selector. Bioinformatics, 36(1): 250-256.

BibTeX entry:

```bibtex
@article{le2020scaling,
  title={Scaling tree-based automated machine learning to biomedical big data with a feature set selector},
  author={Le, Trang T and Fu, Weixuan and Moore, Jason H},
  journal={Bioinformatics},
  volume={36},
  number={1},
  pages={250--256},
  year={2020},
  publisher={Oxford University Press}
}
```

Randal S. Olson, Ryan J. Urbanowicz, Peter C. Andrews, Nicole A. Lavender, La Creis Kidd, and Jason H. Moore (2016). Automating biomedical data science through tree-based pipeline optimization. Applications of Evolutionary Computation, pages 123-137.

BibTeX entry:

```bibtex
@inbook{Olson2016EvoBio,
  author={Olson, Randal S. and Urbanowicz, Ryan J. and Andrews, Peter C. and Lavender, Nicole A. and Kidd, La Creis and Moore, Jason H.},
  editor={Squillero, Giovanni and Burelli, Paolo},
  chapter={Automating Biomedical Data Science Through Tree-Based Pipeline Optimization},
  title={Applications of Evolutionary Computation: 19th European Conference, EvoApplications 2016, Porto, Portugal, March 30 -- April 1, 2016, Proceedings, Part I},
  year={2016},
  publisher={Springer International Publishing},
  pages={123--137},
  isbn={978-3-319-31204-0},
  doi={10.1007/978-3-319-31204-0_9},
  url={http://dx.doi.org/10.1007/978-3-319-31204-0_9}
}
```

Randal S. Olson, Nathan Bartley, Ryan J. Urbanowicz, and Jason H. Moore (2016). Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. Proceedings of GECCO 2016, pages 485-492.

BibTeX entry:

```bibtex
@inproceedings{OlsonGECCO2016,
  author = {Olson, Randal S. and Bartley, Nathan and Urbanowicz, Ryan J. and Moore, Jason H.},
  title = {Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science},
  booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference 2016},
  series = {GECCO '16},
  year = {2016},
  isbn = {978-1-4503-4206-3},
  location = {Denver, Colorado, USA},
  pages = {485--492},
  numpages = {8},
  url = {http://doi.acm.org/10.1145/2908812.2908918},
  doi = {10.1145/2908812.2908918},
  acmid = {2908918},
  publisher = {ACM},
  address = {New York, NY, USA},
}
```

Alternatively, you can cite the repository directly with the following DOI:

DOI

Support for TPOT

TPOT was developed in the Computational Genetics Lab at the University of Pennsylvania with funding from the NIH under grant R01 AI117694. We are incredibly grateful for the support of the NIH and the University of Pennsylvania during the development of this project.

The TPOT logo was designed by Todd Newmuis, who generously donated his time to the project.
