Deploy machine learning models to production
Define any real-time or batch inference pipeline as simple Python APIs, regardless of framework.
from transformers import pipeline

class PythonPredictor:
    def __init__(self, config):
        # load the model once when the API starts
        self.model = pipeline(task="text-generation")

    def predict(self, payload):
        # run inference for each request
        return self.model(payload["text"])
Configure autoscaling, monitoring, compute resources, update strategies, and more.
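As a rough sketch of what this looks like in an API configuration file (the key names below follow the Cortex API configuration format but may differ between versions, so treat them as illustrative rather than exact):

# cortex.yaml (illustrative; exact keys vary by Cortex version)

- name: text-generator
  predictor:
    type: python
    path: predictor.py
  compute:
    gpu: 1
  autoscaling:
    min_replicas: 1
    max_replicas: 10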
Handle traffic with request-based autoscaling. Minimize spend with spot instances and multi-model APIs.
$ cortex get text-generator
status   last update   replicas   requests   latency
live     10h           10         100000     100ms
Integrate Cortex with any data science platform and CI/CD tooling, without changing your workflow.
import tensorflow
import torch
import transformers
import mlflow
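For example, a CI/CD job can deploy or update APIs after training completes by calling the CLI (a sketch, assuming a cortex.yaml in the working directory and the text-generator API from above):

$ cortex deploy              # create or update the APIs defined in cortex.yaml
$ cortex get text-generator  # confirm the API is live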
Run Cortex on your AWS account (GCP support is coming soon), maintaining control over resource utilization and data access.
region: us-west-2
instance_type: g4dn.xlarge
spot: true
min_instances: 1
max_instances: 5
You don't need to bring your own cluster or containerize your models; Cortex automates your cloud infrastructure.
$ cortex cluster up
configuring networking ...
configuring logging ...
configuring metrics ...
configuring autoscaling ...
cortex is ready!
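# install the Cortex CLI on your machine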
bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.20/get-cli.sh)"