jackhopkins / cortex

This project is a fork of cortexlabs/cortex.


Run inference at scale

Home Page: https://cortex.dev

License: Apache License 2.0

Languages: Go 68.32%, Python 16.10%, Jupyter Notebook 10.22%, Shell 3.24%, HTML 0.88%, Dockerfile 0.84%, Makefile 0.39%

cortex's Introduction


Run inference at scale

Cortex is an open source platform for large-scale inference workloads.


Model serving infrastructure

  • Supports deploying TensorFlow, PyTorch, and other models as realtime or batch APIs.
  • Ensures high availability with availability zones and automated instance restarts.
  • Runs inference on on-demand instances or spot instances with on-demand backups.
  • Autoscales to handle production workloads with support for overprovisioning.

Configure a cluster

# cluster.yaml

region: us-east-1
instance_type: g4dn.xlarge
min_instances: 10
max_instances: 100
spot: true
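
Spot usage can be paired with an on-demand fallback, per the bullet above; a hedged sketch, assuming the spot_config and on_demand_backup field names from recent Cortex releases:

# cluster.yaml (optional addition) -- spot_config / on_demand_backup
# are assumed field names; verify against your Cortex version

spot: true
spot_config:
  on_demand_backup: true  # fall back to on-demand capacity when spot is unavailable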

Spin up on your AWS or GCP account

$ cortex cluster up --config cluster.yaml

○ configuring autoscaling ✓
○ configuring networking ✓
○ configuring logging ✓

cortex is ready!
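
Once the cluster is up, the same CLI can inspect its status and endpoints; a minimal sketch, assuming the cluster info subcommand available in recent Cortex releases:

$ cortex cluster info --config cluster.yaml
# (assumed subcommand; check cortex cluster --help on your version)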

Reproducible deployments

  • Package dependencies, code, and configuration for reproducible deployments.
  • Configure compute, autoscaling, and networking for each API.
  • Integrate with your data science platform or CI/CD system.
  • Deploy custom Docker images or use the pre-built defaults.
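
Per the last bullet, a custom image is a per-API setting; a hedged fragment, assuming the predictor image field from recent Cortex releases (the registry URL is hypothetical, and predictor.py is defined in the next section):

# API configuration fragment -- image is an assumed field name

predictor:
  type: python
  path: predictor.py
  image: quay.io/my-org/python-predictor-gpu:latest  # hypothetical custom image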

Define an API

# predictor.py

from transformers import pipeline

class PythonPredictor:
    def __init__(self, config):
        # load the Hugging Face text-generation pipeline once, at API startup
        self.model = pipeline(task="text-generation")

    def predict(self, payload):
        # run generation on the request body and return the first result
        return self.model(payload["text"])[0]

Configure an API

# text_generator.yaml

- name: text-generator
  kind: RealtimeAPI
  predictor:
    type: python
    path: predictor.py
  compute:
    gpu: 1
    mem: 8Gi
  autoscaling:
    min_replicas: 1
    max_replicas: 10
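
The realtime configuration above has a batch counterpart, per the first section's mention of batch APIs; a hedged sketch, assuming the BatchAPI kind accepts the same predictor and compute blocks:

# batch_text_generator.yaml -- a sketch; field support may differ by version

- name: text-generator-batch
  kind: BatchAPI
  predictor:
    type: python
    path: predictor.py
  compute:
    gpu: 1
    mem: 8Gi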

Scalable machine learning APIs

  • Scale to handle production workloads with request-based autoscaling.
  • Stream performance metrics and logs to any monitoring tool.
  • Serve many models efficiently with multi-model caching.
  • Use rolling updates to update APIs without downtime.
  • Configure traffic splitting for A/B testing.
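
The traffic splitting mentioned in the last bullet is configured as its own kind; a hedged sketch, assuming the TrafficSplitter kind from recent Cortex releases (text-generator-a and text-generator-b are hypothetical deployed variants):

# traffic_splitter.yaml -- a sketch; the variant API names are hypothetical

- name: text-generator
  kind: TrafficSplitter
  apis:
    - name: text-generator-a
      weight: 80
    - name: text-generator-b
      weight: 20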

Deploy to your cluster

$ cortex deploy text_generator.yaml

# creating http://example.com/text-generator

Consume your API

$ curl http://example.com/text-generator -X POST -H "Content-Type: application/json" -d '{"text": "hello world"}'
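
The endpoint is plain HTTP, so any client works; a minimal Python sketch equivalent to the curl call above (the URL is the placeholder printed by cortex deploy):

# client.py

import requests

# POST the same JSON payload the curl example sends
response = requests.post(
    "http://example.com/text-generator",
    json={"text": "hello world"},
)
print(response.json())  # generated text from the predictor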

Get started

Documentation and installation instructions are available at https://cortex.dev.
