Awesome production machine learning

This repository contains a curated list of awesome open source libraries that will help you deploy, monitor, version, scale, and secure your production machine learning.

Quick links to sections in this page

🔍 Explaining predictions & models 🔏 Privacy preserving ML 📜 Model & data versioning
🏁 Model Training Orchestration 💪 Model Serving and Monitoring 🤖 Neural Architecture Search
📓 Reproducible Notebooks 📊 Visualisation frameworks 🔠 Industry-strength NLP
🧵 Data pipelines & ETL 🏷️ Data Labelling 📅 Metadata Management
📡 Functions as a service 🗺️ Computation distribution 📥 Model serialisation
🧮 Optimized calculation frameworks 💸 Data Stream Processing 🔴 Outlier and Anomaly Detection
🌀 Feature engineering 🎁 Feature Stores ⚔ Adversarial Robustness
💰 Commercial Platforms 💾 Data Storage Layer

10 Min Video Overview

This 10-minute video provides an overview of the motivations for machine learning operations, as well as a high-level overview of some of the tools in this repo.

Want to receive recurring updates on this repo and other advancements?

You can join the Machine Learning Engineer newsletter. You will receive updates on open source frameworks, tutorials and articles curated by machine learning professionals.
Also check out the Awesome Artificial Intelligence Guidelines List, where we aim to map the landscape of "Frameworks", "Codes of Ethics", "Guidelines", "Regulations", etc. related to Artificial Intelligence.

Main Content

Explaining Black Box Models and Datasets

  • Aequitas - An open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers to audit machine learning models for discrimination and bias, and to make informed and equitable decisions around developing and deploying predictive risk-assessment tools.
  • Alibi - Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The initial focus of the library is on black-box, instance-based model explanations.
  • anchor - Code for the paper "High precision model agnostic explanations", a model-agnostic system that explains the behaviour of complex models with high-precision rules called anchors.
  • captum - Model interpretability and understanding library for PyTorch developed by Facebook. It contains general purpose implementations of integrated gradients, saliency maps, smoothgrad, vargrad and others for PyTorch models.
  • casme - Example of using classifier-agnostic saliency map extraction on ImageNet, presented in the paper "Classifier-agnostic saliency map extraction".
  • ContrastiveExplanation (Foil Trees) - Python script for model agnostic contrastive/counterfactual explanations for machine learning. Accompanying code for the paper "Contrastive Explanations with Local Foil Trees".
  • DeepLIFT - Codebase that contains the methods in the paper "Learning important features through propagating activation differences". Here are the slides and the video of the 15 minute talk given at ICML.
  • DeepVis Toolbox - This is the code required to run the Deep Visualization Toolbox, as well as to generate the neuron-by-neuron visualizations using regularized optimization. The toolbox and methods are described casually here and more formally in this paper.
  • ELI5 - "Explain Like I'm 5" is a Python package which helps to debug machine learning classifiers and explain their predictions.
  • FACETS - Facets contains two robust visualizations to aid in understanding and analyzing machine learning datasets. Get a sense of the shape of each feature of your dataset using Facets Overview, or explore individual observations using Facets Dive.
  • Fairlearn - Fairlearn is a python toolkit to assess and mitigate unfairness in machine learning models.
  • FairML - FairML is a Python toolbox for auditing machine learning models for bias.
  • fairness - This repository is meant to facilitate the benchmarking of fairness aware machine learning algorithms based on this paper.
  • GEBI - Global Explanations for Bias Identification - Attention-based, summarized post-hoc explanations for detecting and identifying bias in data. We propose a global explanation and introduce a step-by-step framework on how to detect and test bias. Python package for image data.
  • IBM AI Explainability 360 - Interpretability and explainability of data and machine learning models including a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics.
  • IBM AI Fairness 360 - A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
  • iNNvestigate - An open-source library for analyzing Keras models visually by methods such as DeepTaylor-Decomposition, PatternNet, Saliency Maps, and Integrated Gradients.
  • Integrated-Gradients - This repository provides code for implementing integrated gradients for networks with image inputs.
  • InterpretML - InterpretML is an open-source package for training interpretable models and explaining blackbox systems.
  • keras-vis - keras-vis is a high-level toolkit for visualizing and debugging your trained keras neural net models. Currently supported visualizations include: Activation maximization, Saliency maps, Class activation maps.
  • L2X - Code for replicating the experiments in the paper "Learning to Explain: An Information-Theoretic Perspective on Model Interpretation" at ICML 2018
  • Lightwood - A PyTorch-based framework that breaks down machine learning problems into smaller blocks that can be glued together seamlessly, with the objective of building predictive models with one line of code.
  • LIME - Local Interpretable Model-agnostic Explanations for machine learning models.
  • LOFO Importance - LOFO (Leave One Feature Out) Importance calculates the importances of a set of features based on a metric of choice, for a model of choice, by iteratively removing each feature from the set, and evaluating the performance of the model, with a validation scheme of choice, based on the chosen metric.
  • MindsDB - MindsDB is an Explainable AutoML framework for developers. With MindsDB you can build, train and use state-of-the-art ML models with as little as one line of code.
  • mljar-supervised - An Automated Machine Learning (AutoML) python package for tabular data. It can handle: Binary Classification, MultiClass Classification and Regression. It provides feature engineering, explanations and markdown reports.
  • NETRON - Viewer for neural network, deep learning and machine learning models.
  • pyBreakDown - A model agnostic tool for decomposition of predictions from black boxes. Break Down Table shows contributions of every variable to a final prediction.
  • rationale - Code for learning rationales behind predictions, accompanying the paper "Rationalizing Neural Predictions".
  • responsibly - Toolkit for auditing and mitigating bias and fairness of machine learning systems
  • SHAP - SHapley Additive exPlanations is a unified approach to explain the output of any machine learning model.
  • Skater - Skater is a unified framework to enable model interpretation for all forms of models, helping build the interpretable machine learning systems often needed for real-world use cases.
  • Tensorboard's What-If Tool - Tensorboard screen to analyse the interactions between inference results and data inputs.
  • Tensorflow's cleverhans - An adversarial example library for constructing attacks, building defenses, and benchmarking a system's vulnerability to adversarial examples.
  • Tensorflow's Lucid - Lucid is a collection of infrastructure and tools for research in neural network interpretability.
  • tensorflow's Model Analysis - TensorFlow Model Analysis (TFMA) is a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer.
  • themis-ml - themis-ml is a Python library built on top of pandas and sklearn that implements fairness-aware machine learning algorithms.
  • Themis - Themis is a testing-based approach for measuring discrimination in a software system.
  • TreeInterpreter - Package for interpreting scikit-learn's decision tree and random forest predictions. Allows decomposing each prediction into bias and feature contribution components as described in http://blog.datadive.net/interpreting-random-forests/.
  • woe - Tools for WoE (Weight of Evidence) transformation, mostly used in scorecard models for credit rating.
  • XAI - eXplainableAI - An eXplainability toolbox for machine learning.
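
As a concrete illustration of the per-prediction explanation workflow the libraries above target, here is a minimal, hedged sketch using SHAP; the toy dataset and model choice are arbitrary assumptions for the example.

```python
# Hypothetical sketch: explain a tree ensemble's predictions with SHAP.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree-based models
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Global view of feature importance across the explained rows
shap.summary_plot(shap_values, X.iloc[:100])
```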

Privacy Preserving Machine Learning

  • Google's Differential Privacy - This is a C++ library of ε-differentially private algorithms, which can be used to produce aggregate statistics over numeric data sets containing private or sensitive information.
  • Intel Homomorphic Encryption Backend - The Intel HE transformer for nGraph is a Homomorphic Encryption (HE) backend to the Intel nGraph Compiler, Intel's graph compiler for Artificial Neural Networks.
  • Microsoft SEAL - Microsoft SEAL is an easy-to-use open-source (MIT licensed) homomorphic encryption library developed by the Cryptography Research group at Microsoft.
  • PySyft - A Python library for secure, private Deep Learning. PySyft decouples private data from model training, using Multi-Party Computation (MPC) within PyTorch.
  • Substra - Substra is an open-source framework for privacy-preserving, traceable and collaborative Machine Learning.
  • Tensorflow Privacy - A Python library that includes implementations of TensorFlow optimizers for training machine learning models with differential privacy.
  • TF Encrypted - A Framework for Confidential Machine Learning on Encrypted Data in TensorFlow.
  • Uber SQL Differential Privacy - Uber's open source framework that enforces differential privacy for general-purpose SQL queries.

Model and Data Versioning

  • Apache Marvin - A platform for model deployment and versioning that hides all complexity under the hood: data scientists just need to set up the server and write their code in an extended Jupyter notebook.
  • Catalyst - High-level utils for PyTorch DL & RL research. It was developed with a focus on reproducibility, fast experimentation and code/ideas reusing.
  • D6tflow - A python library that allows for building complex data science workflows on Python.
  • Data Version Control (DVC) - An open source version control system for data and machine learning models that works alongside Git.
  • FGLab - Machine learning dashboard, designed to make prototyping experiments easier.
  • Flor - Easy to use logger and automatic version controller made for data scientists who write ML code
  • Hangar - Version control for tensor data, git-like semantics on numerical data with high speed and efficiency.
  • MLflow - Open source platform to manage the ML lifecycle, including experimentation, reproducibility and deployment.
  • MLWatcher - MLWatcher is a Python agent that records a large variety of time-series metrics of your running ML classification algorithm. It enables you to monitor it in real time.
  • ModelChimp - Framework to track and compare all the results and parameters from machine learning models (Video)
  • ModelDB - An open-source system to version machine learning models including their ingredients code, data, config, and environment and to track ML metadata across the model lifecycle.
  • Pachyderm - Open source distributed processing framework built on Kubernetes, focused mainly on dynamic building of production machine learning pipelines - (Video)
  • Polyaxon - A platform for reproducible and scalable machine learning and deep learning on Kubernetes. - (Video)
  • PredictionIO - An open source Machine Learning Server built on top of a state-of-the-art open source stack for developers and data scientists to create predictive engines for any machine learning task
  • Quilt Data - Versioning, reproducibility and deployment of data and models.
  • Sacred - Tool to help you configure, organize, log and reproduce machine learning experiments.
  • steppy - Lightweight Python 3 library for fast and reproducible machine learning experimentation. Introduces a simple interface that enables clean machine learning pipeline design.
  • Studio.ML - Model management framework which minimizes the overhead involved with scheduling, running, monitoring and managing artifacts of your machine learning experiments.
  • TRAINS - Auto-Magical Experiment Manager & Version Control for AI.
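
To give a flavour of the experiment tracking that several of the tools above provide, here is a minimal MLflow sketch; the parameter names, metric and model are hypothetical placeholders, not a prescribed workflow.

```python
# Hypothetical MLflow experiment-tracking sketch on a toy dataset.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="baseline"):
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    # Persist the model artifact so this exact version can be redeployed later
    mlflow.sklearn.log_model(model, "model")
```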

Model Training Orchestration

  • CML - Continuous Machine Learning (CML) is an open-source library for implementing continuous integration & delivery (CI/CD) in machine learning projects.
  • Determined - Deep learning training platform with integrated support for distributed training, hyperparameter tuning, and model management (supports Tensorflow and Pytorch).
  • Hopsworks - Hopsworks is a data-intensive platform for the design and operation of machine learning pipelines that includes a Feature Store. (Video).
  • Kubeflow - A cloud native platform for machine learning based on Google’s internal machine learning pipelines.
  • MLeap - Standardisation of pipeline and model serialization for Spark, Tensorflow and sklearn
  • NVIDIA TensorRT - TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.
  • Onepanel - Production scale vision AI platform, with fully integrated components for model building, automated labeling, data processing and model training pipelines.
  • Open Platform for AI - Platform that provides complete AI model training and resource management capabilities.
  • PyCaret - Low-code library for training and deploying models (scikit-learn, XGBoost, LightGBM, spaCy)
  • Redis-ML - Module available from unstable branch that supports a subset of ML models as Redis data types. (Replaced by Redis AI)
  • Skaffold - Skaffold is a command line tool that facilitates continuous development for Kubernetes applications. You can iterate on your application source code locally then deploy to local or remote Kubernetes clusters.
  • Tensorflow Extended (TFX) - Production oriented configuration framework for ML based on TensorFlow, incl. monitoring and model version management.
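
As a small taste of the low-code training style that PyCaret (listed above) targets, a sketch along these lines is typical; the DataFrame, its `target` column and the CSV path are assumptions for the example, and exact `setup` arguments vary between PyCaret versions.

```python
# Hypothetical low-code PyCaret sketch: compare candidate models on tabular data.
import pandas as pd
from pycaret.classification import setup, compare_models, finalize_model

df = pd.read_csv("training_data.csv")   # assumed to contain a 'target' column

setup(data=df, target="target", session_id=42)  # initialise the experiment
best = compare_models()                          # cross-validate candidate models
final_model = finalize_model(best)               # refit the winner on all data
```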

Model Serving and Monitoring

  • BentoML - BentoML is an open source framework for high performance ML model serving
  • Cortex - Cortex is an open source platform for deploying machine learning models—trained with any framework—as production web services. No DevOps required.
  • DeepDetect - Machine Learning production server for TensorFlow, XGBoost and Caffe models, written in C++ and maintained by Jolibrain
  • ForestFlow - Cloud-native machine learning model server.
  • Jina - Cloud-native search framework that supports using deep learning / state-of-the-art AI models for search.
  • KFServing - Serverless framework to deploy and monitor machine learning models in Kubernetes - (Video)
  • Model Server for Apache MXNet (MMS) - A model server for Apache MXNet from Amazon Web Services that is able to run MXNet models as well as Gluon models (Amazon's SageMaker runs a custom version of MMS under the hood)
  • NVIDIA TensorRT Inference Server - TensorRT Inference Server is an inference microservice that lets you serve deep learning models in production while maximizing GPU utilization.
  • OpenScoring - REST web service for scoring PMML models built and maintained by OpenScoring.io
  • Redis-AI - A Redis module for serving tensors and executing deep learning models. Expect changes in the API and internals.
  • Seldon Core - Open source platform for deploying and monitoring machine learning models in kubernetes - (Video)
  • Tensorflow Serving - High-performance framework to serve Tensorflow models via the gRPC protocol, able to handle 100k requests per second per core
  • TorchServe - TorchServe is a flexible and easy to use tool for serving PyTorch models.
  • WhyLogs - Lightweight solution for profiling and monitoring your ML data pipeline end-to-end
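
Most of the servers above expose models over HTTP or gRPC. As an illustration, this is a hedged sketch of querying a model already deployed behind Tensorflow Serving's REST API; the host, port, model name and input payload are assumptions about a particular deployment.

```python
# Hypothetical client call against a model hosted by Tensorflow Serving's REST API.
import json
import requests

payload = {"instances": [[5.1, 3.5, 1.4, 0.2]]}  # one example input row
response = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
)
print(response.json()["predictions"])
```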

Adversarial Robustness Libraries

  • AdvBox - generate adversarial examples from the command line with 0 coding using PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow. Includes 10 attacks and also 6 defenses. Used to implement StealthTshirt at DEFCON!
  • Adversarial DNN Playground - think TensorFlow Playground, but for Adversarial Examples! A visualization tool designed for learning and teaching - the attack library is limited in size, but it has a nice front-end to it with buttons you can press!
  • AdverTorch - library for adversarial attacks / defenses specifically for PyTorch.
  • Alibi Detect - alibi-detect is a Python package focused on outlier, adversarial and concept drift detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. The outlier detection methods should allow the user to identify global, contextual and collective outliers.
  • Artificial Adversary - AirBnB's library to generate text that reads the same to a human but passes adversarial classifiers.
  • CleverHans - library for testing adversarial attacks / defenses maintained by some of the most important names in adversarial ML, namely Ian Goodfellow (ex-Google Brain, now Apple) and Nicolas Papernot (Google Brain). Comes with some nice tutorials!
  • DEEPSEC - another systematic tool for attacking and defending deep learning models.
  • EvadeML - benchmarking and visualization tool for adversarial ML maintained by Weilin Xu, a PhD at the University of Virginia, working with David Evans. Has a tutorial on re-implementation of one of the most important adversarial defense papers - feature squeezing (same team).
  • Foolbox - second biggest adversarial library. Has an even longer list of attacks - but no defenses or evaluation metrics. Geared more towards computer vision. Code easier to understand / modify than ART - also better for exploring blackbox attacks on surrogate models.
  • IBM Adversarial Robustness 360 Toolbox (ART) - at the time of writing this is the most complete off-the-shelf resource for testing adversarial attacks and defenses. It includes a library of 15 attacks, 10 empirical defenses, and some nice evaluation metrics. Neural networks only.
  • MIA - A library for running membership inference attacks (MIA) against machine learning models.
  • Nicholas Carlini’s Adversarial ML reading list - not a library, but a curated list of the most important adversarial papers by one of the leading minds in Adversarial ML, Nicholas Carlini. If you want to discover the 10 papers that matter the most, I would start here.
  • Robust ML - another robustness resource maintained by some of the leading names in adversarial ML. They specifically focus on defenses, and ones that have published code available next to papers. Practical and useful.
  • TextFool - plausible looking adversarial examples for text generation.
  • Trickster - Library and experiments for attacking machine learning in discrete domains using graph search.
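
To illustrate the attack/evaluation workflow these libraries support, here is a hedged sketch of a Fast Gradient Sign Method attack using IBM's ART against a scikit-learn classifier; the dataset and perturbation budget are arbitrary choices for the example.

```python
# Hypothetical sketch: craft FGSM adversarial examples with the
# Adversarial Robustness Toolbox (ART) against a logistic regression.
import numpy as np
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=500).fit(X, y)

classifier = SklearnClassifier(model=model)
attack = FastGradientMethod(estimator=classifier, eps=0.2)  # perturbation budget
X_adv = attack.generate(x=X.astype(np.float32))

print(f"clean accuracy:       {model.score(X, y):.3f}")
print(f"adversarial accuracy: {model.score(X_adv, y):.3f}")
```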

Neural Architecture Search

Data Science Notebook Frameworks

  • Apache Zeppelin - Web-based notebook that enables data-driven, interactive data analytics and collaborative documents with SQL, Scala and more.
  • Binder - Binder hosts notebooks in an executable environment (for free).
  • H2O Flow - Jupyter notebook-like interface for H2O to create, save and re-use "flows"
  • Hydrogen - A plugin for ATOM that enables it to become a jupyter-notebook-like interface that prints the outputs directly in the editor.
  • Jupyter Notebooks - Web interface python sandbox environments for reproducible development
  • ML Workspace - All-in-one web IDE for machine learning and data science. Combines Jupyter, VS Code, Tensorflow, and many other tools/libraries into one Docker image.
  • Papermill - Papermill is a library for parameterizing notebooks and executing them like Python scripts.
  • Polynote - Polynote is an experimental polyglot notebook environment. Currently, it supports Scala and Python (with or without Spark), SQL, and Vega.
  • RMarkdown - The rmarkdown package is a next generation implementation of R Markdown based on Pandoc.
  • Stencila - Stencila is a platform for creating, collaborating on, and sharing data driven content. Content that is transparent and reproducible.
  • Voilà - Voilà turns Jupyter notebooks into standalone web applications that can e.g. be used as dashboards.
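
As an example of how notebooks become reproducible pipeline steps, a Papermill invocation looks roughly like this; the notebook filenames and parameter names are placeholders.

```python
# Hypothetical Papermill sketch: run a parameterised notebook like a script.
import papermill as pm

pm.execute_notebook(
    "train_model.ipynb",            # input notebook with a 'parameters' cell
    "train_model_lr_0.01.ipynb",    # executed output, kept as an audit record
    parameters={"learning_rate": 0.01, "epochs": 10},
)
```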

Industrial Strength Visualisation libraries

  • Bokeh - Bokeh is an interactive visualization library for Python that enables beautiful and meaningful visual presentation of data in modern web browsers.
  • Geoplotlib - geoplotlib is a python toolbox for visualizing geographical data and making maps
  • ggplot2 - An implementation of the grammar of graphics for R.
  • matplotlib - A Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms.
  • Missingno - missingno provides a small toolset of flexible and easy-to-use missing data visualizations and utilities that allows you to get a quick visual summary of the completeness (or lack thereof) of your dataset.
  • PDPBox - This repository is inspired by ICEbox. The goal is to visualize the impact of certain features towards model prediction for any supervised learning algorithm. (now supports all scikit-learn algorithms)
  • Perspective - Streaming pivot visualization via WebAssembly https://perspective.finos.org/
  • Pixiedust - PixieDust is a productivity tool for Python or Scala notebooks, which lets a developer encapsulate business logic into something easy for your customers to consume.
  • Plotly Dash - Dash is a Python framework for building analytical web applications without the need to write javascript.
  • Plotly.py - An interactive, open source, and browser-based graphing library for Python.
  • PyCEbox - Python Individual Conditional Expectation Plot Toolbox
  • pygal - pygal is a dynamic SVG charting library written in python
  • Redash - Redash is an open source visualisation framework that is built to allow easy access to big datasets leveraging multiple backends.
  • seaborn - Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics.
  • Streamlit - Streamlit lets you create apps for your machine learning projects with deceptively simple Python scripts. It supports hot-reloading, so your app updates live as you edit and save your file
  • XKCD-style plots - An XKCD theme for matplotlib visualisations
  • yellowbrick - yellowbrick provides matplotlib-based model evaluation plots for scikit-learn and other machine learning libraries.
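
As a minimal sketch of the dashboarding workflow that tools like Streamlit enable, the snippet below renders an interactive view of a predictions file; the file name and column names are placeholders, and you would run it with `streamlit run app.py`.

```python
# app.py - hypothetical Streamlit sketch: interactive view of model scores.
import pandas as pd
import streamlit as st

st.title("Model monitoring demo")

df = pd.read_csv("predictions.csv")          # assumed columns: date, score
threshold = st.slider("Score threshold", 0.0, 1.0, 0.5)

st.line_chart(df.set_index("date")["score"])  # score over time
st.write(df[df["score"] >= threshold])        # rows above the chosen threshold
```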

Industrial Strength NLP

  • Blackstone - Blackstone is a spaCy model and library for processing long-form, unstructured legal text. Blackstone is an experimental research project from the Incorporated Council of Law Reporting for England and Wales' research lab, ICLR&D.
  • CTRL - A Conditional Transformer Language Model for Controllable Generation released by SalesForce
  • Facebook's XLM - PyTorch original implementation of Cross-lingual Language Model Pretraining which includes BERT, XLM, NMT, XNLI, PKM, etc.
  • Flair - Simple framework for state-of-the-art NLP developed by Zalando which builds directly on PyTorch.
  • Github's Semantic - Github's text library for parsing, analyzing, and comparing source code across many languages.
  • GluonNLP - GluonNLP is a toolkit that enables easy text preprocessing, datasets loading and neural models building to help you speed up your Natural Language Processing (NLP) research.
  • GNES - Generic Neural Elastic Search is a cloud-native semantic search system based on deep neural networks.
  • Grover - Grover is a model for Neural Fake News -- both generation and detection. However, it probably can also be used for other generation tasks.
  • Kashgari - Kashgari is a simple and powerful NLP transfer learning framework that lets you build a state-of-the-art model in 5 minutes for named entity recognition (NER), part-of-speech tagging (PoS), and text classification tasks.
  • OpenAI GPT-2 - OpenAI's code from their paper "Language Models are Unsupervised Multitask Learners".
  • sense2vec - A Pytorch library that allows for training and using sense2vec models, which leverage the same approach as word2vec but also use part-of-speech attributes for each token, allowing them to be "meaning-aware"
  • Snorkel - Snorkel is a system for quickly generating training data with weak supervision https://snorkel.org.
  • SpaCy - Industrial-strength natural language processing library built with Python and Cython by the explosion.ai team.
  • Stable Baselines - A fork of OpenAI Baselines, implementations of reinforcement learning algorithms http://stable-baselines.readthedocs.io/.
  • Tensorflow Lingvo - A framework for building neural networks in Tensorflow, particularly sequence models. Lingvo: A TensorFlow Framework for Sequence Modeling.
  • Tensorflow Text - TensorFlow Text provides a collection of text related classes and ops ready to use with TensorFlow 2.0.
  • Wav2Letter++ - A speech to text system developed by Facebook's FAIR teams.
  • YouTokenToMe - YouTokenToMe is an unsupervised text tokenizer focused on computational efficiency. It currently implements fast Byte Pair Encoding (BPE) [Sennrich et al.].
  • 🤗 Transformers - Huggingface's library of state-of-the-art pretrained models for Natural Language Processing (NLP).
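
For a sense of how little code the libraries above require for common tasks, here is a minimal Hugging Face Transformers sketch; the default sentiment model is downloaded on first use, and the input text is just an illustrative string.

```python
# Minimal Hugging Face Transformers sketch: off-the-shelf sentiment analysis.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default pretrained model
result = classifier("Deploying this model to production was surprisingly painless.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```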

Data Pipeline ETL Frameworks

  • Apache Airflow - Data Pipeline framework built in Python, including scheduler, DAG definition and a UI for visualisation
  • Apache Nifi - Apache NiFi was made for dataflow. It supports highly configurable directed graphs of data routing, transformation, and system mediation logic.
  • Argo Workflows - Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition).
  • Azkaban - Azkaban is a batch workflow job scheduler created at LinkedIn to run Hadoop jobs. Azkaban resolves the ordering through job dependencies and provides an easy to use web user interface to maintain and track your workflows.
  • Basin - Visual programming editor for building Spark and PySpark pipelines
  • Bonobo - ETL framework for Python 3.5+ with focus on simple atomic operations working concurrently on rows of data
  • Chronos - More of a job scheduler for Mesos than ETL pipeline. [OUTDATED]
  • Couler - Unified interface for constructing and managing machine learning workflows on different workflow engines, such as Argo Workflows, Tekton Pipelines, and Apache Airflow.
  • Dagster - A data orchestrator for machine learning, analytics, and ETL.
  • Genie - Job orchestration engine to interface and trigger the execution of jobs from Hadoop-based systems
  • Gokart - Wrapper of the data pipeline Luigi
  • Kedro - Kedro is a workflow development tool that helps you build data pipelines that are robust, scalable, deployable, reproducible and versioned. Visualization of the kedro workflows can be done by kedro-viz
  • Luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs, handling dependency resolution, workflow management, visualisation, etc
  • Metaflow - A framework for data scientists to easily build and manage real-life data science projects.
  • Neuraxle - A framework for building neat pipelines, providing the right abstractions to chain your data transformation and prediction steps with data streaming, as well as doing hyperparameter searches (AutoML).
  • Oozie - Workflow scheduler for Hadoop jobs
  • PipelineX - Based on Kedro and MLflow. Full comparison given at https://github.com/Minyus/Python_Packages_for_Pipeline_Workflow
  • Prefect Core - Workflow management system that makes it easy to take your data pipelines and add semantics like retries, logging, dynamic mapping, caching, failure notifications, and more.
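
To illustrate the DAG-style pipeline definitions most of these frameworks share, here is a hedged Apache Airflow sketch; the task names and schedule are placeholders, and the import path assumes Airflow 2.x.

```python
# Hypothetical Airflow 2.x sketch: a two-step daily pipeline defined as a DAG.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw data")

def train():
    print("training the model")

with DAG(
    dag_id="daily_training",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    train_task = PythonOperator(task_id="train", python_callable=train)
    extract_task >> train_task  # train runs only after extract succeeds
```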

Data Labelling Tools and Frameworks

  • COCO Annotator - Web-based image segmentation tool for object detection, localization and keypoints
  • Computer Vision Annotation Tool (CVAT) - OpenCV's web-based annotation tool for both VIDEOS and images for computer vision algorithms.
  • Doccano - Open source text annotation tools for humans, providing functionality for sentiment analysis, named entity recognition, and machine translation.
  • ImageTagger - Image labelling tool with support for collaboration, supporting bounding box, polygon, line, point labelling, label export, etc.
  • ImgLab - Image annotation tool for bounding boxes with auto-suggestion and extensibility for plugins.
  • Label Studio - Multi-domain data labeling and annotation tool with standardized output format
  • Labelimg - Open source graphical image annotation tool written in Python, using QT for the graphical interface and focusing primarily on bounding boxes.
  • makesense.ai - Free to use online tool for labelling photos. Prepared labels can be downloaded in one of multiple supported formats.
  • MedTagger - A collaborative framework for annotating medical datasets using crowdsourcing.
  • OpenLabeling - Open source tool for labelling images with support for labels, edges, as well as image resizing and zooming in.
  • PixelAnnotationTool - Image annotation tool with the ability to "colour" on the images to select labels for segmentation. The process is semi-automated with OpenCV's marker-based watershed algorithm.
  • Semantic Segmentation Editor - Hitachi's Open source tool for labelling camera and LIDAR data.
  • Superintendent - superintendent provides an ipywidget-based interactive labelling tool for your data.
  • VGG Image Annotator (VIA) - A simple and standalone manual annotation software for image, audio and video. VIA runs in a web browser and does not require any installation or setup.
  • Visual Object Tagging Tool (VOTT) - Microsoft's Open Source electron app for labelling videos and images for object detection models (with active learning functionality)

Metadata Management

  • Amundsen - Amundsen is a metadata driven application for improving the productivity of data analysts, data scientists and engineers when interacting with data.
  • Apache Atlas - Apache Atlas framework is an extensible set of core foundational governance services – enabling enterprises to effectively and efficiently meet their compliance requirements within Hadoop and allows integration with the whole enterprise data ecosystem.
  • DataHub - DataHub is LinkedIn's generalized metadata search & discovery tool.
  • Marquez - Marquez is an open source metadata service for the collection, aggregation, and visualization of a data ecosystem's metadata.
  • Metacat - Metacat is a unified metadata exploration API service. Metacat focusses on solving these three problems: 1) Federate views of metadata systems. 2) Allow arbitrary metadata storage about data sets. 3) Metadata discovery.
  • ML Metadata - A library for recording and retrieving metadata associated with ML developer and data scientist workflows. Also TensorFlow ML Metadata.
  • Model Card Toolkit - Streamlines and automates the generation of Model Cards.

Data Storage Optimisation

Function as a Service Frameworks

  • Apache OpenWhisk - Open source, distributed serverless platform that executes functions in response to events at any scale.
  • Fission - (Early Alpha) Serverless functions as a service framework on Kubernetes
  • Hydrosphere Mist - Serverless proxy for Apache Spark clusters
  • Hydrosphere ML Lambda - Open source model management cluster for deploying, serving and monitoring machine learning models and ad-hoc algorithms with a FaaS architecture
  • KNative Serving - Kubernetes based serverless microservices with "scale-to-zero" functionality.
  • Nuclio - A high-performance serverless framework focused on data, I/O, and compute intensive workloads. It is well integrated with popular data science tools, such as Jupyter and Kubeflow; supports a variety of data and streaming sources; and supports execution over CPUs and GPUs
  • OpenFaaS - Serverless functions framework with RESTful API on Kubernetes

Computation load distribution frameworks

  • Apache Spark MLlib - Apache Spark's scalable machine learning library in Java, Scala, Python and R
  • Beam - Apache Beam is a unified programming model for Batch and Streaming https://beam.apache.org/
  • BigDL - Deep learning framework on top of Spark/Hadoop to distribute data and computations across a HDFS system
  • Dask - Distributed parallel processing framework for Pandas and NumPy computations - (Video)
  • DEAP - A novel evolutionary computation framework for rapid prototyping and testing of ideas. It seeks to make algorithms explicit and data structures transparent. It works in perfect harmony with parallelisation mechanisms such as multiprocessing and SCOOP.
  • DeepSpeed - A deep learning optimization library (lightweight PyTorch wrapper) that makes distributed training easy, efficient, and effective.
  • Fiber - Distributed computing library for modern computer clusters from Uber.
  • Hadoop Open Platform-as-a-service (HOPS) - A multi-tenancy open source framework with a RESTful API for data science on Hadoop, which supports Spark and Tensorflow/Keras; it is Python-first and provides many features.
  • Horovod - Uber's distributed training framework for TensorFlow, Keras, and PyTorch
  • NumPyWren - Scientific computing framework built on top of pywren to enable numpy-like distributed computations
  • PyWren - Answers the question of the "cloud button" for Python function execution. It's a framework that abstracts AWS Lambda to enable data scientists to execute any Python function - (Video)
  • PyTorch Lightning - Lightweight PyTorch research framework that allows you to easily scale your models to GPUs and TPUs and use all the latest best practices, without the engineering boilerplate - (Video)
  • Ray - Ray is a flexible, high-performance distributed execution framework for machine learning (VIDEO)
  • Vespa - Vespa is an engine for low-latency computation over large data sets. https://vespa.ai
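
As an illustration of the programming model several of these frameworks offer, here is a minimal Ray sketch that fans a function out across worker processes; the function itself is a toy placeholder.

```python
# Minimal Ray sketch: run a Python function in parallel across worker processes.
import ray

ray.init()  # starts a local cluster; pass an address to join an existing one

@ray.remote
def score_batch(batch_id):
    # placeholder for real work, e.g. scoring a shard of data
    return batch_id * batch_id

futures = [score_batch.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```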

Model serialisation formats

  • Java PMML API - Java libraries for consuming and producing PMML files containing models from different frameworks.
  • MMdnn - Cross-framework solution to convert, visualize and diagnose deep neural network models.
  • Neural Network Exchange Format (NNEF) - A standard format to store models across Torch, Caffe, TensorFlow, Theano, Chainer, Caffe2, PyTorch, and MXNet
  • ONNX - Open Neural Network Exchange Format
  • PFA - Created by the same organisation as PMML, the Portable Format for Analytics is an emerging standard for statistical models and data transformation engines.
  • PMML - The Predictive Model Markup Language standard in XML - (Video)
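
As an example of cross-framework serialisation, this is a hedged sketch of exporting a PyTorch model to the ONNX format; the model architecture, input shape and file name are arbitrary choices for the example.

```python
# Hypothetical sketch: export a small PyTorch model to ONNX.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

dummy_input = torch.randn(1, 4)  # example input defining the exported graph's shape
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["features"],
    output_names=["logits"],
)
```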

Optimized calculation frameworks

  • cuDF - GPU DataFrame library http://rapids.ai
  • cuML - RAPIDS Machine Learning Library
  • CuPy - NumPy-like API accelerated with CUDA
  • H2O-3 - Fast scalable Machine Learning platform for smarter applications: Deep Learning, Gradient Boosting & XGBoost, Random Forest, Generalized Linear Modeling (Logistic Regression, Elastic Net), K-Means, PCA, Stacked Ensembles, Automatic Machine Learning (AutoML), etc.
  • Modin - Speed up your Pandas workflows by changing a single line of code
  • Numba - A compiler for Python array and numerical functions
  • NumpyGroupies - Optimised tools for group-indexing operations: aggregated sum and more
  • Vulkan Kompute - Blazing fast, lightweight and mobile phone-enabled Vulkan compute framework optimized for advanced GPU data processing usecases.
  • Weld - High-performance runtime for data analytics applications. Interview with Weld’s main contributor
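
To show the kind of speed-up path these frameworks offer, here is a minimal Numba sketch that JIT-compiles a numerical loop; the pairwise-distance function is a toy example, not a recommended implementation.

```python
# Minimal Numba sketch: JIT-compile a tight numerical loop to machine code.
import numpy as np
from numba import njit

@njit
def pairwise_l2(x):
    # naive pairwise Euclidean distances, compiled to native code by Numba
    n = x.shape[0]
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = np.sqrt(np.sum((x[i] - x[j]) ** 2))
    return out

distances = pairwise_l2(np.random.rand(200, 8))  # first call triggers compilation
```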

Data Stream Processing

  • Apache Flink - Open source stream processing framework with powerful stream and batch processing capabilities.
  • Apache Samza - Distributed stream processing framework. It uses Apache Kafka for messaging, and Apache Hadoop YARN to provide fault tolerance, processor isolation, security, and resource management.
  • Brooklin - Distributed system from LinkedIn for streaming data between heterogeneous source and destination systems with high reliability and throughput at scale.
  • Faust - Streaming library built on top of Python's Asyncio library using the async Kafka client, inspired by the Kafka Streams library.
  • Kafka Streams - Kafka client library for building applications and microservices where the input and output are stored in Kafka clusters
  • Spark Streaming - Micro-batch processing for streams using the Apache Spark framework as a backend, supporting stateful exactly-once semantics
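
As a sketch of the stream-processing style these frameworks share, here is a hedged Faust example; the Kafka broker address, topic name and record fields are assumptions, and the worker needs a running Kafka cluster.

```python
# Hypothetical Faust sketch: consume an 'orders' Kafka topic as an async stream.
import faust

app = faust.App("orders-app", broker="kafka://localhost:9092")

class Order(faust.Record):
    account_id: str
    amount: float

orders_topic = app.topic("orders", value_type=Order)

@app.agent(orders_topic)
async def process(orders):
    async for order in orders:
        print(f"account {order.account_id} spent {order.amount}")

if __name__ == "__main__":
    app.main()  # e.g. run with: python stream_app.py worker
```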

Outlier and Anomaly Detection

  • adtk - A Python toolkit for rule-based/unsupervised anomaly detection in time series.
  • Alibi-Detect - Algorithms for outlier and adversarial instance detection, concept drift and metrics.
  • dBoost - Outlier detection in heterogeneous datasets using automatic tuple expansion. Paper.
  • Deequ - A library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
  • Deep Anomaly Detection with Outlier Exposure - Outlier Exposure (OE) is a method for improving anomaly detection performance in deep learning models. Paper
  • PyOD - A Python Toolbox for Scalable Outlier Detection (Anomaly Detection).
  • SUOD (Scalable Unsupervised Outlier Detection) - An Acceleration System for Large-scale Outlier Detection (Anomaly Detection)
  • Tensorflow Data Validation (TFDV) - Library for exploring and validating machine learning data.
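
As a minimal sketch of a typical detector workflow, here is a PyOD Isolation Forest example on synthetic data; the contamination rate and data shapes are arbitrary assumptions.

```python
# Minimal PyOD sketch: fit an Isolation Forest detector and score new points.
import numpy as np
from pyod.models.iforest import IForest

rng = np.random.RandomState(42)
X_train = rng.randn(500, 3)                                   # mostly "normal" points
X_test = np.vstack([rng.randn(10, 3), rng.randn(5, 3) * 6])   # plus a few outliers

detector = IForest(contamination=0.1)
detector.fit(X_train)

labels = detector.predict(X_test)             # 0 = inlier, 1 = outlier
scores = detector.decision_function(X_test)   # higher = more anomalous
print(labels, scores)
```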

Feature Engineering Automation

  • auto-sklearn - Framework to automate algorithm and hyperparameter tuning for sklearn
  • AutoGluon - Automated feature, model, and hyperparameter selection for tabular, image, and text data on top of popular machine learning libraries (Scikit-Learn, LightGBM, CatBoost, PyTorch, MXNet)
  • AutoML-GS - Automatic feature and model search with code generation in Python, on top of common data science libraries (tensorflow, sklearn, etc)
  • automl - Automated feature engineering, feature/model selection, hyperparam. optimisation
  • Colombus - A scalable framework to perform exploratory feature selection implemented in R
  • Feature Engine - Feature-engine is a Python library that contains several transformers to engineer features for use in machine learning models.
  • Featuretools - An open source framework for automated feature engineering
  • keras-tuner - Keras Tuner is an easy-to-use, distributable hyperparameter optimization framework that solves the pain points of performing a hyperparameter search. Keras Tuner makes it easy to define a search space and leverage included algorithms to find the best hyperparameter values.
  • sklearn-deap - Use evolutionary algorithms instead of gridsearch in scikit-learn.
  • TPOT - Automation of sklearn pipeline creation (including feature selection, pre-processor, etc)
  • tsfresh - Automatic extraction of relevant features from time series
  • mljar-supervised - An Automated Machine Learning (AutoML) python package for tabular data. It can handle: Binary Classification, MultiClass Classification and Regression. It provides feature engineering, explanations and markdown reports.
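
As a small example of the AutoML-style search many of these tools automate, here is a hedged TPOT sketch; the generation and population sizes are kept deliberately tiny so the search finishes quickly, which is not a setting you would use for real workloads.

```python
# Hypothetical TPOT sketch: search for a scikit-learn pipeline automatically.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tpot = TPOTClassifier(generations=3, population_size=20, verbosity=2, random_state=0)
tpot.fit(X_train, y_train)

print(tpot.score(X_test, y_test))
tpot.export("best_pipeline.py")  # writes the winning pipeline as plain sklearn code
```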

Feature Stores

  • Butterfree - A tool for building feature stores which allows you to transform your raw data into beautiful features.
  • Feature Store for Machine Learning (FEAST) - Feast (Feature Store) is a tool for managing and serving machine learning features. Feast is the bridge between models and data.
  • Hopsworks Feature Store - Offline/Online Feature Store for ML (Video).
  • Ivory - ivory defines a specification for how to store feature data and provides a set of tools for querying it. It does not provide any tooling for producing feature data in the first place. All ivory commands run as MapReduce jobs, so it is assumed that feature data is maintained on HDFS.
  • Veri - Veri is a feature label store, which allows storing features as keys and labels as values. Querying values is only possible with kNN using features. Veri also supports creating sub-sample spaces of data by default.

Commercial Platforms

  • Algorithmia - Cloud platform to build, deploy and serve machine learning models (Video)
  • allegro ai Enterprise - Automagical open-source ML & DL experiment manager and ML-Ops solution.
  • Amazon SageMaker - End-to-end machine learning development and deployment interface where you are able to build notebooks that use EC2 instances as backend, and then can host models exposed on an API
  • bigml - E2E machine learning platform.
  • cnvrg.io - An end-to-end platform to manage, build and automate machine learning
  • Comet.ml - Machine learning experiment management. Free for open source and students (Video)
  • Cubonacci - The Cubonacci platform manages deployment, versioning, infrastructure, monitoring and lineage for you, eliminating risk and minimizing time-to-market.
  • D2iQ KUDO for Kubeflow - Enterprise machine learning platform that runs in the cloud, on premises (incl. air-gapped), in hybrid environments, or on the edge; based on Kubeflow and open-source Kubernetes Universal Declarative Operators (KUDO).
  • DAGsHub - Community platform for Open Source ML – Manage experiments, data & models and create collaborative ML projects easily.
  • Dataiku - Collaborative data science platform powering both self-service analytics and the operationalization of machine learning models in production.
  • DataRobot - Automated machine learning platform which enables users to build and deploy machine learning models.
  • Datatron - Machine Learning Model Governance Platform for all your AI models in production for large Enterprises.
  • deepsense AIOps - Enhances multi-cloud & data center IT Operations via traffic analysis, risk analysis, anomaly detection, predictive maintenance, root cause analysis, service ticket analysis and event consolidation.
  • Deep Cognition Deep Learning Studio - E2E platform for deep learning.
  • deepsense Safety - AI-driven solution to increase worksite safety via safety procedure checks, threat detection and hazardous zone monitoring.
  • deepsense Quality - Automating laborious quality control tasks.
  • Diffgram - Training Data First platform. Database & Training Data Pipelines for Supervised AI. Integrated with GCP, AWS, Azure and top Annotation Supervision UIs (or use built-in Diffgram UI, or build your own). Plus a growing list of integrated service providers! For Computer Vision, NLP, and Supervised Deep Learning / Machine Learning.
  • Google Cloud Machine Learning Engine - Managed service that enables developers and data scientists to build and bring machine learning models to production.
  • H2O Driverless AI - Automates key machine learning tasks, delivering automatic feature engineering, model validation, model tuning, model selection and deployment, machine learning interpretability, bring your own recipe, time-series and automatic pipeline generation for model scoring. (Video)
  • IBM Watson Machine Learning - Create, train, and deploy self-learning models using an automated, collaborative workflow.
  • Iguazio Data Science Platform - Bring your Data Science to life by automating MLOps with end-to-end machine learning pipelines, transforming AI projects into real-world business outcomes, and supporting real-time performance at enterprise scale.
  • Labelbox - Image labelling service with support for semantic segmentation (brush & superpixels), bounding boxes and nested classifications.
  • Logical Clocks Hopsworks - Enterprise version of Hopsworks with a Feature Store and scale-out ML pipeline design and operation.
  • MCenter - MLOps platform automates the deployment, ongoing optimization, and governance of machine learning applications in production.
  • Microsoft Azure Machine Learning service - Build, train, and deploy models from the cloud to the edge.
  • MLJAR - Platform for rapid prototyping, developing and deploying machine learning models.
  • neptune.ml - Community-friendly platform supporting data scientists in creating and sharing machine learning models. Neptune facilitates teamwork, infrastructure management, models comparison and reproducibility.
  • Prodigy - Active learning-based data annotation. Allows you to train a model and pick the most 'uncertain' samples for labelling from an unlabeled pool.
  • Scribble Enrich - Customizable, auditable, privacy-aware feature store. It is designed to help mid-sized data teams gain trust in the data that they use for training and analysis, and to support emerging needs such as drift computation and bias assessment.
  • SKIL - Software distribution designed to help enterprise IT teams manage, deploy, and retrain machine learning models at scale.
  • Skytree 16.0 - End to end machine learning platform (Video)
  • Spell - Flexible end-to-end MLOps / Machine Learning Platform. (Video)
  • Superb AI - ML DataOps platform providing various tools to build, label, manage and iterate on training data.
  • Talend Studio
  • Valohai - Machine orchestration, version control and pipeline management for deep learning.
