🍃 Machine learning
Aim 💫 — An easy-to-use & supercharged open-source experiment tracker.
Flexible and powerful tensor operations for readable and reliable code (for PyTorch, JAX, TensorFlow, and others); a tensor-rearrangement sketch follows this list.
Easily turn large sets of image URLs into an image dataset. Can download, resize, and package 100M URLs in 20 hours on one machine.
Visualizer for neural network, deep learning and machine learning models
An open-source academic paper management tool.
Dear PyGui: A fast and powerful Graphical User Interface Toolkit for Python with minimal dependencies
Convert Machine Learning Code Between Frameworks
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
⏰ AI conference deadline countdowns
Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work! (An interface sketch follows this list.)
A tool from the MLNLP community for better paper searching. Fully-automated scripts for collecting AI-related papers.
⚡VoltaML is a lightweight library to convert and run your ML/DL models in high-performance inference runtimes like TensorRT, TorchScript, ONNX, and TVM.
The AI developer platform. Use Weights & Biases to train and fine-tune models, and manage models from experimentation to production.
On-Device Training Under 256KB Memory [NeurIPS'22]
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image.
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Easily share permanent links to ChatGPT conversations with your friends
📈 Implementation of eight evaluation metrics to assess the similarity between two images. The eight metrics are: RMSE, PSNR, SSIM, ISSM, FSIM, SRE, SAM, and UIQ.
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. (A pipeline sketch follows this list.)
Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022).
A Collection of Variational Autoencoders (VAE) in PyTorch.
Pretrain and finetune any AI model of any size on multiple GPUs and TPUs with zero code changes.
Convert a TensorFlow model to a PyTorch model via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks.
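
The tensor-operations entry above appears to describe einops. Below is a minimal sketch, assuming the einops package with a PyTorch backend; the tensor shapes and patterns are illustrative, not taken from that repo.

```python
# Minimal einops sketch (assumes `pip install einops torch`); shapes are illustrative.
import torch
from einops import rearrange, reduce

x = torch.randn(8, 3, 32, 32)  # a batch of 8 RGB images, 32x32 each

# Split every image into 8x8 patches and flatten each patch: shape (8, 16, 192)
patches = rearrange(x, "b c (h p1) (w p2) -> b (h w) (p1 p2 c)", p1=8, p2=8)

# Global average pooling written as a named reduction: shape (8, 3)
pooled = reduce(x, "b c h w -> b c", "mean")

print(patches.shape, pooled.shape)
```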
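The "build and share machine learning apps" entry appears to describe Gradio. Here is a minimal sketch of its `Interface` pattern, assuming the gradio package is installed; `greet` is a stand-in for a real model call.

```python
# Minimal Gradio sketch (assumes `pip install gradio`); `greet` is a stand-in for a model.
import gradio as gr

def greet(name: str) -> str:
    return f"Hello, {name}!"

# Wrap the function in a text-in, text-out web interface.
demo = gr.Interface(fn=greet, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()  # serves a local web UI for the wrapped function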
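For the 🤗 Transformers entry, a minimal sketch of the `pipeline` API, assuming the transformers package plus a PyTorch backend are installed; the example sentence is arbitrary and the default model is downloaded on first use.

```python
# Minimal 🤗 Transformers sketch (assumes `pip install transformers torch`).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default pretrained model on first use
print(classifier("Tracking experiments with this setup has been painless."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```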