- Kyoto University
- Japan
- https://speed1313.github.io
- @strayer_13
Stars
TensorFlow code and pre-trained models for BERT
Easily turn large English text datasets into Japanese text datasets using open LLMs.
PyTorch library for fast transformer implementations
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery
A PyTorch native library for large model training
A resource repository for machine unlearning in large language models
aiXplain enables python programmers to add AI functions to their software.
Easily turn large sets of image URLs into an image dataset. Can download, resize and package 100M URLs in 20h on one machine.
A high-throughput and memory-efficient inference and serving engine for LLMs
Text classification with BERT (2024 edition)
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
An implementation of "Subspace Representations for Soft Set Operations and Sentence Similarities" (NAACL 2024)
Fast and memory-efficient exact attention
Really Fast End-to-End Jax RL Implementations
Clean single-file implementation of offline RL algorithms in JAX
A simple, performant and scalable Jax LLM!
Fully documented PyTorch implementation of the Equilibrium Propagation algorithm.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Deep Reinforcement Learning: Zero to Hero!