This repository collects resources on advances in Natural Language Processing (NLP), with a focus on Large Language Models (LLMs) and parameter-efficient finetuning techniques.
- Yann Dubois: Scalable Evaluation of Large Language Models - video
- Stanford CS229 | Machine Learning | Building Large Language Models - video
This section covers recent work on parameter-efficient finetuning of large language models: techniques that reduce the number of parameters that must be updated during training. A minimal LoRA sketch follows the list below.
- Understanding Parameter-Efficient Finetuning of Large Language Models: From Prefix Tuning to LLaMA-Adapters
- Prefix-Tuning: Optimizing Continuous Prompts for Generation
- Parameter-Efficient LLM Finetuning With Low-Rank Adaptation (LoRA)
- Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
- Finetuning LLMs with LoRA and QLoRA: Insights from Hundreds of Experiments
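To make the LoRA idea concrete, here is a minimal sketch assuming a plain PyTorch linear layer; the names (`LoRALinear`, `rank`, `alpha`) are illustrative and not tied to any of the libraries above. The pretrained weight is frozen, and only the two low-rank factors are trained:

```python
# Minimal LoRA sketch: a frozen linear layer plus a trainable low-rank update.
# Illustrative only; not the API of any specific LoRA library.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        self.base.bias.requires_grad_(False)
        # Low-rank factors: only these are updated during finetuning.
        # B starts at zero so the initial update B @ A is zero.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W x + scaling * (B A) x — frozen base path plus trainable LoRA path.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(768, 768, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 12,288 vs. 590,592 in the frozen base layer
```

The parameter count printed at the end shows the core trade-off: the trainable low-rank update is roughly 2% of the size of the frozen base weight at rank 8.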
This section focuses on integrating semantic search and retrieval into the LLM generation process, known as Retrieval-Augmented Generation (RAG). A minimal retrieval sketch follows the list below.
- langchain-ai/rag-from-scratch
- Tutorial: Building your own retrieval-augmented generation system - llms deep dive
- Hands-On-Large-Language-Models - Chapter 8 - Semantic Search and Retrieval-Augmented Generation
- Building an LLM open source search engine in 100 lines using LangChain and Ray
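As a rough illustration of the retrieval step, here is a minimal sketch assuming unit-normalized embeddings; `embed()` is a toy stand-in, and a real system would use an embedding model and a vector store such as those covered in the resources above:

```python
# Minimal RAG retrieval step: embed documents, retrieve top-k by cosine
# similarity, and assemble an augmented prompt for the LLM.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy deterministic embedding for illustration only; swap in a real
    # embedding model (e.g. via sentence-transformers) in practice.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

documents = [
    "LoRA injects trainable low-rank matrices into frozen transformer weights.",
    "Retrieval-augmented generation grounds LLM outputs in retrieved documents.",
    "Prefix tuning prepends trainable continuous vectors to each layer's input.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scores = doc_vectors @ q                # cosine similarity (unit-norm vectors)
    top = np.argsort(scores)[::-1][:k]      # indices of the k best matches
    return [documents[i] for i in top]

query = "How does RAG reduce hallucinations?"
context = "\n".join(retrieve(query))
prompt = f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)  # this augmented prompt is what gets sent to the LLM
```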
This section gathers resources on LLM-based agents, including multimodal agents; a toy agent-loop sketch follows the list below.
- Awesome Agents
- Awesome Large Multimodal Agents
- Large Multimodal Agents: A Survey
- Agents: Build real-time multimodal AI applications
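As a rough sketch of the agent loop these resources describe, the following toy example alternates between tool calls and observations until the model produces a final answer; `fake_llm()` is a hypothetical stand-in for a real model call, and the tool-registry pattern is the point here:

```python
# Minimal ReAct-style agent loop: the model picks a tool, the loop executes it,
# and the observation is fed back until the model emits a final answer.
import json

def calculator(expression: str) -> str:
    # Deliberately restricted eval for this demo only.
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def fake_llm(history: list[str]) -> str:
    # Hypothetical stand-in policy: call the calculator once, then answer.
    if not any("Observation" in h for h in history):
        return json.dumps({"action": "calculator", "input": "21 * 2"})
    return json.dumps({"action": "final_answer", "input": history[-1]})

def run_agent(question: str, max_steps: int = 5) -> str:
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        step = json.loads(fake_llm(history))
        if step["action"] == "final_answer":
            return step["input"]
        result = TOOLS[step["action"]](step["input"])  # execute the chosen tool
        history.append(f"Observation: {result}")
    return "No answer within step budget."

print(run_agent("What is 21 * 2?"))  # -> Observation: 42
```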
This section links to various code repositories and Jupyter Notebooks that provide practical implementations and examples of the techniques discussed in this README.