# Chapter 7: Finetuning to Follow Instructions

 

## Main Chapter Code

 

## Bonus Materials

- `02_dataset-utilities` contains utility code that can be used for preparing an instruction dataset (see the prompt-formatting sketch after this list)
- `03_model-evaluation` contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API
- `04_preference-tuning-with-dpo` implements code for preference finetuning with Direct Preference Optimization (DPO); a loss sketch follows below
- `05_dataset-generation` contains code to generate and improve synthetic datasets for instruction finetuning
- `06_user_interface` implements an interactive user interface for interacting with the pretrained LLM
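
To give a sense of what "preparing an instruction dataset" involves, below is a minimal sketch of Alpaca-style prompt formatting, which turns an instruction record into a single training prompt. The function name `format_input` and the exact template wording are illustrative assumptions, not necessarily the utilities shipped in `02_dataset-utilities`:

```python
# Minimal sketch of Alpaca-style prompt formatting for instruction finetuning.
# The function name and template below are illustrative assumptions; see
# 02_dataset-utilities for the actual utilities used with this chapter.

def format_input(entry: dict) -> str:
    """Convert one instruction record into a single prompt string."""
    instruction_text = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request."
        f"\n\n### Instruction:\n{entry['instruction']}"
    )
    # The 'input' field is optional; include it only when it is non-empty.
    input_text = f"\n\n### Input:\n{entry['input']}" if entry.get("input") else ""
    return instruction_text + input_text


example = {
    "instruction": "Rewrite the sentence in passive voice.",
    "input": "The chef cooked the meal.",
    "output": "The meal was cooked by the chef.",
}
prompt = format_input(example)
# During finetuning, the target text ("### Response:\n..." plus the 'output'
# field) is appended to this prompt to form the full training sequence.
```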
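
For the preference-tuning bonus material, the core idea of DPO is to increase the likelihood margin of preferred over rejected responses relative to a frozen reference model. The snippet below is a hedged sketch of the standard DPO objective in PyTorch, not necessarily the exact implementation in `04_preference-tuning-with-dpo`; the function name and argument layout are assumptions for illustration:

```python
import torch
import torch.nn.functional as F


def dpo_loss(
    policy_chosen_logprobs: torch.Tensor,    # log p_theta(chosen | prompt)
    policy_rejected_logprobs: torch.Tensor,  # log p_theta(rejected | prompt)
    ref_chosen_logprobs: torch.Tensor,       # log p_ref(chosen | prompt)
    ref_rejected_logprobs: torch.Tensor,     # log p_ref(rejected | prompt)
    beta: float = 0.1,
) -> torch.Tensor:
    """Standard DPO objective: -log sigmoid(beta * (chosen margin - rejected margin))."""
    chosen_margin = policy_chosen_logprobs - ref_chosen_logprobs
    rejected_margin = policy_rejected_logprobs - ref_rejected_logprobs
    logits = beta * (chosen_margin - rejected_margin)
    return -F.logsigmoid(logits).mean()
```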