Chapter 7: Instruction Finetuning

This folder contains utility code that can be used for model evaluation.

 

Evaluating Instruction Responses Using the OpenAI API

  • The llm-instruction-eval-openai.ipynb notebook uses OpenAI's GPT-4 to evaluate responses generated by instruction-finetuned models. It expects a JSON file containing entries in the following format:
{
    "instruction": "What is the atomic number of helium?",
    "input": "",
    "output": "The atomic number of helium is 2.",               # <-- The target given in the test set
    "model 1 response": "\nThe atomic number of helium is 2.0.", # <-- Response by an LLM
    "model 2 response": "\nThe atomic number of helium is 3."    # <-- Response by a 2nd LLM
},
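As a rough illustration of how an entry in this format can be turned into a scoring prompt for a judge model, here is a minimal sketch. The function names (`format_input`, `build_eval_prompt`) and the exact prompt wording are illustrative assumptions, not the notebook's verbatim code:

```python
# Hypothetical example entry in the JSON format shown above
entry = {
    "instruction": "What is the atomic number of helium?",
    "input": "",
    "output": "The atomic number of helium is 2.",
    "model 1 response": "\nThe atomic number of helium is 2.0.",
}


def format_input(entry):
    # Combine the instruction and optional input field into a single
    # prompt string (assumed Alpaca-style formatting)
    instruction_text = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request."
        f"\n\n### Instruction:\n{entry['instruction']}"
    )
    input_text = f"\n\n### Input:\n{entry['input']}" if entry["input"] else ""
    return instruction_text + input_text


def build_eval_prompt(entry, response_key="model 1 response"):
    # Ask the judge model (e.g., GPT-4) to grade one model response
    # against the target output on a 0-100 scale
    return (
        f"Given the input `{format_input(entry)}` "
        f"and correct output `{entry['output']}`, "
        f"score the model response `{entry[response_key]}` "
        "on a scale from 0 to 100, where 100 is the best score. "
        "Respond with the integer number only."
    )


print(build_eval_prompt(entry))
```

The resulting string would then be sent to the judge model via the OpenAI API, and the returned integer collected as the score for that response.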

 

Evaluating Instruction Responses Locally Using Ollama