Teddy Ferdinan, Jan Kocoń, Przemysław Kazienko
https://arxiv.org/abs/2402.09147
- pip3 install virtualenv
- python3 -m venv env
- For Windows: .\env\Scripts\activate
- For Linux: source env/bin/activate
- To exit the virtual environment: deactivate
- sudo apt-get update
- sudo apt-get -y install python-dev libxml2-dev libxslt-dev
- pip3 install nltk
- curl https://raw.githubusercontent.com/codelucas/newspaper/master/download_corpora.py | python3
- pip3 uninstall -y lxml
- CFLAGS="-O0" pip3 install lxml[html_clean]
- pip3 install -r requirements.txt
- python3 -m spacy download en_core_web_sm
- pip3 install --upgrade torch torchvision torchaudio
- pip3 install datasets==2.16
- pip3 install bitsandbytes loralib peft trl
- pip3 install packaging
- pip3 uninstall -y ninja && pip install ninja
- pip3 install flash-attn --no-build-isolation
- pip3 install tiktoken
- pip3 install huggingface_hub
- Run an experiment with python3 -m DIRNAME.FILENAME, for example:
- python3 -m experiment_open_generation.experiment_IntelNeuralChat_open
- python3 -m experiment_induced_generation.experiment_IntelNeuralChat_induced
- python3 -m experiment_oracle_selected.experiment_IntelNeuralChat_random
- python3 -m experiment_external_prompt.experiment_IntelNeuralChat_external_prompt
The results directory contains several subdirectories, which correspond to the experiments with different methods:
- open_generation: Self-Questioning with open question generation
- induced_generation: Self-Questioning with induced question generation
- oracle_selected: Self-Questioning with oracle-selected topic
- external_prompt: Self-Questioning with external topic
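For instance, the files produced by each method can be listed with a short script like the minimal sketch below (the results directory name follows the structure above; the exact pickle file names depend on the model and run, so the glob pattern is only illustrative):

```python
from pathlib import Path

# Assumed layout: one pickle file per run inside results/<method>/
results_root = Path("results")
methods = ["open_generation", "induced_generation", "oracle_selected", "external_prompt"]

for method in methods:
    files = sorted((results_root / method).glob("*"))
    print(method, [f.name for f in files])
```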
Each file in these subdirectories is a pickle file; loading it gives a dictionary of the experiment output. The structure of the dictionary is as follows:
- "pretrained_model_name": str -- The name of the model used, as listed on HuggingFace
- "curiosity_score": float -- The calculated Curiosity Score
- "knowledge_limit_awareness_score": float -- The calculated Knowledge-Limit Awareness Score
- "brevity_coefficient": float -- The calculated brevity coefficient
- "self_learning_capability_score": float -- The calculated SLC Score
- "proposed_questions": List[str] -- List of all questions the model proposed; the length is equal to len(prompts_with_hallucination) plus len(prompts_with_no_hallucination)
- "proposed_questions_labels": List[int] -- List of question labels corresponding to proposed_questions, generated from hdbscan clustering during Knowledge-Limit Awareness Score calculation
- "prompts_with_hallucination": List[Dict] -- List of questions with hallucination, i.e. Q_H in the paper
> "topics": str -- The topics, either proposed by the model or given from an external source, in a self-questioning iteration
> "topics_embedding": numpy.ndarray -- The embedding of the string containing the proposed topics
> "prompt": str -- The question proposed by the model in a self-questioning iteration
> "prompt_embedding": numpy.ndarray -- The embedding of the proposed question
> "passage": str -- The main passage produced by the model for hallucination scoring
> "passage_sentences": List[str] -- The main passage but split into sentences, i.e. a list of sentences, which is actually used in the hallucination scoring
> "samples": List[str] -- The samples produced by the model for hallucination scoring
> "sentence_scores": numpy.ndarray -- The output from the hallucination scorer, which is an array of sentence-level hallucination scores
> "average_score": float -- The average of sentence_scores, so the passage-level hallucination score
- "prompts_with_no_hallucination": List[Dict] -- List of questions with no hallucination, i.e. Q_NH in the paper
> "topics": str -- The topics, either proposed by the model or given from an external source, in a self-questioning iteration
> "topics_embedding": numpy.ndarray -- The embedding of the string containing the proposed topics
> "prompt": str -- The question proposed by the model in a self-questioning iteration
> "prompt_embedding": numpy.ndarray -- The embedding of the proposed question
> "passage": str -- The main passage produced by the model for hallucination scoring
> "passage_sentences": List[str] -- The main passage but split into sentences, i.e. a list of sentences, which is actually used in the hallucination scoring
> "samples": List[str] -- The samples produced by the model for hallucination scoring
> "sentence_scores": numpy.ndarray -- The output from the hallucination scorer, which is an array of sentence-level hallucination scores
> "average_score": float -- The average of sentence_scores, so the passage-level hallucination score
This repository was created to allow reproduction of the results in our paper. All work is intended solely for scientific research. We are not responsible for the actions of other parties who use this repository.
@misc{ferdinan2024unknown,
title={Into the Unknown: Self-Learning Large Language Models},
author={Teddy Ferdinan and Jan Kocoń and Przemysław Kazienko},
year={2024},
eprint={2402.09147},
archivePrefix={arXiv},
primaryClass={cs.AI}
}