Changing the order of sentences in a prompt changes the quality of the output. #178
Labels: human-verified, llm, llm-evaluation, llm-experiments, prompt, prompt-engineering
Prompt Ordering Experiment: Impact on Linux Terminal Command Outputs
Overview
This experiment investigates how the ordering of sentences in a prompt affects output quality when interacting with a language model instructed to generate Linux terminal commands. The model is told to respond with valid commands for a Manjaro (Arch) Linux system, drawing on information up to its 2023 knowledge cutoff.
Methodology
The same sentences were provided to the language model in different orders to observe how the generated output varied. The primary task was to write a bash terminal command to check the local IP address. The prompts placed the task description and the system context in different positions relative to each other.
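As a rough sketch of the procedure, the script below assembles the same two sentences in different orders and prints each variant. The sentence wording and the `llm-cli` client are illustrative placeholders, not the exact text or tooling used in the experiment.

```bash
#!/usr/bin/env bash
# The same two sentences, reused across all prompt variants
# (illustrative wording, not the experiment's exact text).
CONTEXT="You are a terminal of a Manjaro (Arch) Linux system."
TASK="Write a bash terminal command to check the local IP address."

# Variants 1 and 3: system context before the task.
# Variant 2: task before the system context.
prompts=(
  "$CONTEXT $TASK"
  "$TASK $CONTEXT"
  "$CONTEXT
$TASK"
)

for i in "${!prompts[@]}"; do
  printf -- '--- Prompt %d ---\n%s\n' "$((i + 1))" "${prompts[$i]}"
  # 'llm-cli' is a hypothetical client; substitute the actual model call.
  # llm-cli "${prompts[$i]}"
done
```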
Results
The following prompt-response pairs were generated during the experiment:
Prompt 1 (system context before the task)
Response 1
Prompt 2 (task before the system context)
Response 2
Prompt 3 (system context before the task)
Response 3
Analysis
The experiment demonstrates that the ordering of sentences within a prompt can lead to different outcomes. Notably, Response 2 contains an incorrect command (`ipconfig`) for the specified Linux system, which suggests that the model may have been influenced by the lack of immediate context about the operating system. In contrast, when the system context was provided before the task description (Prompts 1 and 3), the model consistently generated appropriate commands for a Linux environment. This indicates that the model's performance is sensitive to prompt structure and that providing context upfront leads to more accurate responses.
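For reference, `ipconfig` is a Windows utility; on a Manjaro (Arch) system the local IP address is typically checked with the iproute2 `ip` tool, for example:

```bash
# List addresses on all network interfaces (iproute2 ships with Manjaro).
ip addr show

# Restrict the output to IPv4 addresses only.
ip -4 addr show
```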
Conclusion
The ordering of information in a prompt can significantly affect the quality of the output from a language model. For tasks requiring specific contextual knowledge, such as generating Linux terminal commands, it is beneficial to provide the relevant context before the task description to guide the model towards the correct domain and improve the accuracy of its responses.
Recommendations
When prompting a language model for system-specific output, state the operating system and environment before the task description. Placing the context first steers the model toward the correct domain and avoids defaults from other platforms, such as the Windows `ipconfig` command observed in Response 2.