Name: TestCaseAssistant
Type: Tool
Tools: github.com/gptscript-ai/clio/context
Context: dynamic-instructions
Chat: true
You are an assistant that helps me build extensive test plans for testing the gptscript CLI.
Rules
1. Use the output from `gptscript --help` as the starting point for constructing test cases.
2. You also have access to the full documentation of the product in /Users/sangeethahariharan/gptscript/docs/docs, which should be used to explore all possible test cases and get full coverage when asked to generate a test plan. Make sure to come up with at least 1 test case covering every gptscript functionality defined in all the sections of this documentation. Make sure to use all files under the documentation directory recursively.
3. When writing a test case, use the following structure:
Description: General description of what the test case intends to test
Steps: Enumerate all the steps needed to execute this test case. Include a prerequisite section if needed.
Desired Result: The expected result when following the steps from the Steps section
---
Name: dynamic-instructions
Type: Tool
#!/usr/bin/env bash
gptscript --help || true
---
!metadata:dynamic-instructions:
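The `|| true` guard in the dynamic-instructions tool body is worth noting: it forces the command line's exit status to 0, so a failing `gptscript --help` (for example, when the binary is not on PATH) does not make the context tool itself fail. A minimal sketch of the pattern, using `false` as a stand-in for a failing command:

```shell
# `|| true` swallows the failure of the left-hand command,
# so the overall exit status of the line is 0.
false || true
echo "exit status: $?"
# → prints "exit status: 0"
```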
Steps to reproduce the problem:
1. Chat with this assistant and ask it to "generate test plan for all the features present in the docs".
2. When asked to confirm Read calls, choose the "allow all" option.
3. Notice that the LLM processes and generates test cases for each of the files it reads under the docs directory. The LLM call takes a while to complete, but at the end we are presented with tests from only the last file that was processed.

Part of the stack trace view showing that 7 files were read and used for the test generation:
Chatted with TestCaseAssistant
Input
"generate test plan for all the features present in the docs"
Messages
To generate a comprehensive test plan for all the features present in the `gptscript` documentation, I'll need to recursively explore all the files under the `/Users/sangeethahariharan/gptscript/docs/docs` directory. This will ensure that we cover every functionality defined in the documentation. Let's start by listing the contents of the documentation directory to understand its structure.

### Step 1: List Contents of Documentation Directory

I'll list the contents of the `/Users/sangeethahariharan/gptscript/docs/docs` directory to identify all the files and subdirectories.

```json
{ "dir": "/Users/sangeethahariharan/gptscript/docs/docs" }
```

### Step 2: Read Documentation Files

I'll read the contents of each file in the documentation directory to extract the features and functionalities described.

### Step 3: Generate Test Cases

Based on the extracted features and functionalities, I'll generate detailed test cases.

Let's proceed with Step 1.

<tool call> ls -> {"dir":"/Users/sangeethahariharan/gptscript/docs/docs"}
The documentation directory contains the following files and subdirectories:

1. `01-overview.md`
2. `02-examples/`
3. `03-tools/`
4. `04-command-line-reference/`
5. `05-alternative-model-providers.md`
6. `06-credentials.md`
7. `09-faqs.md`

I'll now read the contents of each file and subdirectory to extract the features and functionalities described.

### Step 2: Read Documentation Files

#### 1. Read `01-overview.md`

```json
{ "filename": "/Users/sangeethahariharan/gptscript/docs/docs/01-overview.md" }
```

#### 2. List Contents of `02-examples/`

```json
{ "dir": "/Users/sangeethahariharan/gptscript/docs/docs/02-examples" }
```

#### 3. List Contents of `03-tools/`

```json
{ "dir": "/Users/sangeethahariharan/gptscript/docs/docs/03-tools" }
```

#### 4. List Contents of `04-command-line-reference/`

```json
{ "dir": "/Users/sangeethahariharan/gptscript/docs/docs/04-command-line-reference" }
```

#### 5. Read `05-alternative-model-providers.md`

```json
{ "filename": "/Users/sangeethahariharan/gptscript/docs/docs/05-alternative-model-providers.md" }
```

#### 6. Read `06-credentials.md`

```json
{ "filename": "/Users/sangeethahariharan/gptscript/docs/docs/06-credentials.md" }
```

#### 7. Read `09-faqs.md`

```json
{ "filename": "/Users/sangeethahariharan/gptscript/docs/docs/09-faqs.md" }
```

Let's start by reading the contents of `01-overview.md`.

<tool call> read -> {"filename":"/Users/sangeethahariharan/gptscript/docs/docs/01-overview.md"}
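The recursive walk the assistant performs above (an `ls` per subdirectory, a `read` per file) can be approximated in the shell with `find`. The paths below are a hypothetical miniature of the real docs layout:

```shell
# Build a tiny stand-in for the docs tree (paths are hypothetical),
# then enumerate every markdown file recursively, as rule 2 requires.
mkdir -p /tmp/docs-demo/02-examples
touch /tmp/docs-demo/01-overview.md /tmp/docs-demo/02-examples/cli.md
find /tmp/docs-demo -name '*.md' | sort
# → prints both .md paths, one per line
```

The bug report is that the assistant walks every file this way, yet the final answer only reflects the last file read.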