This is an experiment in working with LLMs (Large Language Models) to create a language tutor and assistant that aids language learning. It's an initiative by the team at Composable Prompts as part of our ongoing research into programming models for LLMs.
We've used a large language model (specifically GPT-4) and adopted a multi-agent/multi-task approach to implement features and behaviors. Structured data I/O integrates the LLM's processing capabilities into the app: deterministic tasks are delegated to the application's business logic, while the LLM handles non-deterministic tasks, and its results are translated into the application through a structured schema. States are crafted using Interactions, which consist of parametrized Prompt Segments and Input Data: essentially, data-schema responses from the LLM.
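The core of this pattern is funneling the LLM's free-form generation through a schema the business logic can trust. A minimal sketch of that translation step, using only the standard library — the field names and the `parse_interaction_result` helper are illustrative assumptions, not the app's actual schema:

```python
import json

# Hypothetical schema for a Tutor interaction result; the real field
# names used by the app are not documented in this README.
EXPECTED_FIELDS = {"reply": str, "corrections": list, "level": str}

def parse_interaction_result(raw: str) -> dict:
    """Parse an LLM response into structured data the app can act on.

    The LLM is prompted to answer with JSON matching EXPECTED_FIELDS,
    so non-deterministic generation is funneled into a deterministic
    shape that the application's business logic can consume.
    """
    data = json.loads(raw)
    for name, expected_type in EXPECTED_FIELDS.items():
        if name not in data:
            raise ValueError(f"missing field: {name}")
        if not isinstance(data[name], expected_type):
            raise TypeError(f"field {name!r} should be {expected_type.__name__}")
    return data

# A well-formed model response passes validation:
raw = '{"reply": "Bonjour !", "corrections": [], "level": "A2"}'
result = parse_interaction_result(raw)
print(result["reply"])  # Bonjour !
```

A malformed response raises immediately, so schema violations surface at the boundary instead of propagating into application state.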
The primary objective was to explore potential programming models for LLM-driven learning systems. The applications span general education (like math and science) to corporate training and knowledge maintenance.
Here are the core features of the application:
Engage in readings and writings that mimic a conversational style with the Tutor, which is powered by the LLM. This is useful for practicing real-life situations or for simple writing practice.

- Converse on any topic in real time.
- Live Checking: Each user message undergoes a background check. Indicators highlight correctness, and users can click to view explanations.
- Live Dictionary: Click on words to receive instant definitions, generated by the Tutor and presented in the user's language.
- Explanations: The Tutor can break down messages, aiding comprehension.
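The live-checking flow above can be sketched as a mapping from a structured check result to a UI indicator. The `correct`/`explanation` fields and the `check_indicator` helper are assumptions for illustration; the app's actual check schema is not shown in this README:

```python
# Hypothetical shape of the background check returned for each user
# message: a correctness flag plus a clickable explanation.
def check_indicator(check: dict) -> str:
    """Map a structured check result to a simple correctness indicator."""
    if check.get("correct", False):
        return "✓"
    # An incorrect message still carries an explanation the user
    # can click through to.
    return "✗ " + check.get("explanation", "see details")

print(check_indicator({"correct": True}))  # ✓
print(check_indicator({"correct": False,
                       "explanation": "gender agreement"}))  # ✗ gender agreement
```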
System and safety prompts ensure the LLM stays aligned and retains context. Tinkering with these prompts can alter engagement and entertainment levels.
Generate diverse content, from stories to practical procedures, leveraging dynamic suggestions by the Tutor or user specifications.
The available choices are dynamic, shaped by the LLM based on certain seeds in the prompts (which could be derived from a user's interests or learning goals).
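One way to picture this seeding: the prompt template is parametrized with the user's interests and level, so the LLM's suggestions vary per user. A minimal sketch — the template wording, CEFR levels, and `build_story_prompt` name are illustrative, not the app's actual Prompt Segments:

```python
# A minimal sketch of seeding a content-generation prompt from a
# user's interests and proficiency level (assumed here to be CEFR).
def build_story_prompt(interests: list[str], level: str) -> str:
    seeds = ", ".join(interests)
    return (
        f"Write a short story at CEFR level {level} "
        f"for a learner interested in: {seeds}. "
        "Also offer three alternative topics drawn from these interests."
    )

prompt = build_story_prompt(["cooking", "hiking"], "B1")
print(prompt)
```

Because the seeds come from user data rather than a fixed list, the choices the Tutor presents stay dynamic without any change to application code.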
- Contextual Explanations & Dictionary: Seek explanations for any paragraph or word to aid comprehension.
- Questions & Answers: Request content-specific questions, tailored by the Tutor to the user's proficiency.
- Dynamic Corrections: Receive feedback on your answers and gain insights into your performance.
When learning a new language, it's invaluable to verify sentences before use and to delve deeper than mere translations. The Explain & Verify feature serves this purpose.
- Verify: Assess content for naturalness, tone, level, and potential refinements.
- Explain: Understand content intricacies, get insights on language proficiency, and suggestions on responses or continuity.
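The Verify dimensions above lend themselves to a typed structured response. A sketch of what such a result might look like on the application side — the README names the dimensions (naturalness, tone, level, refinements) but not the exact schema, so these field names are assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical structured response for the Verify task.
@dataclass
class VerifyResult:
    natural: bool
    tone: str
    level: str
    refinements: list[str] = field(default_factory=list)

def summarize(v: VerifyResult) -> str:
    """Render a one-line verdict for display to the learner."""
    status = "natural" if v.natural else "unnatural"
    return f"{status}, {v.tone} tone, level {v.level}"

v = VerifyResult(natural=True, tone="casual", level="B2",
                 refinements=["consider 'nevertheless' for variety"])
print(summarize(v))  # natural, casual tone, level B2
```

Keeping the verdict and the refinement suggestions as separate fields lets the UI show a quick indicator up front and the detailed refinements on demand.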
To test, deploy this repository on a platform like Vercel, Cloud Run, or Atlas (our recommended production setup).
Or, give it a try here (login via any Google Account).
At Dengenlabs, our focus is on advancing Composable Prompts. This app serves as a testament to the possibilities of programming models for LLMs.
We aim to migrate the app to Composable Prompts soon, demonstrating its potential to supercharge apps with LLMs.
We've also outlined future exploration areas (contributions are welcome!):
- Log the words users engage with as a time series, identifying areas for practice and creating an LLM feedback loop.
- Integrate images into stories and conversations for a richer experience.
- Detect user proficiency based on their writings to monitor and showcase progress.
- Allow the Tutor to dynamically curate a learning course tailored to user preferences and expertise.
Do share your suggestions or potential enhancements by raising issues.
Reach out to us at contact@composableprompts.com. We're always eager to engage and discuss!