This module provides the real-time riddle generation service for the main ScavengerHunt backend, generating riddles from landmark metadata.
The riddle_generator.png image illustrates the riddle generation process. Below is a brief explanation of each part shown in the image:
- Backend: Retrieves landmark metadata from the backend.
- Build System Prompt: Constructs the system prompt, including story background and task information.
- Build User Prompt: Constructs the user prompt based on user input and landmark information.
- Calculate Riddle Beat: Calculates the riddle beat, determining the rhythm and structure of the riddle.
- Decode Difficulty: Decodes the difficulty index, adjusting the complexity of the riddle.
- User Customization: Allows user customization options, enabling users to adjust the riddle's style and language.
- Compose Story Context: Composes the story context, ensuring the riddle aligns with the overall storyline.
- Generate Riddle: Finally generates the riddle, combining system and user prompts.
Install the required Python packages:

```bash
pip install -r requirements.txt
```

Core dependencies:

- `Flask==3.1.1` - Web framework for API endpoints
- `pymongo==4.13.2` - MongoDB client for landmark metadata access
- `python-dotenv==1.1.1` - Environment variable management
- `openai==1.97.1` - OpenAI API client (for ChatGPT mode)
- `lmstudio==1.4.1` - LM Studio client (for local LLM mode)
Note: The full requirements.txt includes all transitive dependencies for reproducible builds.
Create a `.env` file in the project root with:

```env
# MongoDB Configuration
MONGO_URL=mongodb://localhost:27017
MONGO_DATABASE=scavengerhunt
MONGO_COLLECTION=landmark_metadata

# OpenAI Configuration (for ChatGPT mode)
OPENAI_API_KEY=your_openai_api_key_here

# LM Studio Configuration (for local mode)
LMSTUDIO_BASE_URL=http://localhost:1234/v1
LMSTUDIO_MODEL=llama-3.2-1b-instruct

# Flask Configuration
FLASK_PORT=5001
FLASK_DEBUG=true
FLASK_HOST=0.0.0.0

# Model Selection (local or chatgpt)
DEFAULT_MODEL=local
```

Ensure that the `.env` file is correctly configured with your MongoDB, OpenAI, and LM Studio settings. The Flask application uses these settings to connect to the required services and run the server.
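As a point of reference, here is a minimal sketch of how these settings might be read at startup with `python-dotenv`; the variable names mirror the `.env` keys above, but the module's actual loading code may differ.

```python
import os
from dotenv import load_dotenv

# Load variables from the .env file into the process environment
load_dotenv()

MONGO_URL = os.getenv("MONGO_URL", "mongodb://localhost:27017")
MONGO_DATABASE = os.getenv("MONGO_DATABASE", "scavengerhunt")
MONGO_COLLECTION = os.getenv("MONGO_COLLECTION", "landmark_metadata")
FLASK_HOST = os.getenv("FLASK_HOST", "0.0.0.0")
FLASK_PORT = int(os.getenv("FLASK_PORT", "5001"))
FLASK_DEBUG = os.getenv("FLASK_DEBUG", "false").lower() == "true"
DEFAULT_MODEL = os.getenv("DEFAULT_MODEL", "local")  # "local" or "chatgpt"
```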
- Start MongoDB (if using a local instance):

  ```bash
  mongod --dbpath /path/to/your/db
  ```

- Start LM Studio (if using the local model):

  - Launch the LM Studio application
  - Load the `llama-3.2-1b-instruct` model
  - Start the local server on `http://localhost:1234`

- Activate the virtual environment:

  ```bash
  source .venv/bin/activate
  ```

- Start the Flask application:

  ```bash
  python app.py
  ```

  Alternative: use the startup script (recommended):

  ```bash
  chmod +x start.sh
  ./start.sh
  ```
The service runs on port 5001 by default and provides:

- `POST /generate-riddle` - Generate riddles with story continuity
- `POST /reset-session` - Reset session state
```bash
curl -X POST http://localhost:5001/generate-riddle \
  -H "Content-Type: application/json" \
  -d '{
    "sessionId": "unique-session-123",
    "landmarkId": "686fe2fd5513908b37be306d",
    "language": "English",
    "style": "Medieval",
    "difficulty": 50,
    "puzzlePool": ["686fe2fd5513908b37be306d", "686fe2fd5513908b37be306f"]
  }'
```

Common Issues:
- MongoDB Connection Failed

  `pymongo.errors.ServerSelectionTimeoutError`

  - Ensure MongoDB is running: `brew services start mongodb-community` (macOS) or `sudo systemctl start mongod` (Linux)
  - Check the MongoDB URL in the `.env` file
  - Verify the database contains the `landmark_metadata` collection

- LM Studio Connection Failed

  `lmstudio.exceptions.ConnectionError`

  - Start the LM Studio application
  - Load the `llama-3.2-1b-instruct` model
  - Enable the local server in LM Studio settings

- OpenAI API Errors

  `openai.error.AuthenticationError`

  - Verify `OPENAI_API_KEY` in the `.env` file
  - Check API key permissions and billing status

- Port Already in Use

  `OSError: [Errno 48] Address already in use`

  - Change the port in `app.py`: `app.run(port=5002)`
  - Kill the existing process: `lsof -ti:5001 | xargs kill -9`

- Missing Dependencies

  `ModuleNotFoundError: No module named 'flask'`

  - Activate the virtual environment: `source .venv/bin/activate`
  - Install dependencies: `pip install -r requirements.txt`
- Implemented a standalone `RiddleGenerator` class that loads metadata from the `landmark_metadata` MongoDB collection and generates riddles using a local `llama-3.2-1b-instruct` model.

- The class accesses the following fields from the landmark entry:

  - `name`
  - `meta.description.history`
  - `meta.description.architecture`
  - `meta.description.significance`

- These fields are used to construct a user prompt formatted as bullet-pointed sections (sketched further below).

- A custom system prompt is generated based on the riddle style (e.g. medieval) and language (default: English), following the LM Studio chat template format.
```python
# Template-based prompt generation
template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system}
<|eot_id|><|start_header_id|>user<|end_header_id|>
{user}
<|eot_id|><|start_header_id|>assistant<|end_header_id|>"""

system_prompt = f"""
You are a master riddle writer. Writing only riddles for landmark in following format with no extra information nor specifying landmark name.
\\begin{{quote}}
Written in {language}
Create a {style} riddle based on the information about {self.meta["name"]}. Use the following details as context:
\\textbf{{History}}: ...
\\textbf{{Architecture}}: ...
\\textbf{{Significance}}: ...
\\textbf{{Length}}: No more than 5 lines.
The riddle should be concise, engaging, and reflect a {style} tone.
\\end{{quote}}
"""
```

- The generated riddle is stored in the instance variable `.riddle` and returned to the calling layer.
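To make the flow concrete, here is a minimal sketch of how the user prompt could be assembled from the metadata fields listed above and combined with the chat template before being sent to the local model. The `_generateUserPrompt` helper, the example `meta` document, the simplified stand-ins for the template and system prompt, and the `lmstudio` convenience calls (`lms.llm(...)` / `.complete(...)`) are illustrative assumptions; the class's actual code may differ.

```python
import lmstudio as lms

meta = {  # example shape of a landmark document returned by loadMetaFromDB()
    "name": "Glucksman Gallery",
    "meta": {"description": {"history": "...", "architecture": "...", "significance": "..."}},
}

def _generateUserPrompt(meta: dict) -> str:
    """Format the landmark metadata fields as bullet-pointed sections."""
    desc = meta.get("meta", {}).get("description", {})
    return "\n".join([
        f"- Name: {meta.get('name', '')}",
        f"- History: {desc.get('history', '')}",
        f"- Architecture: {desc.get('architecture', '')}",
        f"- Significance: {desc.get('significance', '')}",
    ])

# Simplified stand-ins for the template and system prompt defined above
system = "You are a master riddle writer. Write a short riddle in a medieval tone, in English."
template = ("<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n{system}"
            "<|eot_id|><|start_header_id|>user<|end_header_id|>\n{user}"
            "<|eot_id|><|start_header_id|>assistant<|end_header_id|>")

# Fill the chat template and run a text completion on the model loaded in LM Studio;
# the resulting text is what RiddleGenerator stores in .riddle
prompt = template.format(system=system, user=_generateUserPrompt(meta))
model = lms.llm("llama-3.2-1b-instruct")
result = model.complete(prompt)
print(result)  # the completion text
```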
- Created `app.py` as a lightweight HTTP wrapper around the `RiddleGenerator` class.
- Exposed a single POST route `/generate-riddle`, which:
  - Accepts a JSON payload containing `landmarkId`
  - Loads metadata from MongoDB
  - Runs the local LLM model to produce a riddle
  - Returns a JSON response with the generated riddle
@app.route("/generate-riddle", methods=["POST"])
def generate_riddle():
data = request.get_json()
lm_id = data.get("landmarkId")
generator = RiddleGenerator()
generator.loadMetaFromDB(lm_id).generateRiddle()
return jsonify({
"status": "ok",
"riddle": generator.riddle
})-
Startup instructions:
source .venv/bin/activate python app.py
The riddle API produces a JSON object suitable for direct consumption by frontend or game clients:

```json
{
  "status": "ok",
  "riddle": "Through storms I stood with graceful art,\nThree floors deep I hold your heart..."
}
```

- If the `landmarkId` is missing or not found, appropriate HTTP status codes (400 / 404) can be returned (recommended for future versions; see the sketch below).
- If `description` metadata is empty or missing, the response defaults to: `"No riddle generated"`
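A possible shape for that error handling, as a sketch only: the `message` payload field and the assumption that `loadMetaFromDB()` signals a missing landmark by returning `None` are illustrative, not the current behaviour.

```python
# Within app.py, reusing the existing Flask app, request, jsonify, and RiddleGenerator imports
@app.route("/generate-riddle", methods=["POST"])
def generate_riddle():
    data = request.get_json(silent=True) or {}
    lm_id = data.get("landmarkId")
    if not lm_id:
        # 400 Bad Request: the client did not supply a landmarkId
        return jsonify({"status": "error", "message": "landmarkId is required"}), 400

    generator = RiddleGenerator()
    if generator.loadMetaFromDB(lm_id) is None:  # assumed sentinel for "not found"
        # 404 Not Found: no landmark document matches the given id
        return jsonify({"status": "error", "message": "landmark not found"}), 404

    generator.generateRiddle()
    return jsonify({"status": "ok", "riddle": generator.riddle})
```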
This module introduces a clean separation of concerns:

- Metadata access is encapsulated in `loadMetaFromDB()`
- Prompt formatting and content construction are embedded in `generateRiddle()`
- LLM integration is abstracted behind `lmstudio.llm(...)`
- The microservice interface via Flask enables easy orchestration from Java or the frontend

This architecture supports future enhancements such as multilingual riddles, user-personalized difficulty, or caching of outputs.
- Introduced two foundational classes:

  - `EpistemicStateManager`: models player knowledge by tracking solved landmark IDs and extracting semantic themes.
  - `EpistemicPlanner`: evaluates the novelty of a landmark by comparing its metadata to the player's known types and topics.

- The planner returns a JSON object indicating:

  - `difficulty`: easy / medium / hard
  - `novelty_score`: continuous value between 0 and 1
  - Optional `hint` for unfamiliar keywords

- Current logic uses keyword overlap between landmark metadata and player history as a proxy for familiarity (see the sketch below).
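A minimal sketch of that keyword-overlap heuristic and the planner's output shape; the tokenisation, the difficulty cut-offs, and the hint wording are assumptions rather than the classes' actual logic.

```python
def novelty_score(landmark_keywords: set[str], player_keywords: set[str]) -> float:
    """1.0 = completely unfamiliar landmark, 0.0 = fully familiar."""
    if not landmark_keywords:
        return 0.0
    familiarity = len(landmark_keywords & player_keywords) / len(landmark_keywords)
    return round(1.0 - familiarity, 2)

def plan(landmark_keywords: set[str], player_keywords: set[str]) -> dict:
    score = novelty_score(landmark_keywords, player_keywords)
    # Map the continuous score onto the three difficulty bands (assumed cut-offs)
    difficulty = "easy" if score < 0.34 else "medium" if score < 0.67 else "hard"
    result = {"difficulty": difficulty, "novelty_score": score}
    unfamiliar = landmark_keywords - player_keywords
    if unfamiliar:  # optional hint for keywords the player has not met before
        result["hint"] = f"New topic for this player: {sorted(unfamiliar)[0]}"
    return result

# Example: a player who has only seen galleries meets a chapel
print(plan({"chapel", "stained-glass", "celtic"}, {"gallery", "modern-art"}))
```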
- Discussed with the supervisor the potential to integrate an ELO-based difficulty control system, where:

  - Each landmark is assigned a difficulty rating.
  - `EpistemicPlanner` can combine player state and the ELO gap to estimate challenge level.
  - Landmark difficulty should be explicitly stored or inferred dynamically.
  - Epistemic reasoning can then adaptively personalize riddles based on player proficiency and landmark complexity.

- Created an initial prototype in `elo_calculator_demo.py` to simulate adaptive scoring based on player-landmark interaction.
This module serves as a testing sandbox for game balancing and numerical behavior visualization. The final production version will be fully integrated into the main Spring Boot application as part of the core game logic.
- Implemented key components (a simplified sketch follows the list):

  - `calculateElo(player, landmark, minutes_used, correct)`: updates ratings for both player and landmark using modified ELO logic based on High-Speed High-Stakes (HSHS) scoring.
  - `_dynamicK(player, landmark)`: computes K-factors using Glickman-style uncertainty terms (`U_player`, `U_landmark`).
  - `_updateUncertainty(current_U, days_since_last_play)`: increases uncertainty over time; stabilizes with repeated play.
  - `_hshs(...)`: defines a time-sensitive scoring function incorporating response correctness and time efficiency; computes the expected score analytically using a discrimination-adjusted logistic model.
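A simplified sketch of the scoring and update structure described above; the time limit, the score normalisation, and the decay/growth constants are assumptions, and the analytic expected-score computation inside `_hshs` is omitted here.

```python
def hshs_score(correct: bool, minutes_used: float, time_limit: float = 5.0) -> float:
    """High-Speed High-Stakes scoring: quick correct answers score near +1,
    quick wrong answers near -1, and slow answers trend toward 0."""
    sign = 1.0 if correct else -1.0
    return sign * max(time_limit - minutes_used, 0.0) / time_limit

def update_ratings(player_rating: float, landmark_rating: float,
                   k_player: float, k_landmark: float,
                   observed: float, expected: float) -> tuple[float, float]:
    """ELO-style update: player and landmark ratings move in opposite directions."""
    delta = observed - expected
    return player_rating + k_player * delta, landmark_rating - k_landmark * delta

def update_uncertainty(current_u: float, days_since_last_play: float,
                       decay: float = 0.05, growth: float = 0.01) -> float:
    """Uncertainty shrinks with each play and grows with inactivity, clamped to [0, 1]."""
    return min(max(current_u - decay + growth * days_since_last_play, 0.0), 1.0)
```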
- Notes:

  - Initial ratings default to 10; uncertainty defaults to 0.5.
  - All values are clamped to [0, 1] for stability and interpretability.
  - The current implementation serves as a testbed; it is not yet connected to the main session state or database.

- Planned integration:

  - `PuzzleManager`: to support difficulty-based target selection.
  - `EpistemicPlanner`: to leverage uncertainty in knowledge tracking and goal planning.
Objective: Enable the generator to accept `story_context` (including `beat_tag` and `previous_riddles`) and use it in the prompt.

Current State:

- `generateRiddle()` already supports a `story_context` parameter:

  ```python
  def generateRiddle(self, language="English", style="medieval", difficulty=50, story_context=None):
  ```

- The following handling has been added to `_generateSystemPrompt()`:

  ```python
  if isinstance(story_context, str) and story_context.strip():
      context_prompt = f" following the ongoing story context: {story_context.strip()}"
  else:
      context_prompt = "."
  ```

- Backward-compatible: when `story_context` is None, the original logic applies.

Next Action:

- Enhance the prompt guidance of `story_context` (explicitly require continuation of previous text and plot progression).
Objective: Manage the multi-riddle story progress of a game session, maintain context internally, and keep the API as a single `/generate-riddle` endpoint.

Current State:

- Memory structure:

  ```python
  self.sessions[session_id] = {
      "total_slots": len(puzzle_pool),
      "slot_index": 0,
      "beat_plan": [...],
      "riddle_history": [],
      "puzzle_pool": puzzle_pool
  }
  ```

- `start_episode()`:

  - If `session_id` does not exist → create new
  - If it exists → reuse
  - Return `session_id`

- `serve_riddle()` (a condensed sketch of this flow follows below):

  - Get the current `beat_tag` and previous riddles
  - Generate `story_context` (natural language)
  - Call `RiddleGenerator.generateRiddle()`
  - Save to `riddle_history`, increment `slot_index`
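Condensed into code, the flow described above could look roughly like this; the beat-plan construction, the `story_context` wording, and the use of `uuid` for new session ids are placeholders, not the module's actual implementation.

```python
import uuid

class StoryWeaver:
    def __init__(self, generator):
        self.generator = generator  # a RiddleGenerator instance
        self.sessions = {}

    def start_episode(self, puzzle_pool, session_id=None):
        # Reuse the session if the client supplies a known id; otherwise create one
        if session_id and session_id in self.sessions:
            return session_id
        session_id = session_id or str(uuid.uuid4())
        middle = ["development"] * max(len(puzzle_pool) - 2, 0)
        beat_plan = ["opening"] + middle + ["ending"] if len(puzzle_pool) > 1 else ["opening"]
        self.sessions[session_id] = {
            "total_slots": len(puzzle_pool),
            "slot_index": 0,
            "beat_plan": beat_plan,
            "riddle_history": [],
            "puzzle_pool": puzzle_pool,
        }
        return session_id

    def serve_riddle(self, language, style, difficulty, session_id):
        state = self.sessions[session_id]
        beat_tag = state["beat_plan"][state["slot_index"]]
        landmark_id = state["puzzle_pool"][state["slot_index"]]
        # Natural-language story context handed to the generator
        story_context = (
            f"Story beat: {beat_tag}. "
            f"Previous riddles so far: {' / '.join(state['riddle_history']) or 'none'}."
        )
        self.generator.loadMetaFromDB(landmark_id)
        riddle = self.generator.generateRiddle(
            language=language, style=style, difficulty=difficulty,
            story_context=story_context,
        )
        state["riddle_history"].append(riddle)
        state["slot_index"] += 1
        return {"riddle": riddle, "beat_tag": beat_tag}
```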
Objective: Complete the new-game and continuation logic behind a single `/generate-riddle` interface.

Current State:

```python
@app.route("/generate-riddle", methods=["POST"])
def generate_riddle():
    session_id = story_weaver.start_episode(puzzle_pool, session_id_from_client)
    riddle_info = story_weaver.serve_riddle(language, style, difficulty, session_id)
    return jsonify({
        "status": "ok",
        "session_id": session_id,
        "riddle": riddle_info["riddle"]
    })
```

- The first call passes `puzzle_pool` (which can contain multiple landmarkIds) and returns the first riddle plus a `session_id`
- Subsequent calls only pass the `session_id` → StoryWeaver automatically generates the next riddle
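For orientation, the two-call protocol from a client's perspective might look like this, using the `requests` library purely for illustration (it is not one of the service's own dependencies); whether the language/style/difficulty parameters must be repeated on follow-up calls is an assumption.

```python
import requests

BASE = "http://localhost:5001"

# First call: send the full puzzle pool; the service creates a session
first = requests.post(f"{BASE}/generate-riddle", json={
    "puzzlePool": ["686fe2fd5513908b37be306d", "686fe2fd5513908b37be306f"],
    "language": "English",
    "style": "Medieval",
    "difficulty": 50,
}).json()
session_id = first["session_id"]
print(first["riddle"])

# Subsequent call: only the session_id is required; StoryWeaver serves the next riddle
nxt = requests.post(f"{BASE}/generate-riddle", json={
    "sessionId": session_id,
    "language": "English",
    "style": "Medieval",
    "difficulty": 50,
}).json()
print(nxt["riddle"])
```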
Test Sequence with 3 Landmark IDs:
686fe2fd5513908b37be306d → Glucksman Gallery
686fe2fd5513908b37be306f → Honan Collegiate Chapel
686fe2fd5513908b37be3071 → The Quad / Aula Maxima
Beat Tag: opening

Riddle:
In a land where scholars tread, a fortress of art stands proud and wed,
By architects' hands, it claimed its fame, with awards and honors to its name.
Thrice-leveled tower, cloaked in praise, where modern muses spend their days.
Baptized by a president, its walls have seen, floods and closures, yet its spirit's keen.
What is this beacon by the college green?
Answer: Glucksman Gallery
Beat Tag: development

Riddle:
In a land where saints and scholars once did dwell,
A chapel stands, her tales of faith to tell.
With walls enrobed in Celtic art's embrace,
Her glass stained windows saints in light do trace.
Built not from tithe but from a maiden's grace.
Answer: Honan Collegiate Chapel
Beat Tag: ending

Riddle:
In a realm where wisdom's flame is fed,
A Gothic guardian stands, its spires point high;
Limestone bones clad in academic pride,
Where scholars gather, and young minds are led.
Behold, where history and learning wed.
Answer: The Quad / Aula Maxima
Observations:

- The sequence of three riddles is correct, but the story-context connection is not strong
- Need to enhance prompt guidance for "continuing previous text and advancing the plot"
- Future additions to `story_context` could include:
  - A global motif
  - Narrative supplementary explanations corresponding to the `beat_tag`
  - Important elements from the previous riddle (`link_to`)
Code Changes:

- Added new `story_context` construction logic in `serve_riddle()`, including:
  - A global story seed (`state['story_seed']`)
  - Narrative function descriptions corresponding to the `beat_tag` (opening / development / ending)
  - An explicit requirement to "use the current landmark's characteristics for completely new descriptions" to avoid repetitive wording
  - A retained summary of previous riddles for context continuation

- `_generateSystemPrompt()` now uses the passed `story_context` directly; the redundant `context_prompt` variable was removed so that narrative arc information is maintained in one centralized place.

- This way the first riddle introduces the main quest, and subsequent riddles reference and advance the quest plot while maintaining fresh wording and imagery.
Beat Tag: opening

Riddle:
In a land where saints and scholars once did dwell,
A chapel stands, her tales of faith to tell.
With walls enrobed in Celtic art's embrace,
Her glass stained windows saints in light do trace.
Built not from tithe but from a maiden's grace.
Answer: Glucksman Gallery
Beat Tag: development

Riddle:
Beneath the gaze of saints in glass arrayed,
Where Celtic knots and Romanesque embrace,
The mosaic floor, with zodiac displayed,
Holds the next clue, a celestial trace.
Seek where the sun meets stars in morning's grace.
Answer: Honan Collegiate Chapel
Beat Tag: ending

Riddle:
In spires that pierce the sky with Gothic grace,
Where limestone towers guard learned lore's place,
Beneath arches pointed as scholars' thought,
Here lies the relic, by keen minds long sought.
In halls of echoes, past whispers the truth.
Answer: The Quad / Aula Maxima
Observations:

- The sequence of three riddles matches the beat_tag plan, and the plot structure follows the opening → development → ending rhythm
- Continuity is improved compared to the previous run: all three riddles revolve around the same type of background (academic, artistic, religious) and maintain a consistent story atmosphere
- Wording is diversified, with no rigid repetition of the previous riddle's phrasing, while core imagery (knowledge, faith, art) is preserved
- Can be further optimized:
  - In the opening, directly embed the "quest motivation" or "core artifact" to add suspense
  - In the development, introduce plot twists or unexpected clues to increase tension
  - In the ending, clearly reveal the quest conclusion to enhance the sense of completion
