diff --git a/skills/architecture/ABOUT.md b/skills/architecture/ABOUT.md new file mode 100644 index 000000000..865d72872 --- /dev/null +++ b/skills/architecture/ABOUT.md @@ -0,0 +1,20 @@ +# Architecture Skills - Attribution + +This skill was derived from agent patterns in the [Amplifier](https://github.com/microsoft/amplifier) project. + +**Source Repository:** +- Name: Amplifier +- URL: https://github.com/microsoft/amplifier +- Commit: 2adb63f858e7d760e188197c8e8d4c1ef721e2a6 +- Date: 2025-10-10 + +## Skills Derived from Amplifier Agents + +**From ambiguity-guardian agent:** +- preserving-productive-tensions - Recognizing when disagreements reveal valuable context, preserving multiple valid approaches instead of forcing premature resolution + +## What Was Adapted + +The ambiguity-guardian agent preserves productive contradictions and navigates uncertainty as valuable features of knowledge. This skill extracts the core pattern-recognition capability: distinguishing when tensions should be preserved (context-dependent trade-offs) vs resolved (clear technical superiority). + +Adapted as scannable guide with symptom-based triggers ("going back and forth", "keep changing mind") and practical preservation patterns (configuration, parallel implementations, documented trade-offs). diff --git a/skills/architecture/preserving-productive-tensions/SKILL.md b/skills/architecture/preserving-productive-tensions/SKILL.md new file mode 100644 index 000000000..3b189b13e --- /dev/null +++ b/skills/architecture/preserving-productive-tensions/SKILL.md @@ -0,0 +1,152 @@ +--- +name: Preserving Productive Tensions +description: Recognize when disagreements reveal valuable context, preserve multiple valid approaches instead of forcing premature resolution +when_to_use: Going back and forth between options. Both approaches seem equally good. Keep changing your mind. About to ask "which is better?" but sense both optimize for different things. 
Stakeholders want conflicting things (both valid). +version: 1.0.0 +--- + +# Preserving Productive Tensions + +## Overview + +Some tensions aren't problems to solve - they're valuable information to preserve. When multiple approaches are genuinely valid in different contexts, forcing a choice destroys flexibility. + +**Core principle:** Preserve tensions that reveal context-dependence. Force resolution only when necessary. + +## Recognizing Productive Tensions + +**A tension is productive when:** +- Both approaches optimize for different valid priorities (cost vs latency, simplicity vs features) +- The "better" choice depends on deployment context, not technical superiority +- Different users/deployments would choose differently +- The trade-off is real and won't disappear with clever engineering +- Stakeholders have conflicting valid concerns + +**A tension needs resolution when:** +- Implementation cost of preserving both is prohibitive +- The approaches fundamentally conflict (can't coexist) +- There's clear technical superiority for this specific use case +- It's a one-way door (choice locks architecture) +- Preserving both adds complexity without value + +## Preservation Patterns + +### Pattern 1: Configuration +Make the choice configurable rather than baked into architecture: + +```python +class Config: + mode: Literal["optimize_cost", "optimize_latency"] + # Each mode gets clean, simple implementation +``` + +**When to use:** Both approaches are architecturally compatible, switching is runtime decision + +### Pattern 2: Parallel Implementations +Maintain both as separate clean modules with shared contract: + +```python +# processor/batch.py - optimizes for cost +# processor/stream.py - optimizes for latency +# Both implement: def process(data) -> Result +``` + +**When to use:** Approaches diverge significantly, but share same interface + +### Pattern 3: Documented Trade-off +Capture the tension explicitly in documentation/decision records: + +```markdown +## 
Unresolved Tension: Authentication Strategy + +**Option A: JWT** - Stateless, scales easily, but token revocation is hard +**Option B: Sessions** - Easy revocation, but requires shared state + +**Why unresolved:** Different deployments need different trade-offs +**Decision deferred to:** Deployment configuration +**Review trigger:** If 80% of deployments choose one option +``` + +**When to use:** Can't preserve both in code, but need to document the choice was deliberate + +## Red Flags - You're Forcing Resolution + +- Asking "which is best?" when both are valid +- "We need to pick one" without explaining why +- Choosing based on your preference vs user context +- Resolving tensions to "make progress" when preserving them IS progress +- Forcing consensus when diversity is valuable + +**All of these mean: STOP. Consider preserving the tension.** + +## When to Force Resolution + +**You SHOULD force resolution when:** + +1. **Implementation cost is prohibitive** + - Building/maintaining both would slow development significantly + - Team doesn't have bandwidth for parallel approaches + +2. **Fundamental conflict** + - Approaches make contradictory architectural assumptions + - Can't cleanly separate concerns + +3. **Clear technical superiority** + - One approach is objectively better for this specific context + - Not "I prefer X" but "X solves our constraints, Y doesn't" + +4. **One-way door** + - Choice locks us into an architecture + - Migration between options would be expensive + +5. **Simplicity requires choice** + - Preserving both genuinely adds complexity + - YAGNI: Don't build both if we only need one + +**Ask explicitly:** "Should I pick one, or preserve both as options?" 
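When preservation is the right call, Patterns 1 and 2 above compose naturally: a shared contract plus a configuration switch. A minimal sketch, assuming a hypothetical batch/stream pair (all names are illustrative, not from the source):

```python
from typing import Literal, Protocol

class Processor(Protocol):
    """Shared contract that both implementations satisfy."""
    def process(self, data: list[str]) -> list[str]: ...

class BatchProcessor:
    """Optimizes for cost: accumulate input, process in one pass."""
    def process(self, data: list[str]) -> list[str]:
        return [item.upper() for item in data]

class StreamProcessor:
    """Optimizes for latency: handle items as they arrive."""
    def process(self, data: list[str]) -> list[str]:
        return [item.upper() for item in data]  # same contract, different internals

def make_processor(mode: Literal["optimize_cost", "optimize_latency"]) -> Processor:
    """The tension lives in configuration, not in the architecture."""
    return BatchProcessor() if mode == "optimize_cost" else StreamProcessor()
```

Each deployment picks a mode; neither implementation needs to know the other exists, and the trade-off stays visible at the configuration boundary.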
+ +## Documentation Format + +When preserving tensions, document clearly: + +```markdown +## Tension: [Name] + +**Context:** [Why this tension exists] + +**Option A:** [Approach] +- Optimizes for: [Priority] +- Trade-off: [Cost] +- Best when: [Context] + +**Option B:** [Approach] +- Optimizes for: [Different priority] +- Trade-off: [Different cost] +- Best when: [Different context] + +**Preservation strategy:** [Configuration/Parallel/Documented] + +**Resolution trigger:** [Conditions that would force choosing one] +``` + +## Examples + +### Productive Tension (Preserve) +"Should we optimize for cost or latency?" +- **Answer:** Make it configurable - different deployments need different trade-offs + +### Technical Decision (Resolve) +"Should we use SSE or WebSockets?" +- **Answer:** SSE - we only need one-way communication, simpler implementation + +### Business Decision (Defer) +"Should we support offline mode?" +- **Answer:** Don't preserve both - ask stakeholder to decide based on user needs + +## Remember + +- Tensions between valid priorities are features, not bugs +- Premature consensus destroys valuable flexibility +- Configuration > forced choice (when reasonable) +- Document trade-offs explicitly +- Resolution is okay when justified diff --git a/skills/collaboration/brainstorming/SKILL.md b/skills/collaboration/brainstorming/SKILL.md index 4a9f6ca5e..1b75c381a 100644 --- a/skills/collaboration/brainstorming/SKILL.md +++ b/skills/collaboration/brainstorming/SKILL.md @@ -2,7 +2,7 @@ name: Brainstorming Ideas Into Designs description: Interactive idea refinement using Socratic method to develop fully-formed designs when_to_use: When your human partner says "I've got an idea", "Let's make/build/create", "I want to implement/add", "What if we". When starting design for complex feature. Before writing implementation plans. When idea needs refinement and exploration. 
ACTIVATE THIS AUTOMATICALLY when your human partner describes a feature or project idea - don't wait for /brainstorm command. -version: 2.0.0 +version: 2.1.0 --- # Brainstorming Ideas Into Designs @@ -24,7 +24,7 @@ Transform rough ideas into fully-formed designs through structured questioning a - Gather: Purpose, constraints, success criteria ### Phase 2: Exploration -- Propose 2-3 different approaches (reference skills/coding/exploring-alternatives) +- Propose 2-3 different approaches - For each: Core architecture, trade-offs, complexity assessment - Ask your human partner which approach resonates @@ -48,9 +48,28 @@ When your human partner confirms (any affirmative response): - Switch to skills/collaboration/writing-plans skill - Create detailed plan in the worktree +## When to Revisit Earlier Phases + +**You can and should go backward when:** +- Partner reveals new constraint during Phase 2 or 3 → Return to Phase 1 to understand it +- Validation shows fundamental gap in requirements → Return to Phase 1 +- Partner questions approach during Phase 3 → Return to Phase 2 to explore alternatives +- Something doesn't make sense → Go back and clarify + +**Don't force forward linearly** when going backward would give better results. 
+ +## Related Skills + +**During exploration:** +- When approaches have genuine trade-offs: skills/architecture/preserving-productive-tensions + +**Before proposing changes to existing code:** +- Understand why it exists: skills/research/tracing-knowledge-lineages + ## Remember - One question per message during Phase 1 -- Apply YAGNI ruthlessly (reference skills/architecture/reducing-complexity) +- Apply YAGNI ruthlessly - Explore 2-3 alternatives before settling - Present incrementally, validate as you go +- Go backward when needed - flexibility > rigid progression - Announce skill usage at start diff --git a/skills/collaboration/executing-plans/SKILL.md b/skills/collaboration/executing-plans/SKILL.md index 0aae0ea6e..a2f96e005 100644 --- a/skills/collaboration/executing-plans/SKILL.md +++ b/skills/collaboration/executing-plans/SKILL.md @@ -2,7 +2,7 @@ name: Executing Plans description: Execute detailed plans in batches with review checkpoints when_to_use: When have a complete implementation plan to execute. When implementing in separate session from planning. When your human partner points you to a plan file to implement. -version: 2.0.0 +version: 2.1.0 --- # Executing Plans @@ -51,9 +51,28 @@ After all tasks complete and verified: - Switch to skills/collaboration/finishing-a-development-branch - Follow that skill to verify tests, present options, execute choice +## When to Stop and Ask for Help + +**STOP executing immediately when:** +- Hit a blocker mid-batch (missing dependency, test fails, instruction unclear) +- Plan has critical gaps preventing starting +- You don't understand an instruction +- Verification fails repeatedly + +**Ask for clarification rather than guessing.** + +## When to Revisit Earlier Steps + +**Return to Review (Step 1) when:** +- Partner updates the plan based on your feedback +- Fundamental approach needs rethinking + +**Don't force through blockers** - stop and ask. 
+ ## Remember - Review plan critically first - Follow plan steps exactly - Don't skip verifications - Reference skills when plan says to - Between batches: just report and wait +- Stop when blocked, don't guess diff --git a/skills/problem-solving/ABOUT.md b/skills/problem-solving/ABOUT.md new file mode 100644 index 000000000..fc8a3e34b --- /dev/null +++ b/skills/problem-solving/ABOUT.md @@ -0,0 +1,40 @@ +# Problem-Solving Skills - Attribution + +These skills were derived from agent patterns in the [Amplifier](https://github.com/microsoft/amplifier) project. + +**Source Repository:** +- Name: Amplifier +- URL: https://github.com/microsoft/amplifier +- Commit: 2adb63f858e7d760e188197c8e8d4c1ef721e2a6 +- Date: 2025-10-10 + +## Skills Derived from Amplifier Agents + +**From insight-synthesizer agent:** +- simplification-cascades - Finding insights that eliminate multiple components +- collision-zone-thinking - Forcing unrelated concepts together for breakthroughs +- meta-pattern-recognition - Spotting patterns across 3+ domains +- inversion-exercise - Flipping assumptions to reveal alternatives +- scale-game - Testing at extremes to expose fundamental truths + +**From ambiguity-guardian agent:** +- (architecture) preserving-productive-tensions - Preserving multiple valid approaches + +**From knowledge-archaeologist agent:** +- (research) tracing-knowledge-lineages - Understanding how ideas evolved + +**Dispatch pattern:** +- when-stuck - Maps stuck-symptoms to appropriate technique + +## What Was Adapted + +The amplifier agents are specialized long-lived agents with structured JSON output.
These skills extract the core problem-solving techniques and adapt them as: + +- Scannable quick-reference guides (~60 lines each) +- Symptom-based discovery via when_to_use +- Immediate application without special tooling +- Composable through dispatch pattern + +## Core Insight + +Agent capabilities are domain-agnostic patterns. Whether packaged as "amplifier agent" or "superpowers skill", the underlying technique is the same. We extracted the techniques and made them portable. diff --git a/skills/problem-solving/collision-zone-thinking/SKILL.md b/skills/problem-solving/collision-zone-thinking/SKILL.md new file mode 100644 index 000000000..dd8fff01a --- /dev/null +++ b/skills/problem-solving/collision-zone-thinking/SKILL.md @@ -0,0 +1,62 @@ +--- +name: Collision-Zone Thinking +description: Force unrelated concepts together to discover emergent properties - "What if we treated X like Y?" +when_to_use: Can't find approach that fits your problem. Conventional solutions feel inadequate. Need innovative solution. Stuck thinking inside one domain. Want breakthrough, not incremental improvement. +version: 1.0.0 +--- + +# Collision-Zone Thinking + +## Overview + +Revolutionary insights come from forcing unrelated concepts to collide. Treat X like Y and see what emerges. + +**Core principle:** Deliberate metaphor-mixing generates novel solutions. + +## Quick Reference + +| Stuck On | Try Treating As | Might Discover | +|----------|-----------------|----------------| +| Code organization | DNA/genetics | Mutation testing, evolutionary algorithms | +| Service architecture | Lego bricks | Composable microservices, plug-and-play | +| Data management | Water flow | Streaming, data lakes, flow-based systems | +| Request handling | Postal mail | Message queues, async processing | +| Error handling | Circuit breakers | Fault isolation, graceful degradation | + +## Process + +1. **Pick two unrelated concepts** from different domains +2. 
**Force combination**: "What if we treated [A] like [B]?" +3. **Explore emergent properties**: What new capabilities appear? +4. **Test boundaries**: Where does the metaphor break? +5. **Extract insight**: What did we learn? + +## Example Collision + +**Problem:** Complex distributed system with cascading failures + +**Collision:** "What if we treated services like electrical circuits?" + +**Emergent properties:** +- Circuit breakers (disconnect on overload) +- Fuses (one-time failure protection) +- Ground faults (error isolation) +- Load balancing (current distribution) + +**Where it works:** Preventing cascade failures +**Where it breaks:** Circuits don't have retry logic +**Insight gained:** Failure isolation patterns from electrical engineering + +## Red Flags You Need This + +- "I've tried everything in this domain" +- Solutions feel incremental, not breakthrough +- Stuck in conventional thinking +- Need innovation, not optimization + +## Remember + +- Wild combinations often yield best insights +- Test metaphor boundaries rigorously +- Document even failed collisions (they teach) +- Best source domains: physics, biology, economics, psychology diff --git a/skills/problem-solving/inversion-exercise/SKILL.md b/skills/problem-solving/inversion-exercise/SKILL.md new file mode 100644 index 000000000..529d6fc9c --- /dev/null +++ b/skills/problem-solving/inversion-exercise/SKILL.md @@ -0,0 +1,58 @@ +--- +name: Inversion Exercise +description: Flip core assumptions to reveal hidden constraints and alternative approaches - "what if the opposite were true?" +when_to_use: Stuck on assumptions you can't question. Solution feels forced. "This is how it must be done" thinking. Want to challenge conventional wisdom. Need fresh perspective on problem. +version: 1.0.0 +--- + +# Inversion Exercise + +## Overview + +Flip every assumption and see what still works. Sometimes the opposite reveals the truth. 
+ +**Core principle:** Inversion exposes hidden assumptions and alternative approaches. + +## Quick Reference + +| Normal Assumption | Inverted | What It Reveals | +|-------------------|----------|-----------------| +| Cache to reduce latency | Add latency to enable caching | Debouncing patterns | +| Pull data when needed | Push data before needed | Prefetching, eager loading | +| Handle errors when occur | Make errors impossible | Type systems, contracts | +| Build features users want | Remove features users don't need | Simplicity >> addition | +| Optimize for common case | Optimize for worst case | Resilience patterns | + +## Process + +1. **List core assumptions** - What "must" be true? +2. **Invert each systematically** - "What if opposite were true?" +3. **Explore implications** - What would we do differently? +4. **Find valid inversions** - Which actually work somewhere? + +## Example + +**Problem:** Users complain app is slow + +**Normal approach:** Make everything faster (caching, optimization, CDN) + +**Inverted:** Make things intentionally slower in some places +- Debounce search (add latency → enable better results) +- Rate limit requests (add friction → prevent abuse) +- Lazy load content (delay → reduce initial load) + +**Insight:** Strategic slowness can improve UX + +## Red Flags You Need This + +- "There's only one way to do this" +- Forcing solution that feels wrong +- Can't articulate why approach is necessary +- "This is just how it's done" + +## Remember + +- Not all inversions work (test boundaries) +- Valid inversions reveal context-dependence +- Sometimes opposite is the answer +- Question "must be" statements diff --git a/skills/problem-solving/meta-pattern-recognition/SKILL.md b/skills/problem-solving/meta-pattern-recognition/SKILL.md new file mode 100644 index 000000000..d88dbd85b --- /dev/null +++ b/skills/problem-solving/meta-pattern-recognition/SKILL.md @@ -0,0 +1,54 @@ +--- +name: Meta-Pattern Recognition +description: Spot patterns 
appearing in 3+ domains to find universal principles +when_to_use: Same issue in different parts of codebase. Pattern feels familiar across projects. "Haven't I solved this before?" Different teams solving similar problems. Recurring solution shapes. +version: 1.0.0 +--- + +# Meta-Pattern Recognition + +## Overview + +When the same pattern appears in 3+ domains, it's probably a universal principle worth extracting. + +**Core principle:** Find patterns in how patterns emerge. + +## Quick Reference + +| Pattern Appears In | Abstract Form | Where Else? | +|-------------------|---------------|-------------| +| CPU/DB/HTTP/DNS caching | Store frequently-accessed data closer | LLM prompt caching, CDN | +| Layering (network/storage/compute) | Separate concerns into abstraction levels | Architecture, organization | +| Queuing (message/task/request) | Decouple producer from consumer with buffer | Event systems, async processing | +| Pooling (connection/thread/object) | Reuse expensive resources | Memory management, resource governance | + +## Process + +1. **Spot repetition** - See same shape in 3+ places +2. **Extract abstract form** - Describe independent of any domain +3. **Identify variations** - How does it adapt per domain? +4. **Check applicability** - Where else might this help? + +## Example + +**Pattern spotted:** Rate limiting in API throttling, traffic shaping, circuit breakers, admission control + +**Abstract form:** Bound resource consumption to prevent exhaustion + +**Variation points:** What resource, what limit, what happens when exceeded + +**New application:** LLM token budgets (same pattern - prevent context window exhaustion) + +## Red Flags You're Missing Meta-Patterns + +- "This problem is unique" (probably not) +- Multiple teams independently solving "different" problems identically +- Reinventing wheels across domains +- "Haven't we done something like this?" 
(yes, find it) + +## Remember + +- 3+ domains = likely universal +- Abstract form reveals new applications +- Variations show adaptation points +- Universal patterns are battle-tested diff --git a/skills/problem-solving/scale-game/SKILL.md b/skills/problem-solving/scale-game/SKILL.md new file mode 100644 index 000000000..4b71af360 --- /dev/null +++ b/skills/problem-solving/scale-game/SKILL.md @@ -0,0 +1,63 @@ +--- +name: Scale Game +description: Test at extremes (1000x bigger/smaller, instant/year-long) to expose fundamental truths hidden at normal scales +when_to_use: Unsure if approach will scale. Edge cases unclear. Want to validate architecture. "Will this work at production scale?" Need to find fundamental limits. +version: 1.0.0 +--- + +# Scale Game + +## Overview + +Test your approach at extreme scales to find what breaks and what surprisingly survives. + +**Core principle:** Extremes expose fundamental truths hidden at normal scales. + +## Quick Reference + +| Scale Dimension | Test At Extremes | What It Reveals | +|-----------------|------------------|-----------------| +| Volume | 1 item vs 1B items | Algorithmic complexity limits | +| Speed | Instant vs 1 year | Async requirements, caching needs | +| Users | 1 user vs 1B users | Concurrency issues, resource limits | +| Duration | Milliseconds vs years | Memory leaks, state growth | +| Failure rate | Never fails vs always fails | Error handling adequacy | + +## Process + +1. **Pick dimension** - What could vary extremely? +2. **Test minimum** - What if this was 1000x smaller/faster/fewer? +3. **Test maximum** - What if this was 1000x bigger/slower/more? +4. **Note what breaks** - Where do limits appear? +5. **Note what survives** - What's fundamentally sound? 
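The process above can be turned into a tiny probe harness; a sketch under stated assumptions (the function names are illustrative, not from the source):

```python
import time

def scale_probe(fn, make_input, sizes=(1, 1_000, 100_000)):
    """Run fn at increasingly extreme input sizes and record timings.

    What breaks (blowups, timeouts) and what survives tells you which
    properties of fn are fundamental and which are artifacts of small n.
    """
    timings = {}
    for n in sizes:
        data = make_input(n)
        start = time.perf_counter()
        fn(data)
        timings[n] = time.perf_counter() - start
    return timings

# Membership tests look identical at n=1; at n=100_000 the list version's
# O(n) scans dominate while the set version stays flat.
list_timings = scale_probe(lambda xs: [x in xs for x in xs[-100:]], lambda n: list(range(n)))
set_timings = scale_probe(lambda s: [x in s for x in range(100)], lambda n: set(range(n)))
```

A superlinear jump between the top two sizes is the limit you were looking for; flat timings across three orders of magnitude suggest the approach is fundamentally sound on that dimension.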
+ +## Examples + +### Example 1: Error Handling +**Normal scale:** "Handle errors when they occur" works fine +**At 1B scale:** Error volume overwhelms logging, crashes system +**Reveals:** Need to make errors impossible (type systems) or expect them (chaos engineering) + +### Example 2: Synchronous APIs +**Normal scale:** Direct function calls work +**At global scale:** Network latency makes synchronous calls unusable +**Reveals:** Async/messaging becomes survival requirement, not optimization + +### Example 3: In-Memory State +**Normal duration:** Works for hours/days +**At years:** Memory grows unbounded, eventual crash +**Reveals:** Need persistence or periodic cleanup, can't rely on memory + +## Red Flags You Need This + +- "It works in dev" (but will it work in production?) +- No idea where limits are +- "Should scale fine" (without testing) +- Surprised by production behavior + +## Remember + +- Extremes reveal fundamentals +- What works at one scale fails at another +- Test both directions (bigger AND smaller) +- Use insights to validate architecture early diff --git a/skills/problem-solving/simplification-cascades/SKILL.md b/skills/problem-solving/simplification-cascades/SKILL.md new file mode 100644 index 000000000..33b9d5d31 --- /dev/null +++ b/skills/problem-solving/simplification-cascades/SKILL.md @@ -0,0 +1,76 @@ +--- +name: Simplification Cascades +description: Find one insight that eliminates multiple components - "if this is true, we don't need X, Y, or Z" +when_to_use: Code has many similar-looking implementations. Growing list of special cases. Same concept handled 5 different ways. Excessive configuration. Many if/else branches doing similar things. Complexity spiraling. +version: 1.0.0 +--- + +# Simplification Cascades + +## Overview + +Sometimes one insight eliminates 10 things. Look for the unifying principle that makes multiple components unnecessary. + +**Core principle:** "Everything is a special case of..." 
collapses complexity dramatically. + +## Quick Reference + +| Symptom | Likely Cascade | +|---------|----------------| +| Same thing implemented 5+ ways | Abstract the common pattern | +| Growing special case list | Find the general case | +| Complex rules with exceptions | Find the rule that has no exceptions | +| Excessive config options | Find defaults that work for 95% | + +## The Pattern + +**Look for:** +- Multiple implementations of similar concepts +- Special case handling everywhere +- "We need to handle A, B, C, D differently..." +- Complex rules with many exceptions + +**Ask:** "What if they're all the same thing underneath?" + +## Examples + +### Cascade 1: Stream Abstraction +**Before:** Separate handlers for batch/real-time/file/network data +**Insight:** "All inputs are streams - just different sources" +**After:** One stream processor, multiple stream sources +**Eliminated:** 4 separate implementations + +### Cascade 2: Resource Governance +**Before:** Session tracking, rate limiting, file validation, connection pooling (all separate) +**Insight:** "All are per-entity resource limits" +**After:** One ResourceGovernor with 4 resource types +**Eliminated:** 4 custom enforcement systems + +### Cascade 3: Immutability +**Before:** Defensive copying, locking, cache invalidation, temporal coupling +**Insight:** "Treat everything as immutable data + transformations" +**After:** Functional programming patterns +**Eliminated:** Entire classes of synchronization problems + +## Process + +1. **List the variations** - What's implemented multiple ways? +2. **Find the essence** - What's the same underneath? +3. **Extract abstraction** - What's the domain-independent pattern? +4. **Test it** - Do all cases fit cleanly? +5. **Measure cascade** - How many things become unnecessary? + +## Red Flags You're Missing a Cascade + +- "We just need to add one more case..." (repeating forever) +- "These are all similar but different" (maybe they're the same?) 
+- Refactoring feels like whack-a-mole (fix one, break another) +- Growing configuration file +- "Don't touch that, it's complicated" (complexity hiding pattern) + +## Remember + +- Simplification cascades = 10x wins, not 10% improvements +- One powerful abstraction > ten clever hacks +- The pattern is usually already there, just needs recognition +- Measure in "how many things can we delete?" diff --git a/skills/problem-solving/when-stuck/SKILL.md b/skills/problem-solving/when-stuck/SKILL.md new file mode 100644 index 000000000..fc5bc3358 --- /dev/null +++ b/skills/problem-solving/when-stuck/SKILL.md @@ -0,0 +1,88 @@ +--- +name: When Stuck - Problem-Solving Dispatch +description: Dispatch to the right problem-solving technique based on how you're stuck +when_to_use: Stuck on a problem. Conventional approaches not working. Need to pick the right problem-solving technique. Not sure which skill applies. +version: 1.0.0 +--- + +# When Stuck - Problem-Solving Dispatch + +## Overview + +Different stuck-types need different techniques. This skill helps you quickly identify which problem-solving skill to use. + +**Core principle:** Match stuck-symptom to technique. 
+ +## Quick Dispatch + +```dot +digraph stuck_dispatch { + rankdir=TB; + node [shape=box, style=rounded]; + + stuck [label="You're Stuck", shape=ellipse, style=filled, fillcolor=lightblue]; + + complexity [label="Same thing implemented 5+ ways?\nGrowing special cases?\nExcessive if/else?"]; + innovation [label="Can't find fitting approach?\nConventional solutions inadequate?\nNeed breakthrough?"]; + patterns [label="Same issue in different places?\nFeels familiar across domains?\nReinventing wheels?"]; + assumptions [label="Solution feels forced?\n'This must be done this way'?\nStuck on assumptions?"]; + scale [label="Will this work at production?\nEdge cases unclear?\nUnsure of limits?"]; + bugs [label="Code behaving wrong?\nTest failing?\nUnexpected output?"]; + + stuck -> complexity; + stuck -> innovation; + stuck -> patterns; + stuck -> assumptions; + stuck -> scale; + stuck -> bugs; + + complexity -> simp [label="yes"]; + innovation -> collision [label="yes"]; + patterns -> meta [label="yes"]; + assumptions -> invert [label="yes"]; + scale -> scale_skill [label="yes"]; + bugs -> debug [label="yes"]; + + simp [label="skills/problem-solving/\nsimplification-cascades", shape=box, style="rounded,filled", fillcolor=lightgreen]; + collision [label="skills/problem-solving/\ncollision-zone-thinking", shape=box, style="rounded,filled", fillcolor=lightgreen]; + meta [label="skills/problem-solving/\nmeta-pattern-recognition", shape=box, style="rounded,filled", fillcolor=lightgreen]; + invert [label="skills/problem-solving/\ninversion-exercise", shape=box, style="rounded,filled", fillcolor=lightgreen]; + scale_skill [label="skills/problem-solving/\nscale-game", shape=box, style="rounded,filled", fillcolor=lightgreen]; + debug [label="skills/debugging/\nsystematic-debugging", shape=box, style="rounded,filled", fillcolor=lightyellow]; +} +``` + +## Stuck-Type → Technique + +| How You're Stuck | Use This Skill | +|------------------|----------------| +| **Complexity 
spiraling** - Same thing 5+ ways, growing special cases | skills/problem-solving/simplification-cascades | +| **Need innovation** - Conventional solutions inadequate, can't find fitting approach | skills/problem-solving/collision-zone-thinking | +| **Recurring patterns** - Same issue different places, reinventing wheels | skills/problem-solving/meta-pattern-recognition | +| **Forced by assumptions** - "Must be done this way", can't question premise | skills/problem-solving/inversion-exercise | +| **Scale uncertainty** - Will it work in production? Edge cases unclear? | skills/problem-solving/scale-game | +| **Code broken** - Wrong behavior, test failing, unexpected output | skills/debugging/systematic-debugging | +| **Multiple independent problems** - Can parallelize investigation | skills/collaboration/dispatching-parallel-agents | +| **Root cause unknown** - Symptom clear, cause hidden | skills/debugging/root-cause-tracing | + +## Process + +1. **Identify stuck-type** - What symptom matches above? +2. **Load that skill** - Read the specific technique +3. **Apply technique** - Follow its process +4. **If still stuck** - Try different technique or combine + +## Combining Techniques + +Some problems need multiple techniques: + +- **Simplification + Meta-pattern**: Find pattern, then simplify all instances +- **Collision + Inversion**: Force metaphor, then invert its assumptions +- **Scale + Simplification**: Extremes reveal what to eliminate + +## Remember + +- Match symptom to technique +- One technique at a time +- Combine if first doesn't work +- Document what you tried diff --git a/skills/research/ABOUT.md b/skills/research/ABOUT.md new file mode 100644 index 000000000..1dedb5820 --- /dev/null +++ b/skills/research/ABOUT.md @@ -0,0 +1,20 @@ +# Research Skills - Attribution + +This skill was derived from agent patterns in the [Amplifier](https://github.com/microsoft/amplifier) project. 
+ +**Source Repository:** +- Name: Amplifier +- URL: https://github.com/microsoft/amplifier +- Commit: 2adb63f858e7d760e188197c8e8d4c1ef721e2a6 +- Date: 2025-10-10 + +## Skills Derived from Amplifier Agents + +**From knowledge-archaeologist agent:** +- tracing-knowledge-lineages - Understanding how ideas evolved over time to find old solutions for new problems and avoid repeating past failures + +## What Was Adapted + +The knowledge-archaeologist agent excels at temporal analysis of knowledge evolution, paradigm shift documentation, and preserving the "fossil record" of ideas. This skill extracts the core research techniques for understanding why current approaches exist before proposing changes. + +Adapted with practical search strategies (decision records, git archaeology, conversation history) and scoped for mature codebases (explicitly notes to skip for greenfield projects). diff --git a/skills/research/tracing-knowledge-lineages/SKILL.md b/skills/research/tracing-knowledge-lineages/SKILL.md new file mode 100644 index 000000000..8541416c7 --- /dev/null +++ b/skills/research/tracing-knowledge-lineages/SKILL.md @@ -0,0 +1,203 @@ +--- +name: Tracing Knowledge Lineages +description: Understand how ideas evolved over time to find old solutions for new problems and avoid repeating past failures +when_to_use: When problem feels familiar but can't remember details. When asked "why do we use X?". Before abandoning an approach, understand why it exists. When evaluating "new" ideas that might be revivals. When past attempts failed and need to understand why. When tracing decision genealogy. +version: 1.0.0 +--- + +# Tracing Knowledge Lineages + +## Overview + +Ideas have history. Understanding why we arrived at current approaches - and what was tried before - prevents repeating failures and rediscovers abandoned solutions. + +**Core principle:** Before judging current approaches or proposing "new" ones, trace their lineage. 
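In code, the quickest lineage trail is usually version control itself. A minimal, self-contained sketch (the repository, file, and commit message below are synthetic stand-ins for a real codebase):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "editor@example.com"
git config user.name "editor"
echo "session-based auth" > auth.md
git add auth.md
git commit -q -m "adopt sessions: JWT revocation proved too hard"
# When/why was the current approach chosen?
adoption=$(git log --grep="adopt" --oneline)
echo "$adoption"
# Full history of the file, following renames:
git log --all --full-history --follow --oneline -- auth.md
```

On a real repository, point `--grep` at the concept you are tracing and the path argument at the file whose history you need; the commit messages it surfaces are the raw material for the lineage records below.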
+ +## When to Trace Lineages + +**Trace before:** +- Proposing to replace existing approach (understand why it exists first) +- Dismissing "old" patterns (they might have been abandoned for wrong reasons) +- Implementing "new" ideas (they might be revivals worth reconsidering) +- Declaring something "best practice" (understand its evolution) + +**Red flags triggering lineage tracing:** +- "This seems overcomplicated" (was it simpler before? why did it grow?) +- "Why don't we just..." (someone probably tried, what happened?) +- "This is the modern way" (what did the old way teach us?) +- "We should switch to X" (what drove us away from X originally?) + +## Tracing Techniques + +### Technique 1: Decision Archaeology + +Search for when/why current approach was chosen: + +1. **Check decision records** (common locations: `docs/decisions/`, `docs/adr/`, `.decisions/`, architecture decision records) +2. **Search conversations** (skills/collaboration/remembering-conversations) +3. **Git archaeology** (`git log --all --full-history -- path/to/file`) +4. **Ask the person who wrote it** (if available) + +**Document:** +```markdown +## Lineage: [Current Approach] + +**When adopted:** [Date/commit] +**Why adopted:** [Original problem it solved] +**What it replaced:** [Previous approach] +**Why replaced:** [What was wrong with old approach] +**Context that drove change:** [External factors, new requirements] +``` + +### Technique 2: Failed Attempt Analysis + +When someone says "we tried X and it didn't work": + +**Don't assume:** X is fundamentally flawed +**Instead trace:** +1. **What was the context?** (constraints that no longer apply) +2. **What specifically failed?** (the whole approach or one aspect?) +3. **Why did it fail then?** (technology limits, team constraints, time pressure) +4. 
**Has context changed?** (new tools, different requirements, more experience) + +**Document:** +```markdown +## Failed Attempt: [Approach] + +**When attempted:** [Timeframe] +**Why attempted:** [Original motivation] +**What failed:** [Specific failure mode] +**Why it failed:** [Root cause, not symptoms] +**Context at time:** [Constraints that existed then] +**Context now:** [What's different today] +**Worth reconsidering?** [Yes/No + reasoning] +``` + +### Technique 3: Revival Detection + +When evaluating "new" approaches: + +1. **Search for historical precedents** (was this tried before under a different name?) +2. **Identify what's genuinely new** (vs. what's rebranded) +3. **Understand why it died** (if it's a revival) +4. **Check if resurrection conditions exist** (has context changed enough?) + +**Common revival patterns:** +- Microservices ← Service-Oriented Architecture ← Distributed Objects +- GraphQL ← SOAP ← RPC +- Serverless ← Cloud functions ← CGI scripts +- NoSQL ← Document stores ← Flat files + +**Ask:** "What did we learn from the previous incarnation?" + +### Technique 4: Paradigm Shift Mapping + +When major architectural changes occurred: + +**Map the transition:** +```markdown +## Paradigm Shift: From [Old] to [New] + +**Pre-shift thinking:** [How we thought about problem] +**Catalyst:** [What triggered the shift] +**Post-shift thinking:** [How we think now] +**What was gained:** [New capabilities] +**What was lost:** [Old capabilities sacrificed] +**Lessons preserved:** [What we kept from old paradigm] +**Lessons forgotten:** [What we might need to relearn] +``` + +## Search Strategies + +**Where to look for lineage:** + +1. **Decision records** (common locations: `docs/decisions/`, `docs/adr/`, `.adr/`, or search for "ADR", "decision record") +2. **Conversation history** (search with skills/collaboration/remembering-conversations) +3. **Git history** (`git log --grep="keyword"`, `git blame`) +4.
**Issue/PR discussions** (GitHub/GitLab issue history) +5. **Documentation evolution** (`git log -- docs/`) +6. **Team knowledge** (ask: "Has anyone tried this before?") + +**Search patterns:** +```bash +# Find when an approach was introduced +git log --all --grep="introduce.*caching" + +# Find when files were deleted (what the current approach replaced) +git log --diff-filter=D --summary | grep pattern + +# Find discussion of an abandoned approach +git log --all --grep="remove.*websocket" +``` + +## Red Flags - You're Ignoring History + +- "Let's just rewrite this" (without understanding why it's complex) +- "The old way was obviously wrong" (without understanding context) +- "Nobody uses X anymore" (without checking why it died) +- Dismissing approaches because they're "old" (age ≠ quality) +- Adopting approaches because they're "new" (newness ≠ quality) + +**All of these mean: STOP. Trace the lineage first.** + +## When to Override History + +**You CAN ignore lineage when:** + +1. **Context fundamentally changed** + - Technology that didn't exist is now available + - Constraints that forced decisions no longer apply + - Team has different capabilities now + +2. **We learned critical lessons** + - Industry-wide understanding evolved + - Past attempt taught us what to avoid + - Better patterns emerged and were proven + +3. **Original reasoning was flawed** + - Based on assumptions later proven wrong + - Cargo-culting without understanding + - Fashion-driven, not needs-driven + +**But document WHY you're overriding:** Future you needs to know this was deliberate, not ignorant.
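The search patterns above can also use git's pickaxe (`git log -S`), which lists every commit where a string's occurrence count changed, so it finds both the commit that introduced an approach and the commit that removed it. The sketch below is self-contained (throwaway repository; the strings and commit messages are hypothetical):

```shell
#!/bin/sh
# Build a throwaway repo: introduce an approach, then replace it.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

# Commit 1: the approach appears.
printf 'use_xml = True\n' > config.py
git add config.py
git -c user.name=demo -c user.email=demo@example.com commit -qm "introduce XML config"

# Commit 2: the approach is replaced.
printf 'use_json = True\n' > config.py
git add config.py
git -c user.name=demo -c user.email=demo@example.com commit -qm "replace XML config with JSON"

# Pickaxe: both the introduction and the removal of "use_xml" show up.
git log -S "use_xml" --oneline

# Message search: the commit that discussed the replacement.
git log --grep="replace" --oneline
```

`-S` is often more reliable than `--grep` for archaeology, because it keys on the code itself rather than on how carefully commit messages were written.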
+ +## Documentation Format + +When proposing changes, include lineage: + +```markdown +## Proposal: Switch from [Old] to [New] + +### Current Approach Lineage +- **Adopted:** [When/why] +- **Replaced:** [What it replaced] +- **Worked because:** [Its strengths] +- **Struggling because:** [Current problems] + +### Previous Attempts at [New] +- **Attempted:** [When, if ever] +- **Failed because:** [Why it didn't work then] +- **Context change:** [What's different now] + +### Decision +[Proceed/Defer/Abandon] because [reasoning with historical context] +``` + +## Examples + +### Good Lineage Tracing +"We used XML before JSON. XML died because verbosity hurt developer experience. But XML namespaces solved a real problem. If we hit namespace conflicts in JSON, we should study how XML solved it, not reinvent." + +### Bad Lineage Ignorance +"REST is old, let's use GraphQL." (Ignores: Why did REST win over SOAP? What problems does it solve well? Are those problems gone?) + +### Revival with Context +"We tried client-side routing in 2010, abandoned it due to poor browser support. Now that support is universal and we have better tools, worth reconsidering with lessons learned." + +## Remember + +- Current approaches exist for reasons (trace those reasons) +- Past failures might work now (context changes) +- "New" approaches might be revivals (check for precedents) +- Evolution teaches (study the transitions) +- Ignorance of history = doomed to repeat it