
Docs: AI-Accelerant-Geometry whitepaper + benchmark workplan #92

Merged
arossti merged 6 commits into main from Accelerant
Feb 13, 2026

Conversation


arossti (Owner) commented Feb 13, 2026

Summary

  • Add AI-Accelerant-Geometry whitepaper (.tex) exploring Quadray-RT for hardware-efficient neural computation
  • Add Accelerant.md tiered benchmark plan for spread-based AI claims
  • Revision pass narrowing claims and fixing errors

Changes

  • Geometry documents/Whitepaper LaTEX/AI-Accelerant-Geometry.tex: New whitepaper with companion findings, anticipated objections, benchmark references
  • Geometry documents/Accelerant.md: Tiered benchmark workplan aligned with revised paper claims

Generated with Claude Code

Co-Authored-By: Andy and Claude <andy@openbuilding.ca>

…icient neural computation

Compares Zhang (2025) Grassmann manifold AI with Quadray-RT algebra:
- Spread-based attention replacing softmax (exact rational, no transcendentals)
- Weierstrass position encoding replacing sinusoidal (5 arithmetic ops vs Taylor)
- Circulant rotor matrices replacing dense weights (O(d log d) vs O(d²))
- Quantization-friendly exactness: O(1) vs O(L) error through layers
- Tetrahedral 4-head topology matching multi-head attention structure
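None of the commit's code appears in the PR body; as a minimal sketch of the spread primitive behind the first bullet (the function name and list-of-rationals representation are assumptions, not from the paper):

```python
from fractions import Fraction

def spread(u, v):
    # Rational-trigonometry spread: s = 1 - (u.v)^2 / (|u|^2 |v|^2).
    # Only +, -, *, / are used, so the result is exact for rational
    # inputs: no sqrt, no transcendentals, no floating-point rounding.
    u = [Fraction(x) for x in u]
    v = [Fraction(x) for x in v]
    dot = sum(a * b for a, b in zip(u, v))
    qu = sum(a * a for a in u)  # quadrance (squared length) of u
    qv = sum(b * b for b in v)
    return 1 - dot * dot / (qu * qv)

print(spread([1, 0], [0, 1]))  # perpendicular -> 1
print(spread([1, 2], [2, 4]))  # parallel -> 0
```

An attention row built on such scores would still need a normalization step; the exactness applies to the scoring path only.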

…pated Objections

New material from companion whitepapers:
- Path C exact arithmetic (fractions.Fraction eliminates FP ambiguity)
- Tiered rational parameter search (Tier 1 denominators {2,3,4} find all primes)
- Central Symmetry Barrier (asymmetry enables prime/odd structure)
- Cartesian blind spot (coordinate choice determines visible structure)
- 4th Quadray parameter as shape information beyond position
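The Path C point is concrete enough to demo in two lines; a minimal illustration (not code from the whitepaper) of the floating-point ambiguity that `fractions.Fraction` removes:

```python
from fractions import Fraction

# Floating point drifts: 0.1 + 0.2 is not exactly 0.3.
fp_equal = (0.1 + 0.2 == 0.3)
# Rationals are closed under +, -, *, /: the same sum is exact.
exact_equal = (Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))
print(fp_equal, exact_equal)  # False True
```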

Anticipated Objections section addressing peer review concerns:
1. Small-scale baseline handicap (Zhang 13-18M params)
2. Need for 3B/7B validation and distillation experiments
3. Architectural vs accuracy claims (the real contribution)
4. RWKV/Mamba/RetNet alternatives already exist (algebra orthogonal to architecture)
5. Squared dot products lose sign info (Janus polarity resolves)

13 enumerated tests across 4 cost tiers:
- Tier 0 (minutes, CPU): raw FLOP count, INT8 fidelity, Weierstrass speed
- Tier 1 (hours, 1 GPU): GPT-2 attention swap, position swap, INT4 stress
- Tier 2 (days, 1 GPU): from-scratch training, Janus ablation, hybrid arch
- Tier 3 (days, multi-GPU): distillation, RWKV cross-architecture test

Test 1.3 (INT4 stress test) identified as make-or-break experiment.
Priority order and explicit success/failure criteria included.
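For flavor, a Tier 0 style fidelity check might look like the following sketch (shapes and the bound checked are hypothetical; the workplan's actual tests live in Accelerant.md):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256).astype(np.float32)

# Symmetric INT8 quantization: x ~ scale * round(x / scale).
scale = float(np.abs(x).max()) / 127.0
xi = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
x_deq = xi.astype(np.float32) * scale

# Round-trip error stays within about half a quantization step.
max_err = float(np.abs(x_deq - x).max())
print(max_err)
```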

…lusion

Added [13] reference to companion benchmark workplan in Conclusion section.
Highlights that the make-or-break INT4 test requires only 3 hours on one GPU.

…rors

- Lead with rational closure as the novel contribution (rewritten abstract + thesis)
- Kill 10x FLOP claim: exp() is not the attention bottleneck, QK^T matmul is
- Reframe Weierstrass as niche (edge/embedded, learned rotations only)
- Narrow quantization claim: only scoring path, not full-network O(1) vs O(L)
- Fix circulant argument: 3x3 blocks have no FFT benefit, reframe as parameter efficiency
- Drop 4-head/tetrahedral analogy (dimensions don't match), note simplex ETF connection
- Rewrite Janus polarity section: remove false antonym/synonym cosine similarity claim
- Rewrite cost comparison table with explicit scope limitations
- Bump to Draft v0.2
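The circulant reframe can be made concrete; a sketch (hypothetical helper, not from the paper) of the parameter-efficiency point:

```python
import numpy as np

def circulant(first_row):
    # Row i is the first row rotated right by i; a d x d circulant
    # block therefore carries d free parameters instead of d^2.
    c = np.asarray(first_row)
    return np.stack([np.roll(c, i) for i in range(len(c))])

C = circulant([1, 2, 3])
print(C)
# At d = 3 an FFT matvec buys nothing over the 9-multiply direct
# product, but the block still stores 3 parameters rather than 9.
```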

All 13 tests preserved — reframed what they claim to test, not what they do.
- Test 0.1: FLOP claim → determinism + normalization profiling
- Test 0.2: O(1) vs O(L) → scoring-path fidelity under INT8
- Test 0.3: Weierstrass speed → fixed-point fidelity (niche framing)
- Test 0.4: Circulant speed → parameter efficiency + expressiveness
- Test 1.2: Add integer-only variant for embedded context
- Test 2.1: Use standard position embeddings (isolate attention variable)
- Test 2.2: Antonym/synonym → sign information (negation sensitivity)
- Priority table reordered: 0.2 first (core claim), speed claim deprioritized
- Success criteria split: core claim vs secondary claims vs falsification
- Added "unexpected findings to watch for" section

@arossti arossti self-assigned this Feb 13, 2026
@arossti arossti merged commit 343a919 into main Feb 13, 2026
2 checks passed
@arossti arossti deleted the Accelerant branch February 13, 2026 06:32