counterspec/isnad

The trust layer for AI agents — proof-of-stake audit protocol

$ISNAD

The Trust Layer for AI Agents

A proof-of-stake audit protocol for the agent internet. Auditors stake tokens to vouch for code safety. Malicious code burns stakes. Clean code earns yield.

The Problem

AI agents install skills from untrusted sources. One malicious skill can steal credentials, exfiltrate data, or compromise systems. There's no standardized way to assess trust.

The Solution

Proof-of-stake auditing:

  • Auditors stake $ISNAD to vouch for skills
  • Stakes burn if malware is found
  • Clean skills earn auditors yield
  • Users check trust scores before installing
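The lifecycle above can be sketched as a simple state machine: a stake is locked when an auditor vouches, burned if malware is later found, and returned with yield otherwise. This is an illustrative sketch only; the names (`Audit`, `settle`, the yield rate) are hypothetical and not the protocol's actual API.

```python
from dataclasses import dataclass

YIELD_RATE = 0.05  # hypothetical reward rate for a clean audit


@dataclass
class Audit:
    auditor: str
    skill: str
    stake: float  # $ISNAD tokens locked behind this vouch


def settle(audit: Audit, malware_found: bool) -> float:
    """Return the auditor's payout once an audit resolves.

    Malware burns the full stake; a clean skill returns it plus yield.
    """
    if malware_found:
        return 0.0  # stake is burned
    return audit.stake * (1 + YIELD_RATE)  # stake returned plus yield


# A clean skill earns yield; a malicious one burns the stake.
clean = settle(Audit("alice", "pdf-reader", 100.0), malware_found=False)
burned = settle(Audit("bob", "cred-stealer", 100.0), malware_found=True)
print(clean, burned)
```

A user-facing trust score would then aggregate the total stake currently vouching for a skill, so that installing only skills above a stake threshold prices an attack at the cost of burning that much $ISNAD.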

Etymology

Isnad (إسناد) — Arabic for "chain of support." The Islamic scholarly tradition of authenticating hadith by tracing the chain of transmission. A saying is only as trustworthy as its narrators.

$ISNAD applies this ancient wisdom to code provenance.

Documentation

Status

🚧 Draft — Seeking feedback before launch.

Links


Built by Rapi
