Fix README typos #738

Merged
merged 1 commit into from May 27, 2025
README.md: 8 changes (4 additions, 4 deletions)
@@ -42,14 +42,14 @@ For a more detailed read, check our docs here: [docs.macrocosmos.ai/subnets/subn
You can also access Subnet 1, Apex via the API. Find out more here: [docs.macrocosmos.ai/developers/api-documentation/sn1-apex](https://docs.macrocosmos.ai/developers/api-documentation/sn1-apex)
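
For a sense of the shape such a call might take, here is a minimal Python sketch; the endpoint URL, payload fields, and auth header are hypothetical placeholders, so consult the API documentation linked above for the real interface:

```python
import requests

# All specifics below are placeholders; see the SN1 Apex API docs for the
# real endpoint, authentication scheme, and request schema.
API_URL = "https://api.example.macrocosmos.ai/sn1/apex/chat"  # hypothetical
API_KEY = "your-api-key"  # hypothetical

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"messages": [{"role": "user", "content": "Hello, Apex!"}]},  # hypothetical schema
    timeout=60,
)
response.raise_for_status()
print(response.json())
```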

### Mission Commander
-This is an agentic LLM chatbot built into Gravity, designed to help you pick the right terms and phrases for your data-scraping needs. Simply tell it what information you want and it'll offer suggestions and help with brainstorming. Mission Commander is built with Subnet 1, Apex, also owned by Macrocosmos. It lowers the barrier-of-entry even.
+This is an agentic LLM chatbot built into Gravity, designed to help you pick the right terms and phrases for your data-scraping needs. Simply tell it what information you want and it'll offer suggestions and help with brainstorming. Mission Commander is built with Subnet 1, Apex, also owned by Macrocosmos. It lowers the barrier to entry even further.

Try Mission Commander via Gravity here: [app.macrocosmos.ai/gravity](http://app.macrocosmos.ai/gravity)

For a more detailed read, check our docs here: [docs.macrocosmos.ai/constellation-user-guides/gravity](https://docs.macrocosmos.ai/constellation-user-guides/gravity)

### MCP (Macrocosmos Connect Protocol)
-You can integrate Subnet 1, Apex, directly into Claude and Cursor via our MCP. This allows you to access our web-search options and inference via other routes, rather than only from our website. Will provide URL to Apex, api key, and a guide how to use the model.
+You can integrate Subnet 1, Apex, directly into Claude and Cursor via our MCP. This allows you to access our web-search options and inference via other routes, rather than only from our website. It will provide a URL to Apex, an API key, and a guide on how to use the model.

Try the MCP by following our guide here: [docs.macrocosmos.ai/developers/tools/macrocosmos-mcp](https://docs.macrocosmos.ai/developers/tools/macrocosmos-mcp)

@@ -74,7 +74,7 @@ Thanks to its distributed architecture, Apex has the potential to provide inference
Moreover, many of the most valuable use cases for AI - including law, finance, and health - are so sensitive to data privacy that they cannot use centralized models. This creates a huge opportunity for decentralized competitors to offer alternatives that deliver intelligence without amassing centralized data. As the most intelligent model on Bittensor, itself the foremost DeAI protocol on-chain, Apex is very well positioned to take advantage of this opportunity. For those who believe in DeAI, an investment in Subnet 1 represents a strong prospect.

### Interest from customers and users
-Subnet 1, Apex has already built up strong support and positive user feedback. It consistently ranks in the top 10 subnets. It displays deep and broad market support. Along with Subnet 13, another Macrocosmos subnet, Apex was the only subnet to be available on Squad.ai, Rayon Labs' agent platform built on Sibnet 64, Chutes (itself the top-performing subnet on Bittensor) - showing its reliability, value, and desirability across Bittensor.
+Subnet 1, Apex has already built up strong support and positive user feedback. It consistently ranks in the top 10 subnets. It displays deep and broad market support. Along with Subnet 13, another Macrocosmos subnet, Apex was the only subnet to be available on Squad.ai, Rayon Labs' agent platform built on Subnet 64, Chutes (itself the top-performing subnet on Bittensor) - showing its reliability, value, and desirability across Bittensor.

## Key Use Cases & Potential
Subnet 1, Apex, has a range of use-cases:
@@ -85,7 +85,7 @@ Subnet 1, Apex, has a range of use-cases:
Apex has the potential to become the flagship decentralized LLM experience across the tech world. By utilizing Bittensor's architecture, we offer speedy and low-cost inference that could soon rival SOTA models in the industry.

## The Team Behind Subnet 1
-Subnet 1 was built by Dr. Steffen Cruz, AKA @Macrocrux, when he was CTO of Bittensor. Steffen has led Apex through multiple iterations, overseeing its evolution into Bittensor's premiere provider of decentralized intelligence.
+Subnet 1 was built by Dr. Steffen Cruz, AKA @Macrocrux, when he was CTO of Bittensor. Steffen has led Apex through multiple iterations, overseeing its evolution into Bittensor's premier provider of decentralized intelligence.

Apex's engineering team is one of the most impressive on Bittensor. It includes Felix Quinque, who led its Chain of Thought, Reasoning, and Logits upgrades; Dmytro Brobenko, who built Organic Scoring and DeepResearcher; Rich Wardle, whose work spans research and development; and Kalei Brady, who led the GAN-based architecture upgrade and leads SN1's Discord community. It also receives the support of other Macrocosmos engineers, ensuring that Subnet 1 is one of the best-staffed projects on the protocol - all of which helps ensure its long-term viability.

docs/stream_miner_template.md: 10 changes (5 additions, 5 deletions)
@@ -1,6 +1,6 @@
# Creating Stream Miners

-Miners for SN1 **must** support the StreamPromptingSynapse. This enables all miners on the network to stream batches of tokens to the validator. This has clear beneifts, such as:
+Miners for SN1 **must** support the StreamPromptingSynapse. This enables all miners on the network to stream batches of tokens to the validator. This has clear benefits, such as:

1. Getting rewards for partial responses, and
2. Enabling better user-product interactivity when using a frontend.
@@ -9,10 +9,10 @@ Getting custom miners to use streaming is a large engineering effort. To make th

## Architecture

-Miner architectures require that you are running a syncronous `forward` method, with an internal `async _forward` function. The code below provides a basic outline of how the `async _forward` function should be structured. There are two main points here:
+Miner architectures require that you are running a synchronous `forward` method, with an internal `async _forward` function. The code below provides a basic outline of how the `async _forward` function should be structured. There are two main points here:

1. Adding data to the buffer and sending it when it reaches the `config.neuron.streaming_batch_size`
-2. Sending the final buffer of data if inference is finished, and there are less tokens than the batch size.
+2. Sending the final buffer of data if inference is finished, and there are fewer tokens than the batch size.

```python
# Basic outline only; the full template in the repo includes error handling.
from functools import partial
from typing import Awaitable

from starlette.types import Send  # bittensor's middleware passes an ASGI `send` callable


def forward(self, synapse: StreamPromptingSynapse) -> Awaitable:
    async def _forward(self, prompt: str, send: Send):
        buffer = []
        # Outline only: stream tokens from whatever model the miner wraps.
        for token in self.model.stream(prompt):  # placeholder model interface
            buffer.append(token)
            # 1. Send the buffer once it reaches the configured batch size.
            if len(buffer) == self.config.neuron.streaming_batch_size:
                await send(
                    {
                        "type": "http.response.body",
                        "body": "".join(buffer).encode("utf-8"),
                        "more_body": True,
                    }
                )
                buffer = []
        # 2. Send the final, smaller buffer once inference is finished.
        if buffer:
            await send(
                {
                    "type": "http.response.body",
                    "body": "".join(buffer).encode("utf-8"),
                    "more_body": False,
                }
            )

    prompt = synapse.messages[-1]  # last user message
    token_streamer = partial(_forward, self, prompt)
    return synapse.create_streaming_response(token_streamer)
```
@@ -80,8 +80,8 @@

HuggingFace miners require you to run a separate inference thread in the background, add to a queue, and manually clear it at the end of the `async _forward` method.
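
One minimal way to sketch that pattern uses HuggingFace's `TextIteratorStreamer`, which owns the queue internally while `model.generate` runs in a background thread. The model name and generation settings below are placeholders; the real implementation is in `prompting/miners/hf_miner.py`, referenced below.

```python
from threading import Thread

from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

# Placeholder model; the actual miner loads whatever model it is configured with.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")


def start_streaming(prompt: str) -> TextIteratorStreamer:
    """Kick off generation in a background thread; the returned streamer is
    the queue that `async _forward` drains batch by batch, then clears."""
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    inputs = tokenizer(prompt, return_tensors="pt")
    thread = Thread(
        target=model.generate,
        kwargs={**inputs, "streamer": streamer, "max_new_tokens": 256},
    )
    thread.start()
    return streamer
```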

-This branch contains multiple inplementations. To see:
+This branch contains multiple implementations. To see:
1. Langchain+OpenAI implementation, refer to `prompting/miners/openai_miner.py`
2. HuggingFace implementation, refer to `prompting/miners/hf_miner.py`

-It is **necessary** that forward method of the miner class returns this `synapse.create_streaming_response(token_streamer)`. As seen, the `token_streamer` is a partial function that takes in a `send` packet. This packet will be sent by the bittensor middleware to facilitate the communications between the validator and the miner. You do **not** need to modify any logic around the `send` packet, as this is the same for **all** miners.
+It is **necessary** that the forward method of the miner class returns this `synapse.create_streaming_response(token_streamer)`. As seen, the `token_streamer` is a partial function that takes in a `send` packet. This packet will be sent by the bittensor middleware to facilitate the communications between the validator and the miner. You do **not** need to modify any logic around the `send` packet, as this is the same for **all** miners.
shared/exceptions.py: 2 changes (1 addition, 1 deletion)
@@ -9,7 +9,7 @@ def __init__(self, message="Maximum number of retries exceeded"):
class BittensorError(Exception):
"""Exception raised when an error is raised from the bittensor package"""

-    def __init__(self, message="An error from the Bittensor package occured"):
+    def __init__(self, message="An error from the Bittensor package occurred"):
self.message = message
super().__init__(self.message)
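
A minimal usage sketch (the wrapped call below is a hypothetical placeholder): callers typically translate a lower-level failure into `BittensorError` so downstream code only has to catch one exception type.

```python
try:
    result = make_bittensor_call()  # hypothetical placeholder for any bittensor package call
except Exception as exc:
    # Re-raise with the original exception chained for debugging.
    raise BittensorError(f"An error from the Bittensor package occurred: {exc}") from exc
```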
