We are enhancing our testing strategy to include tests that require the miner/validator to be running on the testnet. These tests will interact with the various endpoints of the nodes to ensure comprehensive coverage and reliability of our subnet.
Current Testing Strategy
Our current tests, as seen in the tests folder, primarily focus on mocking various components and verifying their behavior in isolation. For example:
Mock Subtensor Tests: Validate the behavior of MockSubtensor and its interactions with neurons.
Mock Metagraph Tests: Check the properties and interactions of MockMetagraph.
```python
import asyncio

import bittensor as bt
import pytest

# MockSubtensor, MockMetagraph, MockDendrite, and PromptingSynapse are the
# subnet's own mock utilities and protocol classes.


@pytest.mark.parametrize("n", [16, 32, 64])
def test_mock_metagraph(n):
    mock_subtensor = MockSubtensor(netuid=1, n=n)
    mock_metagraph = MockMetagraph(subtensor=mock_subtensor)

    # Check axons
    axons = mock_metagraph.axons
    assert len(axons) == n

    # Check ip and port
    for axon in axons:
        assert isinstance(axon, bt.AxonInfo)
        assert axon.ip == mock_metagraph.default_ip
        assert axon.port == mock_metagraph.default_port
```
Mock Dendrite Timings: Ensure that MockDendrite respects timing constraints and handles timeouts correctly.
```python
@pytest.mark.parametrize("timeout", [0.1, 0.2])
@pytest.mark.parametrize("min_time", [0, 0.05, 0.1])
@pytest.mark.parametrize("max_time", [0.1, 0.15, 0.2])
@pytest.mark.parametrize("n", [4, 16, 64])
def test_mock_dendrite_timings(timeout, min_time, max_time, n):
    mock_wallet = None
    mock_dendrite = MockDendrite(mock_wallet)
    mock_dendrite.min_time = min_time
    mock_dendrite.max_time = max_time
    mock_subtensor = MockSubtensor(netuid=1, n=n)
    mock_metagraph = MockMetagraph(subtensor=mock_subtensor)
    axons = mock_metagraph.axons

    async def run():
        return await mock_dendrite(
            axons,
            synapse=PromptingSynapse(
                roles=["user"], messages=["What is the capital of France?"]
            ),
            timeout=timeout,
        )

    responses = asyncio.run(run())
    for synapse in responses:
        assert hasattr(synapse, "dendrite") and isinstance(
            synapse.dendrite, bt.TerminalInfo
        )
        dendrite = synapse.dendrite

        # Check that synapse.dendrite has (process_time, status_code, status_message)
        for field in ("process_time", "status_code", "status_message"):
            assert hasattr(dendrite, field) and getattr(dendrite, field) is not None

        # Check that the dendrite call takes between min_time and max_time
        assert min_time <= dendrite.process_time
        assert dendrite.process_time <= max_time + 0.1

        # Check that responses which take longer than timeout have a 408 status code
        if dendrite.process_time >= timeout + 0.1:
            assert dendrite.status_code == 408
            assert dendrite.status_message == "Timeout"
            assert synapse.dummy_output == synapse.dummy_input
        # Check that responses which take less than timeout have a 200 status code
        elif dendrite.process_time < timeout:
            assert dendrite.status_code == 200
            assert dendrite.status_message == "OK"
            # Check that outputs are not empty for successful responses
            assert synapse.dummy_output == synapse.dummy_input * 2
        # Don't check responses which take between timeout and max_time,
        # because they are not guaranteed to have a status code of 200 or 408
```
New Testing Strategy
To ensure our subnet's robustness, we will introduce tests that require the miner/validator to be running on the testnet. These tests will:
1. Hit Different Endpoints: Interact with various endpoints of the nodes to verify their responses and behavior under real network conditions.
2. Validate Network Interactions: Ensure that the nodes correctly handle requests, process transactions, and maintain network integrity.
3. Performance and Reliability: Measure the performance and reliability of the nodes under different scenarios, including stress tests and edge cases.
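The performance point above can be sketched with a simple latency measurement loop. This is a minimal illustration, not the subnet's actual tooling: `fake_request` is a hypothetical stand-in for an HTTP call to a miner/validator endpoint, and `measure_latency` simply aggregates response-time statistics.

```python
import statistics
import time


def fake_request():
    # Hypothetical stand-in for a request to a node endpoint; a real stress
    # test would issue an HTTP call to the miner/validator here.
    time.sleep(0.01)
    return 200


def measure_latency(n_requests: int = 20):
    """Issue n requests and summarize response-time statistics."""
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        status = fake_request()
        latencies.append(time.perf_counter() - start)
        assert status == 200  # every request must succeed
    return {
        "mean": statistics.mean(latencies),
        "p95": sorted(latencies)[int(0.95 * len(latencies))],
        "max": max(latencies),
    }


stats = measure_latency()
```

A real stress test would vary `n_requests`, add concurrency, and assert that the percentile latencies stay within an agreed budget.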
Implementation Plan
1. Setup Testnet Environment: Ensure that the testnet environment is properly configured and that the miner/validator nodes are running.
2. Write Integration Tests: Develop tests that interact with the testnet endpoints. These tests will be more comprehensive and cover scenarios that are not possible with mocked components.
3. Automate Test Execution: Integrate these tests into our CI/CD pipeline to ensure they are run automatically and consistently.
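One common way to wire such tests into CI without breaking runs where no testnet is available is to gate them on an environment variable. The sketch below assumes a hypothetical `TESTNET_URL` variable that the pipeline would set; the class and test names are illustrative only.

```python
import os
import unittest

# Hypothetical env var the CI pipeline would set when a testnet node is up.
TESTNET_URL = os.environ.get("TESTNET_URL")


@unittest.skipUnless(TESTNET_URL, "requires a running testnet node (set TESTNET_URL)")
class TestnetStatusTests(unittest.TestCase):
    def test_node_url_configured(self):
        # Placeholder check; real tests would hit the node's endpoints.
        self.assertTrue(TESTNET_URL.startswith("http"))
```

With this gating, local runs and CI jobs without a configured testnet simply skip the integration suite instead of failing.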
Example Test
Here is a conceptual example of how such a test might look: a test that hits the status endpoint of a node running on the testnet and verifies that the node is up and responding.
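A self-contained sketch of such a status check, using only the standard library. Everything here is illustrative: `StubNode` is a tiny local HTTP server standing in for a testnet node, and the hypothetical `/status` endpoint and `check_status` helper are assumptions; a real test would point `check_status` at an actual miner/validator URL.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class StubNode(BaseHTTPRequestHandler):
    """Local stand-in for a testnet node exposing a /status endpoint."""

    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging


def check_status(base_url: str) -> dict:
    """Hit the node's /status endpoint and return the decoded payload."""
    with urllib.request.urlopen(f"{base_url}/status", timeout=5) as resp:
        assert resp.status == 200
        return json.loads(resp.read())


# Start the stub node in the background, then check its status endpoint.
server = HTTPServer(("127.0.0.1", 0), StubNode)
threading.Thread(target=server.serve_forever, daemon=True).start()
payload = check_status(f"http://127.0.0.1:{server.server_port}")
server.shutdown()
```

Swapping the stub's address for a real testnet node URL turns this into the integration test described above.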
Conclusion
By incorporating tests that interact with the testnet, we will achieve a higher level of confidence in the reliability and performance of our subnet. This approach will help us identify and address issues that may not be apparent in isolated, mocked tests.