
Add a way for end users to test their strategies #28

Open
amortization opened this issue May 17, 2022 · 2 comments
Labels
enhancement New feature or request

Comments

@amortization
Collaborator

I think it would be useful if end users and developers had a way to test their strategies and ensure that they are working correctly, without having to write Python tests for them.

The way I picture this is a function that creates a CSV file with these fields:

  • Table Point
  • For each bet type chosen, the starting amount (e.g. Starting PassLine, Starting Field)
  • Dice 1
  • Dice 2
  • For each bet type chosen, the ending amount (e.g. Ending PassLine, Ending Field)

So the user could run

create_strategy_test_file(file='test_risk_12.csv', bet_types=(PassLine, Field, Place6, Place8))

which would create the blank CSV file with the headers: Table Point, Starting Bankroll, Starting PassLine, Starting Field, Starting Place6, Starting Place8, Dice1, Dice2, Ending Bankroll, Ending PassLine, Ending Field, Ending Place6, Ending Place8
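A minimal sketch of what `create_strategy_test_file` could look like, assuming `bet_types` accepts either bet classes or plain name strings (everything here is illustrative, not the actual crapssim API):

```python
import csv

def create_strategy_test_file(file, bet_types):
    """Write a blank CSV with one Starting/Ending column per bet type.

    Hypothetical sketch: bet_types is assumed to hold bet classes
    (whose __name__ supplies the column label) or plain strings.
    """
    names = [t if isinstance(t, str) else t.__name__ for t in bet_types]
    header = (["Table Point", "Starting Bankroll"]
              + [f"Starting {n}" for n in names]
              + ["Dice1", "Dice2", "Ending Bankroll"]
              + [f"Ending {n}" for n in names])
    with open(file, "w", newline="") as f:
        csv.writer(f).writerow(header)
```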

Then the user would just need to enter their test cases into the CSV file. For example, to simulate what happens with Risk12 when the point is off and they roll a 4, they could enter:

Off, 5, 5, 0, 0, 2, 2, 5, 0, 6, 6

This would show that with the point off and a roll of 2, 2 the player would win the field bet and place the 6 and the 8, making place6=6 and place8=6 while leaving the passline at 5.

Then they could run the tests with

run_strategy_test_file(file='test_risk_12.csv', bet_types=(PassLine, Field, Place6, Place8), strategy=Risk12)

which would print a message for each test that fails.
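A rough sketch of how `run_strategy_test_file` might check each row. Here `strategy` is assumed to be a plain callable mapping (table point, dice, starting bets) to ending bets, which glosses over the real crapssim Strategy API, and `bet_types` is taken as name strings:

```python
import csv

def run_strategy_test_file(file, bet_types, strategy):
    """Compare each CSV row's expected ending bets to the
    strategy's simulated result; print a message per failure.

    Hypothetical sketch: strategy(point, d1, d2, starting) -> endings.
    """
    failures = []
    with open(file, newline="") as f:
        # start=2: row 1 of the file is the header
        for lineno, row in enumerate(csv.DictReader(f), start=2):
            starting = {n: float(row[f"Starting {n}"]) for n in bet_types}
            expected = {n: float(row[f"Ending {n}"]) for n in bet_types}
            actual = strategy(row["Table Point"],
                              int(row["Dice1"]), int(row["Dice2"]),
                              starting)
            if actual != expected:
                failures.append(
                    f"line {lineno}: expected {expected}, got {actual}")
    for msg in failures:
        print(msg)
    return not failures
```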

I would also envision this being runnable from the command prompt via argparse so users don't need to write any code, e.g.:

python crapssim.py create_strategy_test_file test_risk_12.csv --bet_types PassLine Field Place6 Place8
python crapssim.py run_strategy_test_file test_risk_12.csv --bet_types PassLine Field Place6 Place8 --strategy Risk12
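The CLI wiring could be a thin argparse layer; the subcommand and flag names below just mirror the examples above and are not an existing crapssim interface:

```python
import argparse

def parse_cli(argv=None):
    """Parse the hypothetical create/run subcommands sketched above."""
    parser = argparse.ArgumentParser(prog="crapssim")
    sub = parser.add_subparsers(dest="command", required=True)
    for name in ("create_strategy_test_file", "run_strategy_test_file"):
        p = sub.add_parser(name)
        p.add_argument("file")
        p.add_argument("--bet_types", nargs="+", required=True)
        if name == "run_strategy_test_file":
            p.add_argument("--strategy", required=True)
    return parser.parse_args(argv)
```

The returned namespace could then be dispatched to the two functions, resolving bet-type and strategy names to the corresponding classes.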

I think that this would be a useful way for players to write tests for their strategies without having to write any code.

I also think it would be useful for development testing: in theory one could simulate entire games with each strategy, record the outcomes to a CSV, and have integration tests (or even unit tests, depending on how many there are and how long they run) replay those games for each default Strategy.

@amortization added the enhancement (New feature or request) label May 17, 2022
@skent259
Owner

I think this is a great idea. By far the most time-consuming aspect of writing a strategy is testing it, and this is a really clever way to do that testing.

I think there might be a few more pieces of information that could impact the strategy:

  • which number the point is on
  • whether there is a new shooter (maybe?)

The idea you mention for development testing might also be useful for an end user. I'm picturing that this would add only the unique scenarios for a given strategy. So they could either:

  • specify a few scenarios to test, then have the function check them explicitly
  • let the strategy run, function adds some rows to csv, then the user can manually check for accuracy

I think this would lead to 3 functions, create_strategy_test_file(), run_strategy_test_file(), and check_strategy_test_file(), where check does what you proposed in run, and run takes an argument for the number of rolls to consider.
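The record-then-review mode might look roughly like this. `record_strategy_scenarios` is a hypothetical name for the run step, the point tracking is deliberately simplified (no payouts or bankroll), and `strategy` is again assumed to be a plain callable rather than the real crapssim Strategy API:

```python
import csv
import random

def record_strategy_scenarios(file, bet_types, strategy,
                              n_rolls=100, seed=0):
    """Roll the dice n_rolls times and append each *unique*
    (point, dice) scenario with the strategy's resulting bets,
    so the user can review the rows for accuracy by hand.
    """
    rng = random.Random(seed)
    seen = set()
    point = "Off"
    with open(file, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(n_rolls):
            d1, d2 = rng.randint(1, 6), rng.randint(1, 6)
            key = (point, d1, d2)
            if key not in seen:
                seen.add(key)
                bets = strategy(point, d1, d2, {n: 0 for n in bet_types})
                writer.writerow([point, d1, d2]
                                + [bets[n] for n in bet_types])
            # simplified point tracking: establish on 4/5/6/8/9/10,
            # come down on a 7-out or a hit point
            total = d1 + d2
            if point == "Off":
                if total in (4, 5, 6, 8, 9, 10):
                    point = str(total)
            elif total == 7 or total == int(point):
                point = "Off"
    return len(seen)
```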

@amortization
Collaborator Author

I think there might be a few more pieces of information that could impact the strategy:

  • which number the point is on
  • whether there is a new shooter (maybe?)

For the point, I pictured it being either the point number or "Off" rather than splitting out the status and the number. I think new shooter would be good, but I might make it (and point) optional, since 95% of the time new shooter won't matter.

The idea you mention for development testing might also be useful for an end user. I'm picturing that this would add only the unique scenarios for a given strategy. So they could either:

specify a few scenarios to test, then have the function check them explicitly
let the strategy run, function adds some rows to csv, then the user can manually check for accuracy

I like this idea a lot. I'm going to play around with it and might end up combining the two creation modes into one function with an option.
