This repository was archived by the owner on Mar 11, 2021. It is now read-only.

Dataset Tensorboard metrics #898

Open

Description

@sethtroisi

We already track a number of metrics, grouped roughly as follows (see the logging sketch after the list):

  • metrics about the model weights
    • l2_cost
  • model accuracy on constant data
    • value_cost on pro holdout
  • model accuracy on recent RL data
    • policy_cost, policy_entropy, value_cost (difference between target and output)
    • value_confidence (the average value output)
  • RL data
    • policy target top 1 (what percent of readouts went to the top move?)
    • average_winrate_observed
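
For reference, a minimal sketch (not the actual minigo training code) of how these per-dataset scalars could be written to TensorBoard with the TF2 summary API. The metric names mirror the list above; `log_dataset_metrics` and the log directory are hypothetical:

```python
import tensorflow as tf

# One writer per dataset (e.g. pro holdout vs. recent RL data).
writer = tf.summary.create_file_writer("logs/rl_holdout")

def log_dataset_metrics(step, l2_cost, policy_cost, policy_entropy,
                        value_cost, value_confidence):
    """Write the per-dataset scalars listed above for a given training step."""
    with writer.as_default():
        tf.summary.scalar("l2_cost", l2_cost, step=step)
        tf.summary.scalar("policy_cost", policy_cost, step=step)
        tf.summary.scalar("policy_entropy", policy_entropy, step=step)
        tf.summary.scalar("value_cost", value_cost, step=step)
        tf.summary.scalar("value_confidence", value_confidence, step=step)
```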

I think we should add the following (a rough sketch of how they could be computed is below the list):

  • value_cost_bias = value_cost(target_value, average_winrate_observed)
  • average_move = average move number of the positions (requires adding a field to the tf.Examples)
  • search_q_error = |target_value - search_q| (requires adding a field to the tf.Examples)
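
A rough sketch of how the three proposed metrics could be computed, assuming `value_cost` keeps its usual squared-error form and that `move_number` and `search_q` get added as new float features in the tf.Examples (all names here are placeholders, not existing minigo code):

```python
import tensorflow as tf

def value_cost(target, prediction):
    # Assumed to match the existing value_cost (mean squared error).
    return tf.reduce_mean(tf.square(target - prediction))

def proposed_metrics(target_value, average_winrate_observed, move_number, search_q):
    return {
        # Bias between the value targets and the average observed winrate.
        "value_cost_bias": value_cost(target_value, average_winrate_observed),
        # Average move number of the positions in the batch
        # (needs a new move_number feature in the tf.Examples).
        "average_move": tf.reduce_mean(tf.cast(move_number, tf.float32)),
        # |target_value - search_q|
        # (needs a new search_q feature in the tf.Examples).
        "search_q_error": tf.reduce_mean(tf.abs(target_value - search_q)),
    }
```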

Anything else that would be interesting?
