
Algorithms comparison visualization #57

Open · 7 of 8 tasks
litalmason opened this issue Dec 24, 2023 · 0 comments

litalmason commented Dec 24, 2023

Description

Create a table and graphs from the JSON data in the Qujata researcher portal.
Implement them according to the metrics and parameters that are already available.

Summary table: Algorithm | Iterations | Message size | CPU | Memory | Error rate | Bytes throughput | Messages throughput | TLS handshake time
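
A minimal sketch of the row model this table could render, assuming the field names below (they are illustrative, not the portal's actual JSON keys):

```typescript
// Hypothetical row model for the summary table; field names are assumptions,
// not the portal's actual schema.
interface SummaryRow {
  algorithm: string;          // e.g. "bikel1"
  iterations: number;         // e.g. 1000
  messageSizeBytes: number;   // e.g. 1024
  cpuAvg: number;             // combined server + client CPU
  memoryAvg: number;          // combined server + client memory
  errorRate: number;          // failed handshakes / total
  bytesThroughput: number;    // bytes per second
  messagesThroughput: number; // messages per second
  tlsHandshakeTimeMs: number; // average TLS handshake time
}
```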

Bar chart graphs:

X = Algorithm + Iteration + Message size, Y = Metric.

Total number of graphs = number of metrics, i.e. count(testRuns[x].results)
Each combination of algorithm + iterations + message size is represented as a separate bar in the graph (see the sketch after the example below).
Example (values in parentheses are iterations and message size):

Algorithm1.cpu = 25.5  (1000, 20) 
Algorithm1.cpu = 40  (1000, 1024) 
Algorithm2.cpu = 30.5  (1000, 20)
Algorithm2.cpu = 50  (1000, 1024) 
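
A minimal sketch of how the bars could be derived from the test-run JSON, assuming a hypothetical TestRun shape (the field names are assumptions, not the portal's actual schema):

```typescript
// Hypothetical shape of a single test run; field names are assumptions.
interface TestRun {
  algorithm: string;               // e.g. "bikel1"
  iterations: number;              // e.g. 1000
  messageSizeBytes: number;        // e.g. 1024
  results: Record<string, number>; // metric name -> value, e.g. { cpu: 25.5 }
}

// One bar per (algorithm, iterations, message size) combination for a given
// metric, matching the example above.
function toBarSeries(testRuns: TestRun[], metric: string) {
  return testRuns
    .filter((run) => metric in run.results)
    .map((run) => ({
      label: `${run.algorithm} (${run.iterations}, ${run.messageSizeBytes})`,
      value: run.results[metric],
    }));
}
```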

Multi-series graphs:
Line per algorithm, Y = Metric (e.g. Avg CPU), X = message size
Line per algorithm, Y = Metric (e.g. Avg CPU), X = number of iterations
Total graph combinations = number of metrics * 2 (one for message size + one for iterations)
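
A similar sketch for the multi-series line charts, grouping runs into one line per algorithm with message size on the X axis (reusing the hypothetical TestRun shape from the bar chart sketch above):

```typescript
// One line series per algorithm: X = message size, Y = the chosen metric.
// TestRun is the hypothetical shape sketched above.
function toLineSeriesByMessageSize(
  testRuns: TestRun[],
  metric: string,
): Map<string, { x: number; y: number }[]> {
  const series = new Map<string, { x: number; y: number }[]>();
  for (const run of testRuns) {
    if (!(metric in run.results)) continue;
    const points = series.get(run.algorithm) ?? [];
    points.push({ x: run.messageSizeBytes, y: run.results[metric] });
    series.set(run.algorithm, points);
  }
  // Sort each line's points so the chart draws left to right.
  for (const points of series.values()) points.sort((a, b) => a.x - b.x);
  return series;
}
```

The iterations variant would be the same grouping with run.iterations on the X axis instead of run.messageSizeBytes.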

Filters:
Algorithm(s) by name
Optional: Algorithm(s) by family (e.g. all BIKELs, all FRODOs)
Optional: Algorithm(s) by NIST round (e.g. R1, R3, R5)
Number of iterations
Message size
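
A minimal sketch of a filter model and predicate that could be shared by the table and all graphs; the field names are assumptions:

```typescript
// Hypothetical filter state matching the list above; every field is optional,
// so an empty filter keeps all test runs.
interface Filters {
  algorithms?: string[];   // by exact name
  families?: string[];     // e.g. "BIKE", "FRODO" (optional for now)
  nistRounds?: string[];   // e.g. "R1", "R3", "R5" (optional for now)
  iterations?: number[];
  messageSizes?: number[];
}

// A single predicate applied to the table rows and to every graph's input,
// so all views stay consistent. Family / NIST round filtering is left out
// here since the issue marks it as optional.
function matchesFilters(run: TestRun, f: Filters): boolean {
  return (
    (!f.algorithms?.length || f.algorithms.includes(run.algorithm)) &&
    (!f.iterations?.length || f.iterations.includes(run.iterations)) &&
    (!f.messageSizes?.length || f.messageSizes.includes(run.messageSizeBytes))
  );
}
```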

Acceptance Criteria

  1. Summary table with a row for each test run
  2. 4 graphs, in two rows, per the design in Figma (external contributors, please reach out if you would like to take this task)
    a. Each graph should allow users to define the X and Y axes, as well as the chart type (currently only two options: line or bar chart)
    b. See the description above for more details
    c. Defaults should be (see the config sketch after this list):
    1. CPU (combined for Server and Client) vs. number of iterations
    2. CPU (combined for Server and Client) vs. message size
    3. Memory (combined for Server and Client) vs. number of iterations
    4. Memory (combined for Server and Client) vs. message size
  3. Create filters that apply to the table as well as all graphs
    a. The filters button should always be visible
    b. Filter by the following: Algorithm(s) by name, Algorithm(s) by family (e.g. all BIKELs, all FRODOs; optional for now), Algorithm(s) by NIST round (e.g. R1, R3, R5; optional for now), Operating system, Number of iterations, Message size
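
A minimal sketch of how the four default graphs from 2c could be represented as configuration; the metric and axis identifiers are assumptions, and defaulting to line charts is an assumption as well:

```typescript
// Hypothetical configuration for the four default graphs in acceptance
// criterion 2c; names and the 'line' default are assumptions.
type ChartType = 'line' | 'bar';

interface GraphConfig {
  metric: 'cpu' | 'memory';            // combined for server and client
  xAxis: 'iterations' | 'messageSize';
  chartType: ChartType;
}

const defaultGraphs: GraphConfig[] = [
  { metric: 'cpu', xAxis: 'iterations', chartType: 'line' },
  { metric: 'cpu', xAxis: 'messageSize', chartType: 'line' },
  { metric: 'memory', xAxis: 'iterations', chartType: 'line' },
  { metric: 'memory', xAxis: 'messageSize', chartType: 'line' },
];
```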

Tasks

  • Summary table: Algorithm | Iterations | Message size | CPU | Memory | Error rate | Bytes throughput | Messages throughput | TLS handshake time
  • Bar chart graphs: X = Algorithm, Y = Metric
  • Multi series line chart graphs: Line per algorithm, Y = Metric (e.g. CPU), X = Message size
  • Filters: should apply to the table and all graphs alike; see details in the acceptance criteria
  • Data visualization - backend support
  • [Homepage Tab] Experiment (Test Suite) Page
  • [All-Experiments Tab] All Experiments Page
  • [Homepage Tab] Latest Experiment - history view
litalmason added the feature label Dec 24, 2023
litalmason added this to the 1.1.0 milestone Dec 24, 2023
nganani self-assigned this Jan 3, 2024