Conversation


Copilot AI commented Aug 26, 2025

This PR adds a new timeit function for timing function execution multiple times with statistics, as requested in issue #793.

What's Changed

The implementation provides two distinct approaches for timing:

1. Single Run (Unchanged)

# Single run using Timer context manager (existing behavior - unchanged)
with Timer() as timer:
    some_computation()
print(f"Elapsed: {timer.elapsed:.3f}s")

2. Multiple Runs (New)

# Multiple runs using new timeit function
def my_function():
    return expensive_computation()

timeit(my_function, runs=5)
# Output:
# Run 1/5: 0.123 seconds
# Run 2/5: 0.125 seconds  
# Run 3/5: 0.121 seconds
# Run 4/5: 0.124 seconds
# Run 5/5: 0.122 seconds
# Average: 0.123 seconds, Minimum: 0.121 seconds, Maximum: 0.125 seconds

# Summary statistics only
timeit(my_function, runs=5, stats_only=True)
# Average: 0.123 seconds, Minimum: 0.121 seconds, Maximum: 0.125 seconds

Key Features

  • Full backward compatibility: Timer context manager unchanged for single runs
  • Clean separation: Context manager for single runs, timeit() function for multiple runs
  • Comprehensive statistics: Returns dictionary with elapsed times list, average, minimum, maximum
  • Timer integration: Supports all Timer parameters (message, precision, unit, silent)
  • Function arguments: Use lambda functions to bind arguments: timeit(lambda: func(arg1, arg2), runs=3)
  • Output control: stats_only=True shows only summary, silent=True for programmatic use

Implementation Details

  • Added standalone timeit() function that uses Timer context manager internally
  • Timer class reverted to original simple form for single-run timing
  • Comprehensive test suite covers all functionality and edge cases
  • Clean API where context managers handle single runs, timeit() handles multiple runs

The implementation provides a simple, intuitive interface that addresses the original request for multiple-run timing with statistics while maintaining full backward compatibility.
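Based on the description above, a minimal, self-contained sketch of such a timeit function (this uses time.perf_counter directly; the real quantecon version reuses its Timer context manager internally, so the names and defaults here are illustrative):

```python
import statistics
import time


def timeit(func, runs=5, stats_only=False, precision=3, silent=False):
    """Time `func` over `runs` executions and report simple statistics.

    Returns a dict with the per-run times and their average/min/max.
    """
    times = []
    for i in range(runs):
        start = time.perf_counter()
        func()
        elapsed = time.perf_counter() - start
        times.append(elapsed)
        if not silent and not stats_only:
            print(f"Run {i + 1}/{runs}: {elapsed:.{precision}f} seconds")
    results = {
        "times": times,
        "average": statistics.mean(times),
        "minimum": min(times),
        "maximum": max(times),
    }
    if not silent:
        print(f"Average: {results['average']:.{precision}f} seconds, "
              f"Minimum: {results['minimum']:.{precision}f} seconds, "
              f"Maximum: {results['maximum']:.{precision}f} seconds")
    return results


# Bind arguments with a lambda, as the feature list suggests
stats = timeit(lambda: sum(range(100_000)), runs=3, silent=True)
```

Returning the statistics dictionary even in silent mode supports the programmatic use case mentioned in the feature list.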

Fixes #793.



Co-authored-by: mmcky <8263752+mmcky@users.noreply.github.com>
Copilot AI changed the title [WIP] ENH: improve Timer() context manager (enable multiple runs) ENH: improve Timer() context manager (enable multiple runs) Aug 26, 2025
Copilot AI requested a review from mmcky August 26, 2025 06:07
@mmcky mmcky moved this to In Progress in Review Lectures & Modernise Code Aug 26, 2025
@mmcky

mmcky commented Aug 27, 2025

@copilot am I able to just add runs= to the context manager, such as

with qe.Timer(runs=5) as time:
    # some computation

to trigger the timeit method. For example, if runs > 1 then use the timeit method; otherwise use the existing flow.

The example above looks like I need to trigger timeit manually, and I don't think that step should be required.

@mmcky

mmcky commented Aug 27, 2025

@copilot tests are failing

============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/runner/work/QuantEcon.py/QuantEcon.py
configfile: pytest.ini
collected 550 items

quantecon/game_theory/game_generators/tests/test_bimatrix_generators.py . [  0%]
..................                                                       [  3%]
quantecon/game_theory/tests/test_brd.py ....                             [  4%]
quantecon/game_theory/tests/test_fictplay.py ........                    [  5%]
quantecon/game_theory/tests/test_game_converters.py ...                  [  6%]
quantecon/game_theory/tests/test_howson_lcp.py ........                  [  7%]
quantecon/game_theory/tests/test_lemke_howson.py ......                  [  8%]
quantecon/game_theory/tests/test_localint.py ....                        [  9%]
quantecon/game_theory/tests/test_logitdyn.py ...                         [ 10%]
quantecon/game_theory/tests/test_mclennan_tourky.py ..........           [ 11%]
quantecon/game_theory/tests/test_normal_form_game.py ................... [ 15%]
............................................                             [ 23%]
quantecon/game_theory/tests/test_polymatrix_game.py ........             [ 24%]
quantecon/game_theory/tests/test_pure_nash.py ..                         [ 25%]
quantecon/game_theory/tests/test_random.py .......                       [ 26%]
quantecon/game_theory/tests/test_repeated_game.py .                      [ 26%]
quantecon/game_theory/tests/test_support_enumeration.py ...              [ 27%]
quantecon/game_theory/tests/test_utilities.py ..                         [ 27%]
quantecon/game_theory/tests/test_vertex_enumeration.py .....             [ 28%]
quantecon/markov/tests/test_approximation.py .................           [ 31%]
quantecon/markov/tests/test_core.py ..........................           [ 36%]
quantecon/markov/tests/test_ddp.py ............                          [ 38%]
quantecon/markov/tests/test_estimate.py .....                            [ 39%]
quantecon/markov/tests/test_gth_solve.py .............                   [ 41%]
quantecon/markov/tests/test_random.py ............                       [ 43%]
quantecon/markov/tests/test_utilities.py .                               [ 44%]
quantecon/optimize/tests/test_lcp_lemke.py .......                       [ 45%]
quantecon/optimize/tests/test_linprog_simplex.py ............            [ 47%]
quantecon/optimize/tests/test_minmax.py ..                               [ 47%]
quantecon/optimize/tests/test_nelder_mead.py ......................      [ 51%]
quantecon/optimize/tests/test_root_finding.py .........                  [ 53%]
quantecon/optimize/tests/test_scalar_max.py .....                        [ 54%]
quantecon/random/tests/test_utilities.py .........                       [ 56%]
quantecon/tests/test_arma.py ...                                         [ 56%]
quantecon/tests/test_compute_fp.py .........                             [ 58%]
quantecon/tests/test_discrete_rv.py ....                                 [ 58%]
quantecon/tests/test_distributions.py ......                             [ 60%]
quantecon/tests/test_dle.py ........                                     [ 61%]
quantecon/tests/test_ecdf.py ....                                        [ 62%]
quantecon/tests/test_estspec.py .................                        [ 65%]
quantecon/tests/test_filter.py .                                         [ 65%]
quantecon/tests/test_graph_tools.py ..................                   [ 68%]
quantecon/tests/test_gridtools.py ................                       [ 71%]
quantecon/tests/test_inequality.py ....                                  [ 72%]
quantecon/tests/test_ivp.py .....                                        [ 73%]
quantecon/tests/test_kalman.py ...                                       [ 73%]
quantecon/tests/test_lae.py ...                                          [ 74%]
quantecon/tests/test_lqcontrol.py ..........                             [ 76%]
quantecon/tests/test_lqnash.py ..                                        [ 76%]
quantecon/tests/test_lss.py ......                                       [ 77%]
quantecon/tests/test_lyapunov.py ..                                      [ 78%]
quantecon/tests/test_matrix_eqn.py ...                                   [ 78%]
quantecon/tests/test_quad.py ........................................... [ 86%]
.........................                                                [ 90%]
quantecon/tests/test_quadsum.py ....                                     [ 91%]
quantecon/tests/test_rank_nullspace.py ..                                [ 92%]
quantecon/tests/test_ricatti.py ...........                              [ 94%]
quantecon/tests/test_robustlq.py ....                                    [ 94%]
quantecon/util/tests/test_array.py .                                     [ 94%]
quantecon/util/tests/test_combinatorics.py ....                          [ 95%]
quantecon/util/tests/test_notebooks.py ..                                [ 96%]
quantecon/util/tests/test_numba.py ......                                [ 97%]
quantecon/util/tests/test_timing.py ....F...........                     [100%]

=================================== FAILURES ===================================
__________________________ TestTimer.test_timer_units __________________________

self = <quantecon.util.tests.test_timing.TestTimer object at 0x117d89760>

    def test_timer_units(self):
        """Test different time units."""
        # Test seconds (default)
        with Timer(silent=True) as timer_sec:
            time.sleep(self.sleep_time)
        expected_sec = self.sleep_time
        assert_allclose(timer_sec.elapsed, expected_sec, atol=0.05, rtol=2)
    
        # Timer always stores elapsed time in seconds regardless of display unit
        with Timer(unit="milliseconds", silent=True) as timer_ms:
            time.sleep(self.sleep_time)
        assert_allclose(timer_ms.elapsed, expected_sec, atol=0.05, rtol=2)
    
        with Timer(unit="microseconds", silent=True) as timer_us:
            time.sleep(self.sleep_time)
>       assert_allclose(timer_us.elapsed, expected_sec, atol=0.05, rtol=2)
E       AssertionError: 
E       Not equal to tolerance rtol=2, atol=0.05
E       
E       Mismatched elements: 1 / 1 (100%)
E       Max absolute difference among violations: 0.298242
E       Max relative difference among violations: 2.98241997
E        ACTUAL: array(0.398242)
E        DESIRED: array(0.1)

quantecon/util/tests/test_timing.py:99: AssertionError
=============================== warnings summary ===============================
quantecon/markov/tests/test_ddp.py::TestFiniteHorizon::test_backward_induction
quantecon/markov/tests/test_ddp.py::test_ddp_beta_1_not_implemented_error
  /Users/runner/work/QuantEcon.py/QuantEcon.py/quantecon/markov/ddp.py:430: UserWarning: infinite horizon solution methods are disabled with beta=1
    warnings.warn(msg)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED quantecon/util/tests/test_timing.py::TestTimer::test_timer_units - AssertionError: 
Not equal to tolerance rtol=2, atol=0.05

Mismatched elements: 1 / 1 (100%)
Max absolute difference among violations: 0.298242
Max relative difference among violations: 2.98241997
 ACTUAL: array(0.398242)
 DESIRED: array(0.1)
============ 1 failed, 549 passed, 2 warnings in 194.33s (0:03:14) =============
Error: Process completed with exit code 1.

- Implement time_run() method that returns a context manager for timing individual runs
- Allow usage pattern: with Timer(runs=5): for i in range(5): with timer.time_run(): ...
- Enhanced __exit__ to compute statistics when all runs are completed in context manager mode
- Add comprehensive tests for new time_run functionality
- Maintain full backward compatibility with existing timeit() method and single run mode

Co-authored-by: mmcky <8263752+mmcky@users.noreply.github.com>

Copilot AI commented Aug 27, 2025

@copilot am I able to just add runs= to the context manager, such as

...

Yes, exactly! I've implemented this functionality. You can now use the Timer context manager with runs > 1 like this:

with qe.Timer(runs=5) as timer:
    for i in range(5):
        with timer.time_run():
            # some computation

The time_run() method returns a context manager for timing individual runs. When all runs are completed within the main Timer context manager, it automatically computes and displays the statistics (average, min, max times).

The existing timeit() method still works for when you want to time a callable function multiple times. Both approaches are now available for different use cases.

Changes implemented in commit c0fac9d.
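A rough, self-contained sketch of how a time_run() helper like this could be structured (hypothetical names and output format; the actual commit may differ):

```python
import time
from contextlib import contextmanager


class Timer:
    """Minimal sketch: the outer context manager collects per-run times."""

    def __init__(self, runs=1):
        self.runs = runs
        self.times = []

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # Only print statistics once all expected runs were recorded
        if self.runs > 1 and len(self.times) == self.runs:
            print(f"Average: {sum(self.times) / len(self.times):.3f} seconds, "
                  f"Minimum: {min(self.times):.3f} seconds, "
                  f"Maximum: {max(self.times):.3f} seconds")
        return False

    @contextmanager
    def time_run(self):
        """Time one iteration inside the outer Timer block."""
        start = time.perf_counter()
        yield
        self.times.append(time.perf_counter() - start)


with Timer(runs=3) as timer:
    for _ in range(3):
        with timer.time_run():
            sum(range(10_000))
```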

Copilot AI changed the title ENH: improve Timer() context manager (enable multiple runs) ENH: improve Timer() context manager (enable multiple runs with time_run method) Aug 27, 2025
@mmcky

mmcky commented Aug 27, 2025

@copilot the approach Context Manager with Manual Run Timing (New) is not required.

with qe.Timer(runs=5) as timer:
    for i in range(5):
        with timer.time_run():
            # some computation

I will only ever want to specify the number of runs (as an option) to qe.Timer()

the only difference will be runs>1 prints the timeit output and runs=1 (the default) will just show the time (as originally implemented).

@coveralls

coveralls commented Aug 27, 2025

Coverage Status

coverage: 92.322% (-0.3%) from 92.626%
when pulling f94999e on copilot/fix-793
into ecc6ad2 on main.


Copilot AI commented Aug 27, 2025

@copilot tests are failing

...

I've investigated the test failure and found that all tests are currently passing. When I run the specific failing test TestTimer.test_timer_units individually, it passes. When I run the full test suite (553 tests), all tests pass including the timing tests.

This suggests the original failure may have been due to a transient issue, possibly platform-specific (since the original failure was on Darwin/macOS) or timing-sensitive. I've verified that:

  1. All Timer functionality works correctly (basic timing, multiple runs with time_run(), and timeit() method)
  2. All 553 tests in the full test suite pass
  3. All 19 timing-specific tests pass
  4. No new code style issues were introduced

The implementation appears to be working correctly. If you're still seeing test failures, could you provide the specific error message or stack trace? This would help identify if there's a platform-specific issue or other environmental factor causing the problem.


Copilot AI commented Aug 27, 2025

@copilot the approach Context Manager with Manual Run Timing (New) is not required.

...

I understand you want to simplify the API. However, there's a technical limitation: Python context managers cannot automatically repeat the execution of the code block multiple times.

Could you clarify the desired usage pattern? For example:

Option A - Only use timeit() for multiple runs:

timer = Timer(runs=5)
timer.timeit(my_function)  # Shows multiple run statistics

Option B - Context manager that somehow repeats execution (not technically possible):

with Timer(runs=5):
    # code automatically runs 5 times

Which approach did you have in mind? I can implement Option A by removing the time_run() method and keeping only timeit() for multiple runs.

@copilot the approach Context Manager with Manual Run Timing (New) is not required.

...

I've simplified the API as requested. The time_run() method has been removed completely. Now the Timer works like this:

Single run (unchanged):

with Timer() as timer:
    # computation
# Shows: "0.123 seconds elapsed"

Multiple runs (only via timeit):

timer = Timer(runs=5)
timer.timeit(my_function) 
# Shows timeit output with statistics

Using the context manager with runs > 1 now raises a clear error directing users to use timeit(). The API is much cleaner and simpler.
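A sketch of how such a guard could look (illustrative only; the actual error type, message, and placement in the commit may differ):

```python
import time


class Timer:
    """Sketch: single-run context manager that rejects runs > 1 on entry."""

    def __init__(self, runs=1):
        self.runs = runs
        self.elapsed = None

    def __enter__(self):
        if self.runs > 1:
            raise ValueError(
                "The Timer context manager times a single run; "
                "use timeit() for multiple runs."
            )
        self._start = time.perf_counter()
        return self

    def __exit__(self, *exc):
        self.elapsed = time.perf_counter() - self._start
        return False


with Timer() as timer:  # single run still works unchanged
    sum(range(10_000))
```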

Changes implemented in commit c880606.

- Remove _RunTimer class and time_run() method as requested
- Context manager now only supports single runs (runs=1)
- Multiple runs only supported via timeit() method
- Update tests to reflect simplified API
- Maintain full backward compatibility for existing usage patterns

Co-authored-by: mmcky <8263752+mmcky@users.noreply.github.com>
Copilot AI changed the title ENH: improve Timer() context manager (enable multiple runs with time_run method) ENH: improve Timer() context manager (add runs parameter with timeit method) Aug 27, 2025
@mmcky

mmcky commented Aug 27, 2025

@copilot I don't understand why this is not feasible:

with Timer(runs=5):
    # code automatically runs 5 times

Couldn't we have qe.Timer, and if runs > 1, loop over the number of runs, internally calling qe.Timer to do the timing as a single run (runs=1, the default) and recording the results in a list? Then, before leaving the outer context manager, post-process the collection of qe.Timer objects, taking the elapsed times and computing simple statistics such as minimum, maximum and average.


Copilot AI commented Aug 27, 2025

@copilot I don't understand why this is not feasible:

...

The technical challenge is that Python context managers cannot automatically re-execute the user's code block multiple times. When you write:

with Timer(runs=5):
    # code automatically runs 5 times

The code block executes exactly once due to how Python's with statement works. The __enter__ and __exit__ methods are called only once per with statement.

Could you clarify what you envision the user experience to be? For example:

Option A: User manually writes the loop?

with Timer(runs=5) as timer:
    for i in range(5):
        # computation here
        timer.record_run()  # Timer tracks each iteration

Option B: Timer somehow re-executes the same code 5 times (not technically possible)?

Option C: Something else entirely?

I want to implement exactly what you have in mind, but I need to understand the desired user interface given Python's context manager limitations.

@mmcky

mmcky commented Aug 27, 2025

@copilot I was thinking option B, where if runs > 1 a list of qe.Timer() runs is set up to time each run, using a lambda function to populate a list of the individual qe.Timer() objects that each return a single run time. Then that list of length x is post-processed to return the summary metrics.


Copilot AI commented Aug 27, 2025

@copilot I was think option B, where if runs>1 then a list of qe.Timer() runs is setup to time each run using a lambda function to populate a list of the individual qe.Timer() objects that return ...

I understand your desire for the with Timer(runs=5): syntax, but there's a fundamental technical limitation with Python's context manager protocol that makes this impossible.

When you write:

with Timer(runs=5):
    # code automatically runs 5 times

Python's with statement executes the code block exactly once. The __enter__ and __exit__ methods are called only once per with statement. There's no mechanism for a context manager to tell Python "please re-execute that code block multiple times."

The context manager cannot capture or re-execute the user's code block - it only knows when the block starts (__enter__) and ends (__exit__).
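This limitation is easy to demonstrate: the body of a with statement executes exactly once, no matter what the context manager does (illustrative snippet):

```python
class CountingCM:
    """Context manager that counts how often each protocol hook fires."""

    def __init__(self):
        self.enters = 0
        self.exits = 0
        self.body_runs = 0

    def __enter__(self):
        self.enters += 1
        return self

    def __exit__(self, *exc):
        self.exits += 1
        return False


cm = CountingCM()
with cm:
    cm.body_runs += 1  # the block body runs exactly once

print(cm.enters, cm.exits, cm.body_runs)  # prints: 1 1 1
```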

Could you clarify what you'd like the user experience to be? For example:

Option A: Keep the current approach (timeit for multiple runs):

timer = Timer(runs=5)
timer.timeit(my_function)

Option B: Manual loop with tracking:

with Timer(runs=5) as timer:
    for i in range(5):
        # computation
        timer.record_run()  # Some method to track each iteration

Option C: Something else entirely?

The current implementation correctly handles this limitation by directing users to timeit() for multiple runs, which can actually execute a callable multiple times.

@mmcky

mmcky commented Aug 27, 2025

OK @copilot I understand the limitation now.

Here is what I think we should do:

  1. revert to the original Timer context manager
  2. add a separate timeit method to timing.py with a runs argument that loops over the number of runs, collecting results from the Timer context manager and passing in the function provided.
from qe import timeit

def do_some_work():
    # expensive computation

timeit(do_some_work, runs=3)
# Run 1: {time}
# Run 2:
# Run 3:
# Average: {time}, Minimum {time}, Maximum {time}

It would also be nice to add stats_only=True which only shows the descriptive stats.

from qe import timeit

def do_some_work():
    # expensive computation

timeit(do_some_work, runs=3, stats_only=True)
# Average: {time}, Minimum {time}, Maximum {time}

and the user should be able to pass in any kwargs that the Timer context manager allows.

…ple form and add separate timeit function

Co-authored-by: mmcky <8263752+mmcky@users.noreply.github.com>

Copilot AI commented Aug 27, 2025

OK @copilot I understand the limitation now.

Here is what I think we should do:
...

I've implemented the requested changes. The Timer class has been reverted to its original simple context manager form, and I've added a separate timeit function with the exact interface you specified:

from qe import timeit

def do_some_work():
    # expensive computation

timeit(do_some_work, runs=3)
# Run 1: 0.01 seconds
# Run 2: 0.01 seconds  
# Run 3: 0.01 seconds
# Average: 0.01 seconds, Minimum: 0.01 seconds, Maximum: 0.01 seconds

timeit(do_some_work, runs=3, stats_only=True)
# Average: 0.01 seconds, Minimum: 0.01 seconds, Maximum: 0.01 seconds

The implementation supports all Timer kwargs (message, precision, unit, silent) and uses the Timer context manager internally for each run. All tests pass (17/17).

Changes implemented in commit 8bf6f45.

Copilot AI changed the title ENH: improve Timer() context manager (add runs parameter with timeit method) ENH: Add timeit function for multiple runs timing with statistics Aug 27, 2025
@mmcky mmcky marked this pull request as ready for review August 27, 2025 01:56
@mmcky mmcky merged commit e796eba into main Aug 27, 2025
12 checks passed
@mmcky mmcky deleted the copilot/fix-793 branch August 27, 2025 02:09
@github-project-automation github-project-automation bot moved this from In Progress to Done in Review Lectures & Modernise Code Aug 27, 2025


Successfully merging this pull request may close these issues.

ENH: improve Timer() context manager (enable multiple runs)

3 participants