
Implement the "imitation game algorithm" by McLennan-Tourky #273

Merged: 12 commits from compute_fp_lemke_howson merged into master on Dec 14, 2016

Conversation

@oyamad (Member) commented Nov 14, 2016

Based on #268.

Implement the fixed point computation algorithm by McLennan and Tourky "From Imitation Games to Kakutani."

  • It computes an approximate fixed point of a function that satisfies the assumptions of Brouwer's fixed point theorem, i.e., a continuous function that maps a compact convex set to itself.

  • For contraction mappings, the generated sequence is the same as the one produced by plain function iteration, so for those functions there is no improvement (a quick check follows below).
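
A minimal sketch of the second point (it assumes only the method keyword introduced in this PR):

import quantecon as qe

# A simple contraction with unique fixed point 0
f = lambda x: 0.5 * x

x_iter = qe.compute_fixed_point(f, 1.0)
x_imit = qe.compute_fixed_point(f, 1.0, method='imitation_game')

# Per the note above, the generated sequences coincide for contractions,
# so the two returned values should agree.
print(x_iter, x_imit)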

An example from McLennan and Tourky (Example 4.6):

import numpy as np
import quantecon as qe

def f(x, M, c):
    return -np.arctan(np.dot(M, (x - c)**3)) + c

n = 500
tol = 1e-5
max_iter = 200
c = np.random.standard_normal(n)
np.random.seed(0)
M = np.abs(np.random.standard_normal(size=(n, n)))
x_init = (np.random.rand(n) - 1/2)*np.pi + c

x_star = qe.compute_fixed_point(f, x_init, tol, max_iter=max_iter,
                                method='imitation_game', M=M, c=c)
Iteration    Distance       Elapsed (seconds)
---------------------------------------------
5            1.858e+00      6.937e-01         
10           1.186e-02      6.951e-01         
12           3.435e-08      6.956e-01         
Converged in 12 steps

(It runs faster on the second and subsequent runs.)

With the default method 'iteration':

x_star = qe.compute_fixed_point(f, x_init, tol, max_iter=max_iter,
                                print_skip=50, M=M, c=c)
Iteration    Distance       Elapsed (seconds)
---------------------------------------------
50           3.140e+00      7.266e-03         
100          3.140e+00      1.363e-02         
150          3.140e+00      1.989e-02         
200          3.140e+00      2.533e-02         
/usr/local/lib/python3.5/site-packages/quantecon/compute_fp.py:146: RuntimeWarning: max_iter attained before convergence in compute_fixed_point
  warnings.warn(_non_convergence_msg, RuntimeWarning)

@coveralls commented

Coverage decreased (-4.009%) to 82.727% when pulling 08d0264 on compute_fp_lemke_howson into 120ab73 on master.

@coveralls commented

Coverage decreased (-4.009%) to 82.727% when pulling c73ab02 on compute_fp_lemke_howson into 120ab73 on master.

@oyamad (Member, Author) commented Nov 14, 2016

Also add verbose levels as in QuantEcon/QuantEcon.jl#144:

In [1]: import quantecon as qe

In [2]: f = lambda x: 0.5 * x

In [3]: qe.compute_fixed_point(f, 1.0, verbose=0)
Out[3]: 0.0009765625

In [4]: qe.compute_fixed_point(f, 1.0, verbose=1)
Out[4]: 0.0009765625

In [5]: qe.compute_fixed_point(f, 1.0, verbose=2)
Iteration    Distance       Elapsed (seconds)
---------------------------------------------
5            3.125e-02      1.440e-04         
10           9.766e-04      2.570e-04         
Converged in 10 steps
Out[5]: 0.0009765625

In [6]: qe.compute_fixed_point(f, 1.0, verbose=1, max_iter=5)
/usr/local/lib/python3.5/site-packages/quantecon/compute_fp.py:146: RuntimeWarning: max_iter attained before convergence in compute_fixed_point
  warnings.warn(_non_convergence_msg, RuntimeWarning)
Out[6]: 0.03125

In [7]: qe.compute_fixed_point(f, 1.0, verbose=0, max_iter=5)
Out[7]: 0.03125
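
To summarize what the examples above show: verbose=0 prints nothing (not even the non-convergence warning), verbose=1 prints warnings only, and verbose=2 additionally prints the iteration table.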

@oyamad (Member, Author) commented Nov 14, 2016

And do not raise a warning when iterate == max_iter but error <= error_tol:

In [8]: qe.compute_fixed_point(f, 1.0, error_tol=1e-4, max_iter=14)
Iteration    Distance       Elapsed (seconds)
---------------------------------------------
5            3.125e-02      1.431e-04         
10           9.766e-04      2.480e-04         
14           6.104e-05      3.309e-04         
Converged in 14 steps
Out[8]: 6.103515625e-05

@oyamad (Member, Author) commented Nov 14, 2016

Issue #267 still remains with the default method 'iteration':

In [1]: import quantecon as qe

In [2]: f = lambda x: 2 * x

In [3]: x0 = 0.09

In [4]: error_tol = 0.1

In [5]: x_star = qe.compute_fixed_point(f, x0, error_tol, verbose=2)
Iteration    Distance       Elapsed (seconds)
---------------------------------------------
1            9.000e-02      1.061e-04         
Converged in 1 steps

In [6]: abs(f(x_star) - x_star) <= error_tol
Out[6]: False

In [7]: x_star = qe.compute_fixed_point(f, x0, error_tol, verbose=2,
   ...:                                 method='imitation_game')
Iteration    Distance       Elapsed (seconds)
---------------------------------------------
1            9.000e-02      8.583e-05         
Converged in 1 steps

In [8]: abs(f(x_star) - x_star) <= error_tol
Out[8]: True
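
Whichever method is used, the returned point can always be checked directly against the residual |f(x) - x|. A minimal, hypothetical helper (not part of the library) along these lines:

import numpy as np

def is_approx_fixed_point(f, x, tol):
    # Check the residual at the returned point itself, rather than the
    # distance between successive iterates.
    return np.max(np.abs(f(x) - x)) <= tol

For instance, is_approx_fixed_point(f, x_star, error_tol) reproduces the checks in In [6] and In [8] above.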

@coveralls commented

Coverage decreased (-3.8%) to 81.562% when pulling e35b46f on compute_fp_lemke_howson into e0bdaa7 on master.

@coveralls commented

Coverage decreased (-3.003%) to 82.372% when pulling cf4ceab on compute_fp_lemke_howson into e0bdaa7 on master.

@coveralls commented

Coverage decreased (-2.9%) to 82.461% when pulling 5f27eb9 on compute_fp_lemke_howson into e0bdaa7 on master.

@coveralls commented

Coverage decreased (-2.9%) to 82.461% when pulling 1f50ffa on compute_fp_lemke_howson into e0bdaa7 on master.

@oyamad (Member, Author) commented Nov 17, 2016

As a straightforward application of the imitation game algorithm to the best response correspondence, I added a function mclennan_tourky which computes an (approximate) Nash equilibrium of an N-player normal form game.

Example:

In [1]: import quantecon as qe

In [2]: g = qe.game_theory.random_game((2, 2, 2), random_state=111111)

In [3]: print(g)
3-player NormalFormGame with payoff profile array:
[[[[ 0.57640636,  0.71718743,  0.3026873 ],
   [ 0.39465697,  0.65071946,  0.22605961]],
  [[ 0.52818592,  0.38285955,  0.04301255],
   [ 0.09786016,  0.93869429,  0.32395105]]],

 [[[ 0.82034185,  0.44532268,  0.41417313],
   [ 0.90843274,  0.94248307,  0.27377222]],
  [[ 0.00997283,  0.8286954 ,  0.50344024],
   [ 0.70533234,  0.54602381,  0.35982827]]]]

In [4]: epsilon = 1e-3

In [5]: NE = qe.game_theory.mclennan_tourky(g, epsilon=epsilon)

In [6]: NE
Out[6]: 
(array([ 0.3473267,  0.6526733]),
 array([ 0.04723827,  0.95276173]),
 array([ 0.55683899,  0.44316101]))

In [7]: g.is_nash(NE, tol=epsilon)
Out[7]: True

@coveralls commented

Coverage decreased (-0.9%) to 82.503% when pulling 9463b33 on compute_fp_lemke_howson into 9004b35 on master.

@mmcky (Contributor) commented Dec 8, 2016

Hi @oyamad. Is this PR ready for review?

@oyamad (Member, Author) commented Dec 8, 2016

@mmcky Yes, reviewing would be appreciated.

@jstac (Contributor) commented Dec 14, 2016

@oyamad @mmcky This looks really nice to me. I like the way that the fixed point routine is built into the functions in compute_fp.py. The code is clean and the tests are good.

I showed Rabee and Andy the code and they didn't provide detailed feedback but both seemed happy.

I think this is good to merge.

@mmcky merged commit 75e12dc into master on Dec 14, 2016
@mmcky deleted the compute_fp_lemke_howson branch on December 14, 2016, 21:55
@oyamad (Member, Author) commented Dec 16, 2016

@mmcky @jstac Thanks!

Do you guys have any opinion about #273 (comment) (or #267)? This is a minor issue, but the two methods 'iteration' and 'imitation_game' are not consistent on it.
