Commit b13feb0

Author: Kilian Fatras (committed)

added gaussian test

1 parent 436b228

2 files changed (+43 −11 lines)


README.md: 5 additions, 2 deletions
@@ -24,6 +24,7 @@ It provides the following solvers:
 * Wasserstein Discriminant Analysis [11] (requires autograd + pymanopt).
 * Gromov-Wasserstein distances and barycenters ([13] and regularized [12])
 * Stochastic Optimization for Large-scale Optimal Transport (semi-dual problem [18] and dual problem [19])
+* Non regularized free support Wasserstein barycenters [20].
 
 Some demonstrations (both in Python and Jupyter Notebook format) are available in the examples folder.
 
@@ -163,7 +164,7 @@ The contributors to this library are:
 * [Stanislas Chambon](https://slasnista.github.io/)
 * [Antoine Rolet](https://arolet.github.io/)
 * Erwan Vautier (Gromov-Wasserstein)
-* [Kilian Fatras](https://kilianfatras.github.io/) (Stochastic optimization)
+* [Kilian Fatras](https://kilianfatras.github.io/)
 
 This toolbox benefit a lot from open source research and we would like to thank the following persons for providing some code (in various languages):
 
@@ -222,6 +223,8 @@ You can also post bug reports and feature requests in Github issues. Make sure t
 
 [17] Blondel, M., Seguy, V., & Rolet, A. (2018). [Smooth and Sparse Optimal Transport](https://arxiv.org/abs/1710.06276). Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics (AISTATS).
 
-[18] Genevay, A., Cuturi, M., Peyré, G. & Bach, F. (2016) [Stochastic Optimization for Large-scale Optimal Transport](arXiv preprint arxiv:1605.08527). Advances in Neural Information Processing Systems (2016).
+[18] Genevay, A., Cuturi, M., Peyré, G. & Bach, F. (2016) [Stochastic Optimization for Large-scale Optimal Transport](https://arxiv.org/abs/1605.08527). Advances in Neural Information Processing Systems (2016).
 
 [19] Seguy, V., Bhushan Damodaran, B., Flamary, R., Courty, N., Rolet, A.& Blondel, M. [Large-scale Optimal Transport and Mapping Estimation](https://arxiv.org/pdf/1711.02283.pdf). International Conference on Learning Representation (2018)
+
+[20] Cuturi, M. and Doucet, A. (2014) [Fast Computation of Wasserstein Barycenters](http://proceedings.mlr.press/v32/cuturi14.html). International Conference in Machine Learning
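The new README bullet points to [20], whose fixed-point scheme alternates solving one exact OT problem per input measure and moving the barycenter atoms to barycentric projections of their mapped mass. A minimal usage sketch follows, assuming ot.lp.free_support_barycenter is the corresponding entry point; that function is not part of this diff, so treat the name and signature as an assumption.

# Hypothetical usage sketch for the free support barycenter feature listed
# above; ot.lp.free_support_barycenter is assumed, not shown in this commit.
import numpy as np
import ot

rng = np.random.RandomState(0)

# Two empirical 2D measures with uniform weights
measures_locations = [rng.randn(20, 2), rng.randn(30, 2) + 2.0]
measures_weights = [ot.utils.unif(20), ot.utils.unif(30)]

# The barycenter support is "free": k atoms whose locations are optimized,
# while their weights b stay fixed (here uniform)
k = 10
X_init = rng.randn(k, 2)
b = ot.utils.unif(k)

X = ot.lp.free_support_barycenter(measures_locations, measures_weights,
                                  X_init, b)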

test/test_stochastic.py: 38 additions, 9 deletions
@@ -137,8 +137,8 @@ def test_stochastic_dual_sgd():
     # test sgd
     n = 10
     reg = 1
-    numItermax = 300000
-    batch_size = 8
+    numItermax = 15000
+    batch_size = 10
     rng = np.random.RandomState(0)
 
     x = rng.randn(n, 2)
@@ -151,9 +151,9 @@ def test_stochastic_dual_sgd():
 
     # check constratints
     np.testing.assert_allclose(
-        u, G.sum(1), atol=1e-02)  # cf convergence sgd
+        u, G.sum(1), atol=1e-04)  # cf convergence sgd
     np.testing.assert_allclose(
-        u, G.sum(0), atol=1e-02)  # cf convergence sgd
+        u, G.sum(0), atol=1e-04)  # cf convergence sgd
 
 
 #############################################################################
@@ -168,10 +168,11 @@ def test_dual_sgd_sinkhorn():
     # test all dual algorithms
     n = 10
     reg = 1
-    nb_iter = 300000
-    batch_size = 8
+    nb_iter = 150000
+    batch_size = 10
     rng = np.random.RandomState(0)
 
+    # Test uniform
     x = rng.randn(n, 2)
     u = ot.utils.unif(n)
     zero = np.zeros(n)
@@ -184,8 +185,36 @@ def test_dual_sgd_sinkhorn():
 
     # check constratints
     np.testing.assert_allclose(
-        zero, (G_sgd - G_sinkhorn).sum(1), atol=1e-02)  # cf convergence sgd
+        zero, (G_sgd - G_sinkhorn).sum(1), atol=1e-04)  # cf convergence sgd
+    np.testing.assert_allclose(
+        zero, (G_sgd - G_sinkhorn).sum(0), atol=1e-04)  # cf convergence sgd
+    np.testing.assert_allclose(
+        G_sgd, G_sinkhorn, atol=1e-04)  # cf convergence sgd
+
+    # Test gaussian
+    n = 30
+    n_source = n
+    n_target = n
+    reg = 1
+    numItermax = 150000
+    batch_size = 30
+
+    a = ot.datasets.get_1D_gauss(n_source, m=15, s=5)  # m= mean, s= std
+    b = ot.datasets.get_1D_gauss(n_target, m=15, s=5)
+    X_source = np.arange(n_source, dtype=np.float64)
+    Y_target = np.arange(n_target, dtype=np.float64)
+    M = ot.dist(X_source.reshape((n_source, 1)), Y_target.reshape((n_target, 1)))
+    M /= M.max()
+
+    G_sgd = ot.stochastic.solve_dual_entropic(a, b, M, reg, batch_size,
+                                              numItermax=nb_iter)
+
+    G_sinkhorn = ot.sinkhorn(a, b, M, reg)
+
+    # check constratints
+    np.testing.assert_allclose(
+        zero, (G_sgd - G_sinkhorn).sum(1), atol=1e-04)  # cf convergence sgd
     np.testing.assert_allclose(
-        zero, (G_sgd - G_sinkhorn).sum(0), atol=1e-02)  # cf convergence sgd
+        zero, (G_sgd - G_sinkhorn).sum(0), atol=1e-04)  # cf convergence sgd
     np.testing.assert_allclose(
-        G_sgd, G_sinkhorn, atol=1e-02)  # cf convergence sgd
+        G_sgd, G_sinkhorn, atol=1e-04)  # cf convergence sgd
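For reference, the Gaussian case added above can be run standalone. The sketch below condenses it using only calls that appear in the diff; one deviation is that it compares the marginals of the two plans directly rather than reusing the length-10 zero vector from the uniform case, whose shape would not match the length-30 marginals here.

# Standalone version of the new Gaussian check: the dual SGD solver should
# match Sinkhorn on two discretized 1D Gaussians (same calls as the diff).
import numpy as np
import ot

n = 30
reg = 1
nb_iter = 150000   # SGD needs many iterations to reach atol=1e-04
batch_size = 30    # effectively full-batch, since batch_size == n

# Identical Gaussian histograms on the grid 0..n-1 (m: mean, s: std)
a = ot.datasets.get_1D_gauss(n, m=15, s=5)
b = ot.datasets.get_1D_gauss(n, m=15, s=5)
X_source = np.arange(n, dtype=np.float64)
Y_target = np.arange(n, dtype=np.float64)
M = ot.dist(X_source.reshape((n, 1)), Y_target.reshape((n, 1)))
M /= M.max()  # normalize costs so reg=1 is a comparable scale

G_sgd = ot.stochastic.solve_dual_entropic(a, b, M, reg, batch_size,
                                          numItermax=nb_iter)
G_sinkhorn = ot.sinkhorn(a, b, M, reg)

# Marginals and full transport plans should agree up to SGD convergence error
np.testing.assert_allclose(G_sgd.sum(1), G_sinkhorn.sum(1), atol=1e-04)
np.testing.assert_allclose(G_sgd.sum(0), G_sinkhorn.sum(0), atol=1e-04)
np.testing.assert_allclose(G_sgd, G_sinkhorn, atol=1e-04)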
