Commit
Merge pull request #1 from florafauna/master
Getting latest version
lewisblake authored May 11, 2020
2 parents e34b8ff + f803aaf commit 7826ea8
Showing 8 changed files with 804 additions and 12 deletions.
6 changes: 6 additions & 0 deletions Makefile
@@ -0,0 +1,6 @@
.PHONY: all test

all: test

test:
	pytest
18 changes: 6 additions & 12 deletions README.md
@@ -1,21 +1,15 @@
# optimParallel-python
A parallel computing interface to the L-BFGS-B optimizer.

### Goal:
Provide a parallel version of `scipy.optimize.minimize(method='L-BFGS-B')`. That is, for each step of the optimization, the objective function `fn` and all computations needed to evaluate its gradient `gr` are executed in parallel.
A parallel version of the L-BFGS-B optimizer of `scipy.optimize.minimize()`.
Using it can significantly reduce the optimization time. For an objective
function with `p` parameters, the optimization speed increases by a factor of
up to 1+p when no analytic gradient is specified and 1+p processor cores
with sufficient memory are available: each step then requires one evaluation
of the objective plus `p` finite-difference evaluations for the gradient, and
these 1+p evaluations are independent, so they can run concurrently.
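Where the factor of up to 1+p comes from can be illustrated with a toy sketch (hypothetical, not the package's implementation; `forward_diff_grad_parallel` is an invented name, and a thread pool stands in for whatever parallel backend the package actually uses): the objective value and all `p` forward-difference points are independent, so all 1+p evaluations can be submitted to a pool at once.

```python
"""Hypothetical sketch: evaluate f(x) and the p forward-difference
points concurrently, since all 1+p evaluations are independent."""
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def forward_diff_grad_parallel(f, x, eps=1e-6):
    """Return f(x) and a forward-difference gradient, evaluating
    all 1+p required points concurrently."""
    p = len(x)
    points = [x] + [x + eps * np.eye(p)[i] for i in range(p)]
    with ThreadPoolExecutor(max_workers=p + 1) as pool:
        vals = list(pool.map(f, points))  # 1+p evaluations in parallel
    f0 = vals[0]
    grad = (np.array(vals[1:]) - f0) / eps
    return f0, grad

f = lambda x: np.sum((x - 14) ** 2)
f0, grad = forward_diff_grad_parallel(f, np.array([10.0, 20.0]))
```

With a slow objective, the wall-clock time of one such step approaches that of a single evaluation instead of 1+p sequential ones.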

A similar extension of L-BFGS-B exists in the R package *optimParallel*:
A similar extension of the L-BFGS-B optimizer exists in the R package *optimParallel*:
- https://CRAN.R-project.org/package=optimParallel
- https://doi.org/10.32614/RJ-2019-030

### Milestones:
1. Create a class `fg`, which takes a function `f` and optionally its gradient `g`.
   - `fg.f(x)` should evaluate `f` and `g` in parallel, store the return values in attributes, and return `f(x)`.
   - `fg.g(x)` should return `g(x)` without any further computation if `x` was already evaluated via `fg.f(x)`.
2. Create a function `optimParallel()` that evaluates `scipy.optimize.minimize(method='L-BFGS-B')` in parallel using the `fg` class.
3. Create unit tests characterizing the desired behavior of `optimParallel()`. Take into account all options of `scipy.optimize.minimize(method='L-BFGS-B')`.
4. Add functionality to `optimParallel()` and `fg` until all tests from step 3 pass.
5. Write documentation.
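Milestone 1 can be sketched as follows (a minimal hypothetical illustration, not the package's implementation: it uses a thread pool and assumes `g` is always supplied rather than optional):

```python
"""Hypothetical sketch of the `fg` class from milestone 1: evaluate f and g
together in parallel, cache both results, and serve the cached gradient
without recomputation."""
from concurrent.futures import ThreadPoolExecutor
import numpy as np

class fg:
    def __init__(self, f, g):
        self._f, self._g = f, g
        self._x = None      # last evaluated point
        self._fval = None   # cached f(x)
        self._gval = None   # cached g(x)

    def f(self, x):
        """Evaluate f and g in parallel, cache both, return f(x)."""
        x = np.asarray(x, dtype=float)
        with ThreadPoolExecutor(max_workers=2) as pool:
            fut_f = pool.submit(self._f, x)
            fut_g = pool.submit(self._g, x)
            self._fval, self._gval = fut_f.result(), fut_g.result()
        self._x = x.copy()
        return self._fval

    def g(self, x):
        """Return the cached gradient if x matches the last point."""
        x = np.asarray(x, dtype=float)
        if self._x is not None and np.array_equal(x, self._x):
            return self._gval  # no recomputation needed
        self.f(x)              # new point: evaluate both and cache
        return self._gval
```

An L-BFGS-B driver that calls `wrapper.f` and then `wrapper.g` at the same point would thus pay for only one parallel evaluation per step.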

### Contributions:
Contributions via pull requests are welcome.
46 changes: 46 additions & 0 deletions expamle.py
@@ -0,0 +1,46 @@
""" Example of `minimize_parallel()` """
from minipar.minipar import minimize_parallel
from scipy.optimize import minimize
import numpy as np
import time
from timeit import default_timer as timer

def f(x, sleep_secs=.5):
print('fn')
time.sleep(sleep_secs)
return sum((x-14)**2)

o1 = minimize_parallel(fun=f, x0=np.array([10,20]), args=(0.5,))
print(o1)

## test against scipy.optimize.minimize()
## (note: `args` must be a tuple, hence `(0.5,)` rather than `(.5)`)
o2 = minimize(fun=f, x0=np.array([10,20]), args=(0.5,))
assert all(np.isclose(o1.x, o2.x, atol=1e-5))
assert np.isclose(o1.fun, o2.fun, atol=1e-5)
assert all(np.isclose(o1.jac, o2.jac, atol=1e-5))

## timing results
o1_start = timer()
o1 = minimize_parallel(fun=f, x0=np.array([10,20]), args=(0.5,))
o1_end = timer()
o1_time = o1_end - o1_start
o2_start = timer()
o2 = minimize(fun=f, x0=np.array([10,20]), args=(0.5,))
o2_end = timer()
o2_time = o2_end - o2_start

print("Time parallel {:.4f} s\nTime standard {:.4f} s"
      .format(o1_time, o2_time))

## example with gradient
def g(x, sleep_secs=.5):
print('gr')
time.sleep(sleep_secs)
return 2*(x-14)

o3 = minimize_parallel(fun=f, x0=np.array([10,20]), jac=g, args=(0.5,))
o4 = minimize(fun=f, x0=np.array([10,20]), jac=g, args=(0.5,))

assert all(np.isclose(o3.x, o4.x, atol=1e-5))
assert np.isclose(o3.fun, o4.fun, atol=1e-5)
assert all(np.isclose(o3.jac, o4.jac, atol=1e-5))
Empty file added minipar/__init__.py
Empty file.