@@ -60,7 +60,7 @@ class LBFGS(private var gradient: Gradient, private var updater: Updater)
   * Set the convergence tolerance of iterations for L-BFGS. Default 1E-4.
   * Smaller value will lead to higher accuracy with the cost of more iterations.
   */
-  def setConvergenceTol(tolerance: Int): this.type = {
+  def setConvergenceTol(tolerance: Double): this.type = {
     this.convergenceTol = tolerance
     this
   }
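To see why the `Int` signature was a bug: Scala will not implicitly narrow a `Double` argument to an `Int` parameter, so a call like `setConvergenceTol(1e-12)` would not even compile against the old signature, and only whole-number tolerances (e.g. `0` or `1`) could be passed, which defeats the purpose of a convergence tolerance. The assignment `this.convergenceTol = tolerance` still compiled because `Int` widens to `Double`. A minimal standalone sketch (a hypothetical `Optimizer` class, not Spark's actual `LBFGS`) of the fixed setter:

```scala
// Hypothetical stand-in for the patched class; names are illustrative only.
class Optimizer {
  private var convergenceTol: Double = 1e-4 // default from the doc comment

  // Buggy pre-patch signature, for reference:
  //   def setConvergenceTol(tolerance: Int): this.type = ...
  // With it, setConvergenceTol(1e-12) is a compile error (no Double -> Int
  // narrowing), so fractional tolerances were impossible to set.

  // Fixed signature: Double accepts fractional tolerances.
  def setConvergenceTol(tolerance: Double): this.type = {
    this.convergenceTol = tolerance
    this // return this for builder-style chaining, as in the patch
  }

  def tol: Double = convergenceTol
}
```

With the fix, chained configuration such as `new Optimizer().setConvergenceTol(1e-12)` works as the test below expects.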
@@ -195,4 +195,38 @@ class LBFGSSuite extends FunSuite with LocalSparkContext with Matchers {
assert(lossLBFGS3.length == 6)
assert((lossLBFGS3(4) - lossLBFGS3(5)) / lossLBFGS3(4) < convergenceTol)
}

Member:
The bug wasn't found because we only test the static runLBFGS method, not the class. We could probably change all the existing tests to use the class-based API, so we wouldn't need to add another test.

@mengxr what do you think?

Contributor:
Sorry, I didn't see this error either. Let's keep both in the tests for better coverage.

Member:
We may want to add the same test to SGD as well. My bad: our internal version is right, so I probably made a mistake when copying and pasting.

  test("Optimize via class LBFGS.") {
    val regParam = 0.2

    // Prepare another set of non-zero weights to compare the loss in the first iteration.
    val initialWeightsWithIntercept = Vectors.dense(0.3, 0.12)
    val convergenceTol = 1e-12
    val maxNumIterations = 10

    val lbfgsOptimizer = new LBFGS(gradient, squaredL2Updater)
      .setNumCorrections(numCorrections)
      .setConvergenceTol(convergenceTol)
      .setMaxNumIterations(maxNumIterations)
      .setRegParam(regParam)

    val weightLBFGS = lbfgsOptimizer.optimize(dataRDD, initialWeightsWithIntercept)

    val numGDIterations = 50
    val stepSize = 1.0
    val (weightGD, _) = GradientDescent.runMiniBatchSGD(
      dataRDD,
      gradient,
      squaredL2Updater,
      stepSize,
      numGDIterations,
      regParam,
      miniBatchFrac,
      initialWeightsWithIntercept)

    // For class LBFGS and the optimize method, we only look at the weights.
    assert(compareDouble(weightLBFGS(0), weightGD(0), 0.02) &&
      compareDouble(weightLBFGS(1), weightGD(1), 0.02),
      "The weight differences between LBFGS and GD should be within 2%.")
  }
}
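The `compareDouble` helper the test calls is defined elsewhere in the suite and is not shown in this hunk. A plausible implementation, assuming it checks the relative difference between two values against a tolerance (an assumption, not the suite's actual code), might look like:

```scala
// Hypothetical relative-difference check; the real helper in LBFGSSuite
// may differ. Guards against division by zero with a small epsilon.
def compareDouble(x: Double, y: Double, tol: Double = 1e-3): Boolean = {
  math.abs(x - y) / (math.abs(y) + 1e-15) < tol
}
```

Under this reading, `compareDouble(weightLBFGS(0), weightGD(0), 0.02)` asserts that the LBFGS weight is within 2% of the GD weight, matching the assertion message in the test.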