FMs on sklearn's simple Boston data giving NaNs #3
Comments
+1
Did anyone actually run the MovieLens example? I'm getting errors with a kernel crash, maybe an access violation, and that's just from using the example code and data (from the MovieLens 100k set).
I am forking it and working on a patch; I'll let you know if it works. If you normalize the Boston data set it seems to work... that is strange... Other sets seem to get alright answers, but there doesn't seem to be much [...]. I am working on a few other improvements (and changing some of the [...]).

Best, Alex
Hello (silkspace)
Hi @silkspace, yeah, feature scaling is a fairly typical preprocessing step for many machine learning problems. See this lecture for more detail: https://class.coursera.org/ml-003/lecture/21. My guess is that the unnormalized feature space blew up the gradients, but I'm going to take a closer look at this. The following code works:
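(The code originally posted with this comment is not preserved in this copy of the thread. The sketch below is a reconstruction in the same spirit: standardize the Boston features, then fit a regression FM. The pylibfm.FM arguments follow the repo README and are assumptions, as is the use of load_boston, which has been removed from recent scikit-learn releases.)

```python
# Sketch: scale the features before fitting, which avoids the NaN losses
# discussed in this thread. Parameter values are illustrative, not the originals.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.datasets import load_boston            # removed in newer scikit-learn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from pyfm import pylibfm

data = load_boston()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

# Standardize features: unscaled inputs can blow up the SGD updates.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# pyFM expects sparse CSR input.
fm = pylibfm.FM(num_factors=7, num_iter=10, task="regression",
                initial_learning_rate=0.001, verbose=True)
fm.fit(csr_matrix(X_train), y_train)

preds = fm.predict(csr_matrix(X_test))
print("Test MSE:", np.mean((preds - y_test) ** 2))
```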
@ruifpmaia I just reran the MovieLens example on my laptop and wasn't able to see a problem. Would you mind opening a new issue with some steps to reproduce your error? Thanks!
Hi Corey, thanks. We rewrote the whole shebang in the ALS formulation and [...]
@silkspace I forgot to mention also that when trying this out on different datasets, the default settings may not be the best. We typically use cross validation to find suitable values for things like learning rate.
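(As an illustration of the kind of search Corey mentions, here is a minimal sketch that tries a few learning rates on a held-out split and keeps the best one. The pick_learning_rate helper is hypothetical, and the pylibfm.FM parameter names follow the repo README; a full cross-validation would wrap this in a loop over folds.)

```python
# Hypothetical helper: pick a learning rate by hold-out validation.
# Parameter names (num_factors, num_iter, task, initial_learning_rate)
# are taken from the repo README and assumed here.
import numpy as np
from scipy.sparse import csr_matrix
from pyfm import pylibfm

def pick_learning_rate(X_train, y_train, X_val, y_val,
                       rates=(0.0001, 0.001, 0.01)):
    best_rate, best_mse = None, float("inf")
    for rate in rates:
        fm = pylibfm.FM(num_factors=7, num_iter=10, task="regression",
                        initial_learning_rate=rate, verbose=False)
        fm.fit(csr_matrix(X_train), y_train)
        mse = np.mean((fm.predict(csr_matrix(X_val)) - y_val) ** 2)
        if mse < best_mse:
            best_rate, best_mse = rate, mse
    return best_rate
```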
Thanks Corey, we did the same (grid search + cross-validation) to find the hyperparameters. What is [...]

Best, Silk
@silkspace are you going to release your fork? Or maybe open a PR against this repo to add the ALS formulation?
Hi all, we started over from scratch (Matt and I); it's not the same code. I just recently noticed this: https://github.com/ibayer/fastFM

Best, Alex
Thanks - will take a look at that link. Still, are you gonna open-source your new version?
Hi Corey, not sure, as it was a work product that my company now technically owns. We [...]

Best, Alex
Cool (I'm Nick by the way - Corey is the library author :) The lib you linked to is pretty comprehensive, looks really good. Will test it out. Thanks!
Hi Nick, sorry for the mix-up! Yeah, I think the lib I linked to has more functionality than my current version.

Best, Alex
Hey guys...
Just in case - I've run some simple benchmarks of pylibFM vs. other libFM implementations without tuning parameters, and it gives bad results (much slower, fails on large datasets, etc.); see my post with the comparison and results. Sadly, the original libFM easily won this comparison. If the developers of pylibFM are interested, the benchmark code can be found here.
This is giving errors; am I missing something?
instantiate FM instance with 7 latent factors
Creating validation dataset of 0.01 of training for adaptive regularization
-- Epoch 1
Training log loss: nan
-- Epoch 2
Training log loss: nan
-- Epoch 3
Training log loss: nan
-- Epoch 4
Training log loss: nan
-- Epoch 5
Training log loss: nan
-- Epoch 6
Training log loss: nan
fm.v is also all nan.
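(The code that produced this output is not preserved in the thread. The snippet below is a hypothetical reconstruction of that kind of setup: raw, unscaled Boston features fed straight into pylibfm, which per the comments above is what tends to blow the training up into NaNs. The reporter's exact arguments are unknown; only the 7 latent factors and the six epochs are taken from the output, and task="regression" here is an assumption.)

```python
# Hypothetical reconstruction (not the reporter's exact code): unscaled Boston
# features fed directly to pylibfm. Standardizing the features first, as in the
# scaling sketch earlier in the thread, avoids the NaN losses.
from scipy.sparse import csr_matrix
from sklearn.datasets import load_boston   # removed in recent scikit-learn releases
from pyfm import pylibfm

data = load_boston()
X = csr_matrix(data.data)                   # pyFM expects sparse CSR input
y = data.target

# instantiate FM instance with 7 latent factors
fm = pylibfm.FM(num_factors=7, num_iter=6, task="regression", verbose=True)
fm.fit(X, y)      # without feature scaling the training loss can blow up to nan
print(fm.v)       # reported above as all nan
```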