No issue: question about faster algorithm #261
Comments
The settings discussed in this issue: #240 can produce the best results by around iteration 50. I'd assume they are doing something similar. We can probably try to reverse engineer their settings if they produce really good results. Can you provide a non-shortened link directly to the actual app?
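(For anyone who wants to check what the output actually looks like that early: neural-style can write intermediate results with its existing -save_iter flag, so a run along the lines of th neural_style.lua -content_image content.jpg -style_image style.jpg -num_iterations 200 -save_iter 50 -backend cudnn will save an output image every 50 iterations for comparison. The file names here are placeholders and this is only a sketch for inspecting early iterations, not the actual settings from #240.)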
This is unconfirmed and my personal assumption, but they might be using https://github.com/yusuketomoto/chainer-fast-neuralstyle. I discovered it accidentally while exploring the work of Gene Kogan. It was used in Cubist Mirror, a project for an art exhibition. Processing happens almost in real time via a single forward pass, although it's much harder to obtain a decent style, since you need to train a separate network for each one, which is probably why you cannot set a custom style in Prisma.
I recently wrote a paper about using feedforward networks for real-time style transfer: http://arxiv.org/abs/1603.08155. chainer-fast-neuralstyle is a reimplementation of my paper; I agree that Prisma is likely using this in the backend. I haven't released my code for this paper yet, but I likely will.
Very impressive jc! Please let us know when you do (hint hint)!!
@ProGamerGov what do you mean by iteration 50? This command gives you good results at iteration 50? For me it does not; mind sharing some examples? th neural_style3.lua -image_size 1000 -content_image content.jpg -style_image image.jpg -content_layers relu4_2 -style_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1 -style_layer_weights 1,1,1,20,1 -content_weight 100 -style_weight 1500 -init image -normalize_gradients -num_iterations 1500 -backend cudnn -cudnn_autotune -optimizer adam
Congratulations to jcjohnson for the paper, it's amazing.
@jcjohnson I'm looking forward to potentially experimenting with that code! @rayset Remove the style_weights option and play around with a learning_rate value of between 1 and 50, though really low learning rate values closer to 0 seem to work best. This doesn't work well for every style, but I have found it works for a sizeable number of the styles I tested.
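(A minimal sketch of that suggestion, reusing the placeholder file names from above and treating the value itself as only a starting point: th neural_style.lua -content_image content.jpg -style_image style.jpg -content_layers relu4_2 -style_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1 -content_weight 100 -style_weight 1500 -init image -normalize_gradients -optimizer adam -learning_rate 1 -backend cudnn -cudnn_autotune. -learning_rate is an existing neural-style option, and sweeping it over a few values such as 0.1, 1, 10 and 50 is one way to compare how quickly different styles converge by iteration 50.)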
@ProGamerGov the default for learning rate is 0.001; I tried 0.00001 but no luck, basically the same output. What values have you tried? I don't think the learning rate is supposed to be higher than 1.
@jcjohnson Very impressive!!!
@jcjohnson @3DTOPO @6o6o @ProGamerGov
Hi!
A new popular service has just launched - Prisma (https://goo.gl/nNtzDR) - and they use neural networks to create filters. The results they get are very different, yet the speed is high.
They mentioned that they used the same algorithm as yours as a starting point, but their filters are super fast. It's interesting how they did it. Do you have any ideas?