Discussion: Synaptic 2.x #140
Important question btw. Do we actually need Neuron as an exposed entity?
I guess that depends on the underlying implementation. Neurons and Layers could be replaced by just Networks that handle everything. The fact that Synaptic has Neurons is because, when I first read Derek Monner's paper, he described a generalized unit that could use the same algorithm and behave differently according just to its position in the topology of the network (i.e. a self-connected neuron acts as a memory cell, a neuron gating that self-connection acts as a forget gate, but the same neuron gating the input of the memory cell would act as an input gate filtering the noise; and all those neurons essentially follow the same algorithm internally). That's what I found really cool about that paper, and that's why I coded the Neuron first. Then a Layer is just an array of them, and a Network is an array of Layers. The advantage of having the individual units is that you can easily connect them in any way and try new topologies (like LSTM with/without forget gates, with/without peepholes, with/without connections among the memory cells, etc.). But I know that the approach other NN libraries take is more like matrix math at the network level, instead of having individual units. This is probably way better for optimization/parallelization, so I'm up for it, as long as we can keep an easy and intuitive API that allows the user to create flexible/complex topologies.
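For illustration, here is roughly what that generalized-unit idea looks like with the v1 primitives (a minimal sketch using the existing `Neuron` API):

```js
const { Neuron } = require('synaptic');

const input = new Neuron();
const memoryCell = new Neuron();
const inputGate = new Neuron();
const forgetGate = new Neuron();

// a self-connected neuron acts as a memory cell
const selfConnection = memoryCell.project(memoryCell);

// a neuron gating that self-connection acts as a forget gate
forgetGate.gate(selfConnection);

// the same kind of unit gating the input connection acts as an input gate
const inputConnection = input.project(memoryCell);
inputGate.gate(inputConnection);
```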
I'm not totally clear on what it means to expose a neuron, but it was extremely important for my application to be able to clearly see the neurons (using toJSON). I trained the network using synaptic and implemented the results in another program (Excel).
From my understanding, it would be more important to expose an ability to override/specify neurons' activation functions than to have direct access to the neurons. That way a developer can concentrate on implementing higher-level functionality, e.g. by stacking layers or networks, while having access to neurons' activation functions to implement custom networks. I would love to contribute to the new version if any additional help is still required. Regards,
@jocooler That's a great note. From my point of view, removal of Neuron will significantly reduce memory consumption. However, this is a good reminder that we do need a human-readable export.
Thanks for spearheading this thread @Jabher. From a user's perspective, I might add having improved documentation and examples. Especially since the site is down now (#141), it's harder to reference back to examples I've found in the past. Specifically, I think for my use case an LSTM may be a more natural approach, but I am hesitant to test it because I have not trained one before and the mechanism for RNNs seems quite different. Having more than one example (preferably 3+) would help, as users can pick the one that best matches their use case. It might help to encourage users to contribute their own examples as well (maybe something I can do if I figure this out myself).

Another point should be on optimizations. I think a big reason people are using other libraries is due to limitations, especially in memory. It could help to have a short guide on the wiki discussing how to set Node to allocate more memory before getting OOM, or using a mini-batch approach for strategies that support it.

Also, regarding exposing the Neuron, I may suggest something similar to compilers, where the toJSON method can be human-friendly in debug mode and machine-friendly otherwise. I'm seeing my memory filled with Neurons when conducting a heap analysis.
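(For reference, the Node flag such a guide would cover is `--max-old-space-size`; e.g., for a hypothetical `train.js` training script:)

```
node --max-old-space-size=4096 train.js  # allow ~4 GB of V8 heap before OOM
```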
@olehf Convolutional layers (1D and 2D, plus max/avg pooling for them) are mentioned in the design document, same for RNNs; I think GRU and LSTM would be enough for the average user. Activation functions should definitely be passable to any layer; the only issue we can encounter is custom implementations of them (as we will need multiple implementations of one function, one per back-end), so we should keep as much as possible in the design. Help is (as usual) always appreciated. There is a lot of work to do, and any contribution will be significant. As soon as we decide that we're good with this design, a GH project board will be created with tasks to do.
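To sketch what "passable to any layer" could look like in 2.x (a purely hypothetical API; the names are not the final design, and a custom activation would need to supply its derivative too):

```js
import { Network, Dense } from 'synaptic';

// hypothetical: a custom activation provides the function and its derivative
const leakyRelu = {
  f: (x) => (x > 0 ? x : 0.01 * x),
  df: (x) => (x > 0 ? 1 : 0.01),
};

const net = new Network(
  new Dense(32, { activation: leakyRelu }),
  new Dense(1, { activation: 'sigmoid' })
);
```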
@schang933 there's a new temporary website: http://caza.la/synaptic/. The old one will be back soon. Speaking about examples: this is a good concern, but first we should implement the API itself.
That's totally correct, but we should keep in mind that Node.js is not the only runtime target. Multiple browsers (latest Edge, Firefox, and Chrome) with WebGL/WebCL and workers are also runtime targets, and nodejs-chakra is also a preferable option, as it looks like it does not have memory limitations at all (this should be checked). About Neuron exposure and keeping a human-readable output: I totally agree.
Hey, thanks for this great library! It is really nice that its development is still running. I am not an ML expert, and synaptic has helped me so much to understand how neural networks work, thanks especially to the prebuilt architectures. If you are dropping them from the core repo, please move them to a separate one, maybe with the examples, because it makes it really simple for ML non-experts to get initiated and running. Cheers
Most JavaScript-related libraries are not actively maintained any more. Hope this one can keep going!
```js
import {
  WebCL,
  AsmJS,
  WorkerAsmJS,
  CUDA,
  OpenCL,
  CPU
} from 'synaptic/optimizers';
```

I think those should be separate packages, in order to keep the core package as small and simple as possible.
I agree, but there should be both options, for easy installation.
@menduz I'd suggest multiple modules + 1 meta-module ("synaptic" itself) to expose them all in that case? |
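For instance, a minimal sketch of that layout (all package names hypothetical):

```js
// synaptic/optimizers.js: the meta-module just re-exports the standalone packages
export { default as CPU } from 'synaptic-optimizer-cpu';
export { default as AsmJS } from 'synaptic-optimizer-asmjs';
export { default as WorkerAsmJS } from 'synaptic-optimizer-worker-asmjs';
export { default as WebCL } from 'synaptic-optimizer-webcl';
export { default as CUDA } from 'synaptic-optimizer-cuda';
export { default as OpenCL } from 'synaptic-optimizer-opencl';
```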
There's a nice line in TensorFlow docs:
We actually need only a few functions ASAP; some incomplete port of TF would not be so hard to build and use, and it would deal with most of the server-side performance issues, so we would actually be able to train at a good speed, I think.
```
normalizeNum(min, max, num) => 0->1       // (curried?)
deNormalizeNum(min, max, 0->1) => num     // (curried?)
normalizeNumericalArray([num]) => [0->1]  // min and max from array
normalizeCategoricalArr([string]) => [0|1] // based on uniqueness
etc...
```
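A rough sketch of possible implementations for these (plain, uncurried versions; only the names come from the pseudocode above):

```js
const normalizeNum = (min, max, num) => (num - min) / (max - min);
const deNormalizeNum = (min, max, normalized) => min + normalized * (max - min);

// min and max are taken from the array itself
const normalizeNumericalArray = (arr) => {
  const min = Math.min(...arr);
  const max = Math.max(...arr);
  return arr.map((n) => (n - min) / (max - min));
};

// one-hot encoding based on uniqueness: each unique string becomes a 0|1 column
const normalizeCategoricalArr = (arr) => {
  const categories = [...new Set(arr)];
  return arr.map((s) => categories.map((c) => (c === s ? 1 : 0)));
};
```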
Reinforcement learning sounds cool, but it can be a module on top of synaptic instead of a thing inside the core, possibly some additional package. See https://github.com/karpathy/convnetjs/blob/master/build/deepqlearn.js: it's not related to the network itself, it works on top of it. Utils: we're planning to use Vectorious for matrix operations (https://github.com/mateogianolio/vectorious/wiki), which has a lot of nice functions (including vector normalization). For curried functions, the Ramda (http://ramdajs.com/) functional programming lib can be used.
Just discovered the turbo.js project. Has anyone considered it as an optimizer for synaptic in the browser?
@cusspvz I discovered it today too. Cazala is now playing with https://github.com/MaiaVictor/WebMonkeys as a back-end for v2. It is similar, and he says WebMonkeys is better; I agree, as it supports back-end computations too.
@rafis speaking of popular libs: probably the best reference is Keras (as it provides the most consistent API); I've been digging through most of the popular libs for that. But thanks a lot for ADNN; that lib looks very interesting and can be investigated more deeply.
GPU acceleration (gpu.js?)
Some method to modify the weights of connections: I need it for an evolutionary algorithm on NNs. As a general approach, expose all of the internal logic through public methods. NNs have so many different use cases, and everyone needs their own configuration.
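Something along these lines would cover the evolutionary use case (`getWeights`/`setWeights` are hypothetical method names, not an existing synaptic API):

```js
// read all connection weights as a flat array, mutate a few, write them back
const weights = network.getWeights();
const mutated = weights.map((w) =>
  Math.random() < 0.05 ? w + (Math.random() * 2 - 1) * 0.1 : w // 5% mutation rate
);
network.setWeights(mutated);
```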
Hey guys, just to let you know, I'm playing in this repo trying to build something similar to what we have in the design draft, taking all the comments here into consideration. Feedback, comments, and critiques are more than welcome. This is a WIP, so don't expect it to work yet :P
Nice work @cazala !! Are you expecting to have everything on the Engine so you can have better management on the import/export thing? I've built a kind of from scratch and opted out them to be external so I can have various types of neurons. Do you think you can have more than the "bias" processor neuron on this? EDIT: Also, I think it is important to define an API, or at least give some implementation room, for plugins. |
Thank you @cusspvz :)
Awesome, I understood your structure just by looking at the code; it seemed very clever when I saw it! I've also seen that you're introducing flow layers for better piping, which is awesome for assisted neural networks, but I can't see how it might help building non-assisted ones. A brief story to explain what I mean by "processing neurons": I see myself as an explorer; in fact, I've been self-educating in Machine Learning for a while, and I came up with the Liquid-State Neural Network architecture before knowing about it. In the first versions of my architecture, the one that resembled the Liquid-State NNA, I had what I call "bias/gate/weight based neurons" working slightly differently, to include neuro-evolution in a different way from what people are doing. Each one had a lifespan that would increase on each activation; once a neuron was dead, the network would notice it was ticking with fewer neurons and would compensate with a randomly placed one. Note: at this point, the network was already processing asynchronously, so it didn't need inputs to do something; a simple neuron change could trigger activations through the net till an Output neuron. It worked great for directions and images, but not so well for sound patterns, so I changed the network again and added more neurons:
All of this work is, for now, private and personal, but I would like to contribute or share it if it could help the development of synaptic v2. I can see some of these things fitting into the "Layers" structure, but I have some doubts about it, such as:
a) I really like the way "babel" works out of the box using their name/prefix priorities; it could be an idea to use the same for the "backend", "layer" and so on, like:

```js
new Network({
  // under the hood would call 'synaptic-backend-nvidia-cuda'
  backend: 'nvidia-cuda'
})

new Network({
  // under the hood would call './backends/gpu.js'
  backend: 'gpu'
})

new Network({
  // under the hood would call './backends/web-worker.js'
  backend: 'web-worker'
})
```

It should be easy to implement (see the sketch after this comment).

b) We must have a stable API for backends and layers before the first release, which means we must think about one by now.

c) When you say "async", does it allow the network to trigger neurons as a callback (like multiple times) or only as a promise (where it just counts once when resolved)?

Edit: Thanks for your time! :)
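A minimal sketch of that babel-style resolution idea (all paths and package names hypothetical):

```js
function resolveBackend(name) {
  try {
    // try the built-in back-ends first, e.g. './backends/gpu.js'
    return require('./backends/' + name + '.js');
  } catch (err) {
    // otherwise fall back to an installed package, e.g. 'synaptic-backend-nvidia-cuda'
    return require('synaptic-backend-' + name);
  }
}
```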
@cusspvz Thanks for such great feedback! And yes, actual help will be greatly appreciated. There's a lot of routine work coming (lots of layers, lots of backends), and any help will be great. Speaking of what you're proposing: a) Actually, it's already discussed; for now the API is something like
Suggested back-ends are TensorFlow with C++ bindings (as it supports GPGPU, multiple CPU tricks, and so on), an AsmJS-compiled math engine (both concurrent-via-WebWorkers and same-thread implementations), a raw JS engine, and WebCL (possibly via WebMonkeys). b) Agree. c) The promise way, probably. The thing is that computations will probably work asynchronously, so any math operation will turn into an asynchronous one.
I'm a noob at neural networks and haven't used yours, so excuse any insults or me completely missing the boat :) If you want to remove the Neuron to minimise memory consumption, maybe you can actually replace it with a lazy Neuron interface/class, which would only read (on lazy instantiation, when needed) from the better-packed representation, and could also allow you to modify all neurons at once (or maybe even individually?).
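For example, a lazy view could look something like this (a sketch only; the packed-storage field names are assumptions):

```js
class LazyNeuron {
  constructor(network, index) {
    this.network = network; // owns the packed typed-array storage
    this.index = index;
  }
  // reads and writes go straight to the packed representation,
  // so no per-neuron state is ever materialized
  get bias() {
    return this.network.biases[this.index];
  }
  set bias(value) {
    this.network.biases[this.index] = value;
  }
}
```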
@Oxygens Thanks, neurons don't even exist in the new draft!
One question: why mind async? Computation-massive code should run in sync for best performance.
Unless you want to be able to distribute it easily over threads, multiple processes, or multiple machines, or you want to run it in conjunction with other asynchronous code, like if there is also an HTTP front end to the application, or a GUI.
@buckle2000 if the heavy computation happens on the frontend, we can utilize a WebWorker.
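For example, with the v1 API, training can already be pushed off the main thread like this (the file names are illustrative):

```js
// main.js: keep the UI thread free while the worker trains
const worker = new Worker('train-worker.js');
worker.postMessage({ iterations: 1000 });
worker.onmessage = (e) => console.log('trained network:', e.data);

// train-worker.js
importScripts('synaptic.js'); // exposes a global `synaptic` in classic workers
self.onmessage = (e) => {
  const net = new synaptic.Architect.Perceptron(2, 3, 1);
  new synaptic.Trainer(net).train(
    [
      { input: [0, 0], output: [0] },
      { input: [0, 1], output: [1] },
      { input: [1, 0], output: [1] },
      { input: [1, 1], output: [0] },
    ],
    { iterations: e.data.iterations }
  );
  self.postMessage(net.toJSON()); // send the serialized network back
};
```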
Will Synaptic 2 be available for preview anytime soon?
Here's a (young) library to seamlessly bind together master and workers: It's made by me, so if you need help or features, I'm here for you.
I know you guys might have seen this, but I just wanted to throw it in here in case you haven't. There is a library being built in JavaScript specifically for machine learning, much like NumPy, for accurate manipulation of big numbers in JavaScript. And yup, it's a mathematical library: https://github.com/stdlib-js/stdlib. So it can come in handy... ¯\_(ツ)_/¯
I started looking into synaptic and like how it's put together. I was hoping synaptic 2 would be available soon. What other JS libraries for machine learning and deep learning would you recommend?
Thanks @vickylance
Also, msgpack (schema-less, but it also supports schemas) or FlatBuffers (with schemas) can be used to export/import data (e.g. networks).
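For instance, with the `msgpack-lite` package and synaptic v1's existing `toJSON`/`fromJSON` (a sketch; any msgpack implementation would do):

```js
const fs = require('fs');
const msgpack = require('msgpack-lite');
const { Network, Architect } = require('synaptic');

const network = new Architect.Perceptron(2, 3, 1); // stand-in for any trained network

// export: pack the JSON description into a compact binary buffer
fs.writeFileSync('network.msp', msgpack.encode(network.toJSON()));

// import: decode the buffer and rebuild the network from it
const restored = Network.fromJSON(msgpack.decode(fs.readFileSync('network.msp')));
```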
@buckle2000 @vickylance do you have any working examples or repos using these libraries?
@buckle2000 but it will only help in shrinking the size of the JSON network file, right? So only the saving and retrieving time would be reduced, and I am not even sure it will help with saving time, because the encoding may take longer than JSON; only the file size will be reduced. It won't help with the processing of the network, the computation will still remain the same. Correct me if I am wrong?
@playground As of right now I don't have anything with those libraries. Also, there is a new math library being built for JavaScript for machine learning, because the default Math library in JavaScript and Node.js is very error-prone and only offers a few mathematical functions; advanced trigonometric and other n-dimensional functions are not present, and the npm math libraries that do exist build on that default Math library, so they are likewise error-prone and not safe for machine learning. They also only go up to float32, which is pretty limited when it comes to ML. So, check out this math library, which is being built to an industry standard, where every mathematical function has a citation to a paper to back its accuracy.
Hello, and thanks for the library,
So this project is officially dead? |
Can someone help me with running the "paint an image" demo locally on my system?
If I understand correctly, you want to reproduce the demo on localhost. For that purpose, you have the code of the demo at
Does it support CUDA or the GPU?
Overall, I was thoroughly impressed with the implementations provided in Synaptic v1. Obviously, Tensors are not necessary, but they do help you follow the correct rules when implementing models. I think it would be a nice thing to add as a backlog, lower-priority item. Maybe even a configuration setting where you can toggle Tensors on or off for added flexibility...
@freelogic, check issue #245. I believe this is pretty up in the air at the moment. But good GPU integration would be make-or-break for me migrating my work over to Synaptic.
@Jabher What's the current situation with v2? |
Is the project still alive?
is there gonna be a version 2...? ;-; |
You can try this URL:
https://github.com/bigstepinc/jsonrpc-bidirectional
That's a URL to a bidirectional rpc library... |
(edited because I missed the memo 🤦‍♂️) By the way, I've been wanting to play with the library for a while to test out some new architectures I've been conjuring up, but I feel like either there aren't many free/open-source datasets, or these datasets seem a bit disconnected (for lack of a better word). I could use a few recommendations for datasets of labeled and unlabeled data (preferably in JSON format if possible), please. Thank you.
Also, if this is not already in the works, I'd consider switching to TS or ES modules. CommonJS is slowly being phased out. |
So, we want to make Synaptic more mature.
I've created a design draft to think it through.
https://github.com/cazala/synaptic/wiki/Design-draft:-2.x-API
Let's discuss.
The most significant changes in design:
They are actually examples of "how-to", and so they are usually useless for real projects.