Let's teach that dude to dance.
LeMoN (Learning Motion Network) is a tiny experiment conducted by four Russian students. We are trying to teach an agent to dance in 3D space based on the music that is playing.
Well, that's easy. We are teaching an agent to behave in a 3D environment based on its surroundings. In this case the surroundings are described by music, but we could swap that for camera images or 3D scanner data. So, theoretically, in the future we could apply the same architecture to more complex problems, like picking up large objects based on their positions, or changing one's path based on obstacles in the way (see the sketch below).
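To make that "swap the surroundings" claim a bit more concrete, here is a hypothetical sketch of the modularity we have in mind: the downstream network only ever sees a fixed-size observation vector, so replacing music with camera images means replacing the encoder and nothing else. None of this is from the actual repo; every name, layer, and size is illustrative.

```python
# Hypothetical sketch: two interchangeable "surroundings" encoders.
import torch
import torch.nn as nn

class AudioEncoder(nn.Module):
    """Turns a window of audio features into a fixed-size observation vector."""
    def __init__(self, audio_dim=128, obs_dim=256):
        super().__init__()
        self.rnn = nn.GRU(audio_dim, obs_dim, batch_first=True)

    def forward(self, x):            # x: (batch, time, audio_dim)
        _, h = self.rnn(x)
        return h[-1]                 # (batch, obs_dim)

class ImageEncoder(nn.Module):
    """Same contract, different modality: camera frames in, observation out."""
    def __init__(self, obs_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, obs_dim),
        )

    def forward(self, x):            # x: (batch, 3, H, W)
        return self.net(x)

# The downstream policy/generator never knows which modality produced obs.
encoder = AudioEncoder()
obs = encoder(torch.randn(1, 25, 128))  # swap for ImageEncoder() + images
print(obs.shape)                        # torch.Size([1, 256])
```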
We have found mocap recordings of people dancing, together with the audio that was playing during each dance. (The data is not included in this repo, so write to any of the authors to get it.) In the future we will have to record our own mocap sessions with additional data to train our network.
The exact architecture might have already changed, so check the NN's file for details. But in short: generation is conditioned on past movements and on the music that has been playing for the last N milliseconds.
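Since the code is the source of truth, here is only a conceptual sketch (in PyTorch) of that generation step: summarize the recent pose history and the last N milliseconds of audio features, then predict the next pose. Every name, size, and layer choice below is an assumption for illustration, not the actual network from the repo.

```python
# Minimal conceptual sketch, NOT the real LeMoN network.
import torch
import torch.nn as nn

class LemonSketch(nn.Module):
    def __init__(self, pose_dim=63, audio_dim=128, hidden_dim=256):
        super().__init__()
        # Summarizes the window of audio features from the last N milliseconds.
        self.audio_enc = nn.GRU(audio_dim, hidden_dim, batch_first=True)
        # Summarizes the history of past poses (joint positions/rotations).
        self.motion_enc = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        # Predicts the next pose from both summaries.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, pose_dim),
        )

    def forward(self, past_poses, audio_window):
        # past_poses:   (batch, T_motion, pose_dim)
        # audio_window: (batch, T_audio, audio_dim)
        _, motion_h = self.motion_enc(past_poses)
        _, audio_h = self.audio_enc(audio_window)
        fused = torch.cat([motion_h[-1], audio_h[-1]], dim=-1)
        return self.head(fused)      # next pose: (batch, pose_dim)

# Usage: predict the next pose from 30 past frames and ~500 ms of audio features.
model = LemonSketch()
next_pose = model(torch.randn(1, 30, 63), torch.randn(1, 25, 128))
print(next_pose.shape)               # torch.Size([1, 63])
```

At generation time a model like this would be run autoregressively: each predicted pose is appended to the history and fed back in for the next step.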
Well, that's great. Contact any of us for information on contributing or GIVING US MONEY... Em... Investing.
P.S. If you have mocap gear that you are willing to lend us for a day or so, we will owe you a beer.