Pick a model available in RunwayML. Run the model both as a preview and experiment with the "export" option. You can choose a model you worked with in class or a different one. Feel free to try more than one if you like. Consider the following questions: What is this model developed to do?
This model was developed to create a “deep dream” version of paintings: it enhances the patterns that already exist in an image, which makes the painting look hallucinogenic and dream-like.
Can you write a Model and Data "biography" that covers where it came from and what data was used to train it?
It is unclear exactly what data was used to train the model. However, it comes from Alexander Mordvintsev, a Google engineer, and the description states that the model uses a convolutional neural network (CNN) to do the pattern enhancement.
Describe the results of working with the model, do they match your expectations?
The results are exactly what I expected from the model’s cover image. I started with Caravaggio’s Narcissus, and after running it through the model I got a piece that looked much more rainbow-colored and surreal.
Can you "break" the model? In other words, use it in a way that it was not intended for, and what kinds of results do you get?
I tried feeding in a low-quality picture of my math homework to see what it would do. While the model did run as intended, it did not produce the effect it was designed for. The overview explains that “dreaming” means producing images that maximize a certain activation in a trained deep network, and I don’t think this image really triggers that.
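The idea the overview describes, nudging an image by gradient ascent so that it increasingly excites a chosen layer of a network, can be sketched with a toy example. To be clear, this is not Runway’s or Mordvintsev’s actual implementation (which uses a trained CNN like Inception); here the “network” is just one hand-picked 2×2 filter followed by a ReLU, in plain NumPy, to show the mechanics:

```python
import numpy as np

def conv2d(img, k):
    """Valid-mode 2D cross-correlation of a grayscale image with kernel k."""
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def dream_step(img, k, lr=0.1):
    """One gradient-ascent step on sum(ReLU(conv(img, k)))."""
    act = conv2d(img, k)
    mask = (act > 0).astype(float)  # derivative of ReLU
    grad = np.zeros_like(img)
    kh, kw = k.shape
    # Gradient of the objective w.r.t. each input pixel: every active
    # output position contributes a copy of the kernel to its input patch.
    for i in range(act.shape[0]):
        for j in range(act.shape[1]):
            grad[i:i + kh, j:j + kw] += mask[i, j] * k
    return img + lr * grad  # move the image toward higher activation

rng = np.random.default_rng(0)
img = rng.random((16, 16))
edge = np.array([[1.0, -1.0], [1.0, -1.0]])  # toy "feature" detector

before = np.maximum(conv2d(img, edge), 0).sum()
for _ in range(20):
    img = dream_step(img, edge)
after = np.maximum(conv2d(img, edge), 0).sum()
# The layer's activation grows as the image is "dreamed" toward the filter.
```

In the real model the filter bank is an entire trained CNN, so the patterns that get amplified are the textures and shapes the network learned from its training data, which is what produces the hallucinogenic look.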
Document your thoughts on the above questions and your experience working with RunwayML in a blog post. Include screenshots and screen captures of your workflow. Compare and contrast working with RunwayML as a tool for machine learning as related to ml5.js, Python, and any other tools explored this semester.
Working in Runway has been really exciting because it offers a way to work with machine learning with a much lower barrier to entry than most tools. I feel like I learned a lot about both the limits of machine learning and what is possible just by browsing the models available on Runway. I did run into a few issues with the first model I attempted. I’m not sure if I simply ran it for too short a time, but I left it saying “waiting for data” for over an hour and never got anything out of it. I checked to make sure that all my inputs were correct, but I still could not get it to work.
So, I moved on to a different model instead. This one worked very well, better than most of the machine learning models I have used on Runway. It was very exciting to get it working on anything: I tried a painting, which it handled really well. I tried the same with my math homework, and the model still ran fine even if it didn’t have the same kind of effect as the first attempt. I also tried it on a painting that is less realistic and already kind of trippy, Starry Night. This turned out very differently because Starry Night is already dreamy looking, so while the model still worked, it didn’t feel like much about the painting had changed.

Runway is nice because it feels a lot easier to use than any of the other machine learning tools I’ve learned so far. While we haven’t trained a model in Runway, other models that I’ve trained before have taken significantly more time and coding knowledge. ml5.js, for example, was not impossible to learn with some outside coding experience, but it did take some time. Even harder than that, learning how to use webkit and other extensions for Python was difficult. Runway has been nothing in comparison to the amount of knowledge I needed to work with models through other mediums.