
GSoC_2017


Project page for ViSP Google Summer of Code 2017 (GSoC 2017)

General Information

  • GSoC 2017 site
  • Timelines:
  • Important dates:
    • October 10, 2016: Program announced
    • January 19 16:00 UTC Mentoring organizations begin submitting applications to Google
    • February 9 16:00 UTC Mentoring organization application deadline
    • February 10 – 26 Google program administrators review organization applications
    • February 27 16:00 UTC List of accepted mentoring organizations published
    • February 27 – March 20 Potential student participants discuss application ideas with mentoring organizations
    • March 20 16:00 UTC Student application period opens
    • April 3 16:00 UTC Student application deadline
    • May 4 16:00 UTC Accepted student proposals announced
    • Community Bonding Period Students get to know mentors, read documentation, get up to speed to begin working on their projects
    • May 30 Coding officially begins!
    • Work Period Students work on their project with guidance from Mentors
    • June 26 16:00 UTC Mentors and students can begin submitting Phase 1 evaluations
    • June 30 16:00 UTC Phase 1 Evaluation deadline; Google begins issuing student payments
    • Work Period Students work on their project with guidance from Mentors
    • July 24 16:00 UTC Mentors and students can begin submitting Phase 2 evaluations
    • Work Period Students continue working on their project with guidance from Mentors
    • July 28 16:00 UTC Phase 2 Evaluation deadline
    • August 21 – 29 16:00 UTC Final week: Students submit their final work product and their final mentor evaluation
    • August 29 – September 5 16:00 UTC Mentors submit final student evaluations
    • September 6 Final results of Google Summer of Code 2017 announced
    • Late October Mentor Summit at Google

Resources

How you will be evaluated if you are an accepted student

Student projects will be paid only if:

  • Midterm:
    • You must submit a pull request that
      • Implements the midterm objectives described in the corresponding project
      • Builds and passes the continuous integration build bot (Travis CI)
      • Has appropriate Doxygen documentation
  • End of summer:
    • A full pull request that includes
      • The "end of summer" objectives described in the corresponding project
      • Full Doxygen documentation
      • A tutorial if appropriate
      • A working example or demo
    • Create a video on YouTube that demonstrates your results

For students interested in applying

  1. For software development skills, please refer to the project description
  2. Take your time to learn about ViSP: watch some YouTube videos, read the tutorials, download it, and run the tutorials or examples.
  3. Ask to join the ViSP GSoC Forum List
    • Discuss projects below or other ideas with us between now and March
  4. In March, go to the GSoC site and sign up to be a student with ViSP
  5. Post the title of the project (from the list below or a new one if we have agreed) on the mailing list visp-gsoc-2017@inria.fr
    • Include name, email, age
    • Include how you think you are qualified to accomplish this project (skills, courses, relevant background)
    • Include country of origin, school you are enrolled in, Professor you work with (if any)
    • Include a projected timeline and milestones for the project
    • Specify which 3rd party libraries you plan to use
  6. If ViSP gets accepted as a GSoC organisation this year and you’ve signed up for a project with us in March
    • We will assign the students and projects to the appropriate mentors
    • Accepted students will be posted on the GSoC site in May (and we will also notify the accepted students ourselves).

2017 project ideas

List of potential mentors (pairing of projects to mentors will be done when Google decides the number of slots assigned to ViSP):

List of potential backup mentors:

Project #1: Augmented reality demonstration with ViSP

  • Brief description:

    • ViSP offers several methods to track and estimate the pose of the camera. Basic methods estimate the camera pose from 2D/3D point correspondences, where the 2D point coordinates are obtained by detecting and tracking fiducial markers, for instance by tracking blobs (corresponding tutorial); a minimal sketch of this pipeline is given after this list.
    • Advanced methods rely on knowledge of the CAD model of the object to track. The model-based tracker (see tutorial) tracks and estimates the camera pose of a markerless object using multiple types of primitives (edges, texture, or both).
    • The following images illustrate the use of these localization methods in an augmented reality application. In the left image, the camera pose is estimated from the tracking of 4 blobs. This pose is then exploited to project a virtual Ninja into the image (middle image). In the right image, the model-based tracker localizes the castle and a virtual car is then introduced into the scene. These results are obtained with the existing ViSP AR module based on Ogre3D.
      [Images: blob tracking, virtual Ninja overlay, model-based castle tracking with a virtual car]
    • Since the existing ViSP AR module is not compatible with mobile devices, this project aims to interface ViSP with Unity 3D, which is fully cross-platform, and to showcase these basic and advanced localization methods in an augmented reality demonstration on a smartphone or digital tablet running Android or iOS. ViSP is already compatible with Windows, Mac OS X and Linux, with iOS (see tutorial), and with WindowsStore 8.1, WindowsPhone 8.1 and Windows 10 UWP. Android support is not yet available, but should not be too difficult to introduce. It will be explored by the ViSP developer team, but today we cannot guarantee that it will be fully achieved before the beginning of April.
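A minimal C++ sketch of the basic blob-based pipeline mentioned above, assuming the image, the camera parameters, and the initial blob positions are provided by the surrounding application (the function name and argument layout are illustrative; the ViSP classes are the ones referenced in the tutorials):

```cpp
#include <vector>
#include <visp3/blob/vpDot2.h>
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/core/vpPixelMeterConversion.h>
#include <visp3/core/vpPoint.h>
#include <visp3/vision/vpPose.h>

// Estimate the camera pose cMo from tracked blobs whose 3D coordinates on
// the object (stored in 'model') are known.
vpHomogeneousMatrix estimatePoseFromBlobs(const vpImage<unsigned char> &I,
                                          const vpCameraParameters &cam,
                                          std::vector<vpDot2> &blobs,
                                          const std::vector<vpPoint> &model)
{
  vpPose pose;
  for (size_t i = 0; i < blobs.size(); ++i) {
    blobs[i].track(I); // update the blob center of gravity in the new image
    double x = 0, y = 0;
    // Convert the center of gravity from pixels to normalized coordinates
    vpPixelMeterConversion::convertPoint(cam, blobs[i].getCog(), x, y);
    vpPoint p = model[i]; // known 3D coordinates of blob i on the object
    p.set_x(x);
    p.set_y(y);
    pose.addPoint(p); // register the 2D/3D correspondence
  }
  vpHomogeneousMatrix cMo;
  // Linear initialization (Dementhon) refined by virtual visual servoing
  pose.computePose(vpPose::DEMENTHON_VIRTUAL_VS, cMo);
  return cMo;
}
```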
  • Getting started: Students interested in this project could

    • Write a plugin to test the data communication between C++ and Unity: read an image from C++ and display it in Unity (see the sketch after this list)
    • Write a plugin to draw circles, lines, and points in a texture/image.
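As a starting point, here is a hedged sketch of the C++ side of such a Unity native plugin. GetTestFrame is a hypothetical export name; on the C# side Unity would declare it with [DllImport] and copy the buffer into a Texture2D. A real plugin would fill the buffer from a ViSP image instead of a test gradient.

```cpp
#if defined(_WIN32)
#define VISP_UNITY_API extern "C" __declspec(dllexport)
#else
#define VISP_UNITY_API extern "C" __attribute__((visibility("default")))
#endif

// Fill a caller-allocated RGBA buffer with a test gradient. A real plugin
// would acquire an image with ViSP and convert it (e.g. with vpImageConvert)
// before copying it into the buffer that Unity displays in a Texture2D.
VISP_UNITY_API void GetTestFrame(unsigned char *rgba, int width, int height)
{
  for (int i = 0; i < height; ++i) {
    for (int j = 0; j < width; ++j) {
      unsigned char *px = rgba + 4 * (i * width + j);
      px[0] = static_cast<unsigned char>(255 * j / width);  // R: horizontal ramp
      px[1] = static_cast<unsigned char>(255 * i / height); // G: vertical ramp
      px[2] = 0;                                            // B
      px[3] = 255;                                          // A: opaque
    }
  }
}
```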
  • Expected results:

    • Adapt ViSP for Unity. We expect here a getting-started tutorial that shows how to use ViSP C++ code in Unity to do, for example, a simple matrix computation. A good starting point is to look at Unity's native plugin mechanism. (first evaluation)
    • Using the same concept, build ViSP with OpenCV as a 3rd party and call from Unity a function that needs both ViSP and OpenCV. This example could convert a ViSP image into an OpenCV cv::Mat using the vpImageConvert class, as in the sketch after this list. (first evaluation)
    • Provide interconversion methods between Unity's Texture2D and ViSP's image. (first evaluation)
    • Using Unity's WebCamTexture and ViSP's blob tracking capabilities, create a demonstration and a tutorial. The initial position of the blob could be set by user interaction. (first evaluation)
    • Create an augmented reality demonstration with fiducial markers (4 blobs) observed by a classical webcam connected to a computer running Windows, Mac or Linux. The initial position of the blobs is obtained by user interaction. (second evaluation)
    • Extend the previous augmented reality demonstration with automated detection of the initial position of the 4 blobs using ViSP's existing blob auto-detection or other techniques. (second evaluation)
    • Port the previous demonstration to a mobile device (Android or iOS). (last evaluation)
    • Extend it to the markerless model-based tracker and run a complete AR demo on a mobile platform. Initialization could first be done by user interaction, then, in a second step, if the device is powerful enough, replaced by an initial localization using the ViSP detection and localization tools based on OpenCV. (last evaluation)
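For the ViSP/OpenCV conversion step listed above, a minimal sketch (assuming ViSP was built with OpenCV as a 3rd party) could look like this:

```cpp
#include <opencv2/core/core.hpp>
#include <visp3/core/vpImage.h>
#include <visp3/core/vpImageConvert.h>
#include <visp3/core/vpRGBa.h>

int main()
{
  vpImage<vpRGBa> I(480, 640, vpRGBa(128, 128, 128, 255)); // uniform grey image
  cv::Mat mat;
  vpImageConvert::convert(I, mat); // vpImage<vpRGBa> -> cv::Mat
  vpImageConvert::convert(mat, I); // cv::Mat -> vpImage<vpRGBa>
  return 0;
}
```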
  • Application instructions: Students should specify which platform is used for development (Windows, Mac OS X, Linux) as well as the mobile device that is targeted.

  • Knowledge prerequisites: C++, Unity. Android and/or iOS knowledge is a plus

  • Difficulty level: Medium

Project #2: Markerless model-based tracker CAD model editor

  • Brief description:

    • The markerless model-based tracker (see tutorial) uses our home-made CAD model file format (.cao) to describe the object to track (see examples). Only simple primitives that correspond to the object's visible contours can be tracked (lines, circles, cylinders). Currently the creation of this home-made file has to be done by hand. This is a big drawback, as most of the time the 3D model of the object to track comes from CAD software and is modeled with complex meshes. Today, if the object is modeled in .obj or .stl format, the solution is to open the model in an existing modeler like Blender, identify the 3D coordinates of some points, and then edit the .cao file from this information.
    • Thus this project aims 1) to provide a dedicated plugin (i.e. a Blender add-on) to edit and convert from a classical 3D file format (for example .obj) to our home-made CAD model file format, and 2) to develop a very simple CAD model viewer that allows adding specific characteristics, such as a label or a level of detail (lod), to a face or a line. An illustrative .cao file is shown after this list.
    • The need for a home-made CAD model is illustrated in the next images, where the left image corresponds to the full CAD model of the CubeSat satellite in .obj format that is already available on the web, the middle image corresponds to the modified model in .cao format that is compatible with ViSP, and the right image shows how this model is used to track the satellite with the markerless model-based tracker. As you can see in the last two images, the home-made CAD model contains only the lines corresponding to the visible contours.
      [Images: original CAD model, ViSP CAD model, CAD model tracking]
    • The goal of this project is clearly not to design a new 3D modeler like Blender, 3ds Max or Maya, but rather to extend existing cross-platform and open-source tools. From our point of view, a good candidate is Blender, which can be extended with add-ons. The user would benefit from Blender's existing editing capabilities, which allow removing points, merging faces, and adding new points in order to adapt the model for ViSP. After this editing step, the model could be exported and used by ViSP.
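To give a feel for the format, a minimal .cao file describing a box with faces built from 3D points could look as follows. The values are illustrative; the examples linked above and the model-based tracker documentation remain the authoritative reference for the format.

```
V1
# 3D points
8                  # Number of points
0     0     0      # Point 0: X Y Z (in meters)
0     0     -0.08
0.165 0     -0.08
0.165 0     0
0.165 0.068 0
0.165 0.068 -0.08
0     0.068 -0.08
0     0.068 0
# 3D lines
0                  # Number of lines
# Faces from 3D lines
0                  # Number of faces
# Faces from 3D points
6                  # Number of faces
4 0 1 2 3          # Face 0: point count followed by the point indexes
4 1 6 5 2
4 4 5 6 7
4 0 3 4 7
4 5 4 3 2
4 0 7 6 1
# 3D cylinders
0                  # Number of cylinders
# 3D circles
0                  # Number of circles
```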
  • Getting started:

    • In http://visp-doc.inria.fr/download/mbt-model/ we provide the material (.obj, .cao, .xml) to track the CubeSat satellite.
    • Students interested in this project could model and track a simple object from a video acquired by a webcam. By simple object we mean, for example, a cereal box, but any other object could be considered; a minimal tracking sketch is given after this list.
    • Students could also track a more complex object like a plane (a Boeing KC-135 CAD model in .obj can be found on the web). This model could be converted to .cao by hand, and a video could be rendered with Blender in order to test the tracking.
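A minimal C++ sketch of the tracking part, assuming hypothetical cereal-box.* files produced as described above (image acquisition and display setup are elided):

```cpp
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/mbt/vpMbEdgeTracker.h>

int main()
{
  vpImage<unsigned char> I; // to be filled by a framegrabber or video reader

  // ... acquire the first image into I and attach a display ...

  vpMbEdgeTracker tracker;
  tracker.loadConfigFile("cereal-box.xml"); // camera and edge-tracking settings
  tracker.loadModel("cereal-box.cao");      // the home-made CAD model
  tracker.initClick(I, "cereal-box.init");  // click the 3D points of the .init file

  vpHomogeneousMatrix cMo;
  // ... then, for each new image: tracker.track(I); tracker.getPose(cMo);
  return 0;
}
```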
  • Expected results:

    • Create a Blender .cao exporter. A tutorial based on a use case should be proposed. It should show how, starting from an existing .obj freely available on the web, the user modifies the 3D model with Blender to make it compatible with ViSP, exports the .cao model, and uses it to track the object with the ViSP model-based tracker. A good candidate object would be a cereal box. (first evaluation)
    • Create a Qt .cao viewer. The viewer should allow the user to visualize the wireframe CAD model and change the viewpoint. (first evaluation)
    • Extend the Qt .cao viewer to allow displaying the normal of each face. This feature is useful to check that all the normals point outward from the object, which the visibility algorithms implemented in the model-based tracker rely on. (second evaluation)
    • Extend the Qt .cao viewer with editing functionality to add ViSP-specific attributes (label or lod) to a polygon or a line. Create a tutorial that explains how to use the viewer and how to introduce a new label and lod. This tool, which should remain simple, is of primary interest for users who already have the .cao CAD model and don't want to investigate Blender. (second evaluation)
    • Extend the Qt viewer to consider a .init file associated with the .cao file. It contains the 3D coordinates of at least 4 points the user has to click in the image to initialize the tracker. When the .init file exists beside the .cao file, the Qt viewer should be able to highlight the corresponding points (an idea is to display the point numbers 1, 2, 3, 4... as labels near the points). The user should also be able to select these 3D points in the viewer and save the corresponding .init file; a sketch of a simple .init reader is given after this list. (last evaluation)
    • Create a Blender .cao importer. This functionality will be useful for editing/adapting the model using Blender's capabilities. (last evaluation)
    • Since the Blender .cao importer will lose ViSP-specific attributes like the label or lod, create a Blender plugin able to set this information from an input .cao that already contains it, but also to edit the model by introducing labels and lod in Blender. This last functionality should allow the user to import a .cao model with label and lod and export the model to a new .cao file without loss of information. This means that the Blender exporter expected for the first evaluation should be extended with label and lod support. Create the corresponding tutorial, which could consider a cereal box modeled in .cao. (last evaluation)
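For the .init support described above, a simplified reader is sketched below. It assumes the layout described in that item (a point count followed by one "X Y Z" line per point, with optional "#" comments); the real ViSP parser remains the reference.

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Point3D { double X, Y, Z; };

// Read a .init file: a point count, then one "X Y Z" line per point (meters).
std::vector<Point3D> readInitFile(const std::string &filename)
{
  std::ifstream f(filename.c_str());
  std::vector<Point3D> points;
  std::string line;
  int remaining = -1; // -1 until the point count has been read
  while (std::getline(f, line)) {
    std::istringstream ss(line.substr(0, line.find('#'))); // drop comments
    if (remaining < 0) {
      int n;
      if (ss >> n) remaining = n; // first meaningful line: the point count
    } else if (remaining > 0) {
      Point3D p;
      if (ss >> p.X >> p.Y >> p.Z) { // skip blank or comment-only lines
        points.push_back(p);
        --remaining;
      }
    }
  }
  return points;
}
```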
  • Knowledge prerequisites: C++, Python, Qt, Blender

  • Difficulty level: Medium