
General wiki

To consult the general wiki page of the iTowns project, go to https://github.com/iTowns/itowns-project/wiki

Data preparation

Images

Images should be accessible over HTTP using a URL pattern (see below). Their format should be one compatible with the THREE.js WebGL library, such as JPEG or PNG.

The image locations are provided by a URL template, such as images/140616/Paris-140616_0740-{cam}-00001_{pano}.jpg, where {cam} and {pano} are placeholders for the camera and panoramic names, respectively.
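
For illustration, here is a minimal sketch (the function name is hypothetical) of expanding such a template for a given camera and panoramic name:

  // Sketch: expand a URL template by substituting the {cam} and {pano} placeholders.
  function imageUrl(template, cam, pano) {
    return template.replace('{cam}', cam).replace('{pano}', pano);
  }

  // imageUrl('images/140616/Paris-140616_0740-{cam}-00001_{pano}.jpg', '300', '0000482')
  // -> 'images/140616/Paris-140616_0740-300-00001_0000482.jpg'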

Images are, for now, considered to be organized as a set of panoramics, where each panoramic is itself a set of unassembled images from a multi-camera panoramic head. We assume that the panoramic head is rigid, so that the relative positions and rotations of the image sensors are constant across panoramics. Image metadata is therefore provided by two JSON files: panoramic poses and camera calibrations.

TODO: Make the frame conventions explicit and provide an illustration.

Panoramic poses

The panoramic head moving frame is defined by a JSON file providing an array of panoramic objects:

[
 {
   "pano": "0000482",
   "easting": 651187.76,
   "northing": 6861379.05,
   "altitude": 39.39,
   "heading": 176.117188,
   "roll": 0.126007,
   "pitch": 1.280821,
   "date": "2014-06-16T12:31:34.841"
 },
 ...
]
  • pano is the string id of this panoramic.
  • easting, northing and altitude give the origin of the moving panoramic frame, defined in some static world coordinate system. The example above uses Lambert 93 coordinates.
  • heading, roll and pitch provide the orientation of the moving frame using three Euler angles (in degrees).
  • date denotes the common acquisition time of all the images of this panoramic, as an RFC 2822 or ISO 8601 date string (a sketch for reading one pose entry follows this list).
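
For illustration only, here is a minimal sketch of reading one pose entry into a position and angles in radians; the function name is hypothetical, and the Euler axis order applied to heading, roll and pitch is not specified here (see the frame-convention TODO above), so no rotation matrix is built:

  // Sketch: read one panoramic pose entry.
  // The Euler axis order for heading/roll/pitch is left unspecified (see the TODO above).
  function readPanoramicPose(entry) {
    var degToRad = Math.PI / 180;
    return {
      pano: entry.pano,
      position: [entry.easting, entry.northing, entry.altitude], // world coordinates (e.g. Lambert 93)
      heading: entry.heading * degToRad,
      roll: entry.roll * degToRad,
      pitch: entry.pitch * degToRad,
      date: new Date(entry.date)
    };
  }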

Camera calibrations

Camera calibrations should be provided as a JSON file containing an array of objects of the following form, describing all the cameras of the panoramic head:

[
 {
  "cam": "300",
  "mask": "images/mask300.jpg",
  "rotation": [
    0.000735648, -0.00148884, -0.999999,
    0.999998   ,  0.00205265,  0.000732591,
    0.00205156 , -0.999997  ,  0.00149035
  ],
  "position": [ -0.145711, -0.0008142, -0.867 ],
  "projection": [
    1150.66785706630299,    0               , 1030.29197487242254,
       0               , 1150.66785706630299, 1023.03935469545331,
       0               ,    0               ,    1
  ],
  "size" : [2048, 2048],
  "distortion": {
    "pps": [1042.178,1020.435],
    "poly357": [ -1.33791587603751819E-7, 3.47540977328314388E-14, -4.44103985918888078E-21 ]
  },
  "orientation": 3
 },
 ...
]
  • cam provides the camera name for this calibration (which may be used to generate image URLs)

  • mask is an optional attribute giving the URL, relative to this JSON file, of a gray-level image. The mask is not necessarily of the same size as the image, as it will be stretched automatically. White pixels are 100% masked and black pixels are fully kept. As such, not providing a mask image is equivalent to providing a 1x1 black image.

  • rotation is an array of 9 floats, describing a 3x3 rotation matrix from panoramic coordinates (to be defined) to camera coordinates (X along lines, Y along columns, Z along the optical axis)

  • position denotes the image center position with respect to the panoramic coordinate frame

  • projection provides a 3x3 projection matrix from the 3D camera frame to the projective 2D image frame (in pixels). The common case is to provide the following matrix, where focal denotes the focal length along rows and columns in pixels and ppa denotes the principal point of autocollimation in pixels (a projection sketch is given after this list):

      focal.x , 0       , ppa.x,
      0       , focal.y , ppa.y,
      0       , 0       , 1
    
  • size is the sensor size (in pixels). distortion is optional; its pps attribute is the distortion center (in pixels).

  • poly357 provides the r3, r5 and r7 coefficients of the distortion polynomial ([0,0,0] means no distortion).

  • orientation adds a further rotation of 0, 90, 180 or 270 degrees around the optical axis. It is redundant with rotation and is likely to be deprecated in future releases. orientation:3 denotes no rotation.
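
To illustrate how these attributes fit together, here is a minimal, unofficial sketch of projecting a point expressed in the panoramic frame into pixel coordinates. The function name is hypothetical, and it assumes that the rotation matrix is stored in row-major order and that the distortion is a radial polynomial applied in pixel space around pps; both conventions are assumptions rather than facts stated above.

  // Sketch: project a 3D point, given in the panoramic frame, to pixel coordinates.
  // Assumptions: `rotation` is row-major, and distortion is radial around `pps`
  // with dr = c3*r^3 + c5*r^5 + c7*r^7 applied in pixel space.
  function projectToPixel(calib, point) {
    var R = calib.rotation, C = calib.position;
    // Camera frame: rotate the vector from the camera position to the point.
    var dx = point[0] - C[0], dy = point[1] - C[1], dz = point[2] - C[2];
    var x = R[0]*dx + R[1]*dy + R[2]*dz;
    var y = R[3]*dx + R[4]*dy + R[5]*dz;
    var z = R[6]*dx + R[7]*dy + R[8]*dz;
    // Apply the 3x3 projection matrix and divide by the homogeneous coordinate.
    var P = calib.projection;
    var u = P[0]*x + P[1]*y + P[2]*z;
    var v = P[3]*x + P[4]*y + P[5]*z;
    var w = P[6]*x + P[7]*y + P[8]*z;
    if (w <= 0) return null;                // point behind the camera
    u /= w; v /= w;
    // Optional radial distortion around the distortion center `pps`.
    if (calib.distortion) {
      var pps = calib.distortion.pps, c = calib.distortion.poly357;
      var du = u - pps[0], dv = v - pps[1];
      var r2 = du*du + dv*dv;
      var k = c[0]*r2 + c[1]*r2*r2 + c[2]*r2*r2*r2; // c3*r^2 + c5*r^4 + c7*r^6
      u += du*k;
      v += dv*k;
    }
    return [u, v];
  }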

For performance, you should resample some or all of your images to remove any distortion and encode the mask in their alpha channel.

Terrain (DTM)

The terrain DTM should be provided as a JSON file containing an array of 3D sensor positions. Here is an example from the sample data, in Lambert 93:

[
 {"e":651432.81,"n":6861436.46,"h":39.7},
 {"e":651430.81,"n":6861420.57,"h":39.86},
 ...
 {"e":651428.89,"n":6861405.6,"h":40.01}
]

Those positions are then processed to create an elevation grid, assuming a constant sensor altitude above the terrain.
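
As an illustration only (not necessarily what iTowns does internally), the terrain height near a point could be estimated from such an array by taking the nearest sensor position and subtracting an assumed constant sensor height; the function name and the sensorHeight parameter are placeholders:

  // Sketch: estimate the ground height near (e, n) from sensor positions,
  // assuming the sensor sits at a constant height above the ground.
  function groundHeight(dtm, e, n, sensorHeight) {
    if (dtm.length === 0) return undefined;
    var best = null, bestD2 = Infinity;
    for (var i = 0; i < dtm.length; i++) {
      var de = dtm[i].e - e, dn = dtm[i].n - n;
      var d2 = de * de + dn * dn;
      if (d2 < bestD2) { bestD2 = d2; best = dtm[i]; }
    }
    return best.h - sensorHeight;
  }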

PointCloud

Point clouds should be served as binary files. By default, each file is organized as a flat sequence of X,Y,Z,I records (X,Y,Z,I, X,Y,Z,I, ..., X,Y,Z,I), all as Float32 values. You can of course add a pivot in LaserCloud.js.
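
For illustration, here is a minimal sketch (function name hypothetical) of loading and unpacking such a binary file in the browser, assuming little-endian Float32 values and four floats per point:

  // Sketch: fetch a point cloud file and unpack its X,Y,Z,I records.
  function loadPointCloud(url, onLoad) {
    var request = new XMLHttpRequest();
    request.open('GET', url, true);
    request.responseType = 'arraybuffer';
    request.onload = function () {
      var floats = new Float32Array(request.response); // 4 floats per point
      var points = [];
      for (var i = 0; i + 3 < floats.length; i += 4) {
        points.push({ x: floats[i], y: floats[i + 1], z: floats[i + 2], intensity: floats[i + 3] });
      }
      onLoad(points);
    };
    request.send();
  }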

To match a point cloud with a panoramic image, the acquisition times of both are needed. For this reason, by default, the application looks at the acquisition time of the panoramic and loads the corresponding point clouds, using the time (in 100 ms units) as the filename. For example, in the sample data, the file 450608.bin contains 100 ms of lidar acquisition starting at 450608 * 100 ms on day D (the directory of the data). In the sample data, we use two lidar files for the same 100 ms period: one holds 10% of the points and the other 90%, for faster loading (a cheap LOD).

More precisely, what happens by default is:

  • Get the desired point cloud acquisition time (the time of the current panoramic, e.g. 12h31 on the 16th of June 2014)
  • Load a few seconds of point clouds around this time (140616/LR/450608.bin, 140616/LR/450609.bin, ..., 140616/HR/450608.bin, 140616/HR/450609.bin, ...), as in the sketch after this list
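
For illustration, here is a hedged sketch of deriving such filenames from a panoramic date, following the 100 ms slot convention described above; the function name, the range parameter and the handling of time zones are assumptions:

  // Sketch: list point cloud filenames around a panoramic acquisition date.
  // Filenames are 100 ms slot indices counted from the start of the day
  // (time-zone handling is ignored in this sketch).
  function pointCloudFilesAround(dateString, seconds) {
    var date = new Date(dateString);                 // e.g. "2014-06-16T12:31:34.841"
    var midnight = new Date(date);
    midnight.setHours(0, 0, 0, 0);
    var slot = Math.floor((date - midnight) / 100);  // index of the 100 ms slot
    var halfRange = Math.round(seconds * 10 / 2);    // 10 slots of 100 ms per second
    var files = [];
    for (var i = slot - halfRange; i <= slot + halfRange; i++) {
      files.push(i + '.bin');
    }
    return files;
  }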

Buildings

TODO

Textured buildings

TODO
