General wiki

To consult the general wiki page of the iTowns project, go to https://github.com/iTowns/itowns-project/wiki

Data preparation

Images

Images should be accessible over HTTP using a URL template (see below). Their format should be compatible with the THREE.js WebGL library, such as JPEG or PNG.

The image locations are given by a URL template, like images/140616/Paris-140616_0740-{cam.id}-00001_{pano.id:07}.jpg, where cam.id and pano.id are placeholders for the camera and panoramic identifiers, respectively. For instance, with cam.id = 300 and pano.id = 482, this template resolves to images/140616/Paris-140616_0740-300-00001_0000482.jpg (the :07 specifier zero-pads the identifier to seven digits).

Images are, for now, assumed to form a set of panoramics, where each panoramic is itself a set of unassembled images taken by a multi-camera panoramic head. The head is assumed to be rigid, so that the relative positions and rotations of the image sensors are constant across panoramics. Image metadata is therefore provided by two JSON files: panoramic poses and camera calibrations.

var images = {
  url       : "../{lod}/images/{YYMMDD}/Paris-{YYMMDD}_0740-{cam.id}-00001_{pano.id:07}.jpg", // image URL template
  lods      : ['itowns-sample-data-small', 'itowns-sample-data'], // values substituted for {lod}
  cam       : "cameraCalibration.json",   // camera calibrations (see below)
  pano      : "panoramicsMetaData.json",  // panoramic poses (see below)
  buildings : "buildingFootprint.json",   // 2D building footprints (see below)
  DTM       : "dtm.json",                 // terrain DTM (see below)
  visible   : true                        // show the image layer
}

Note that you can readily access IIP image servers (https://github.com/ruven/iipsrv) using a configuration like the following, which specifies three levels of detail with increasing image sizes w and qualities q:

url : "http://your.website.com/cgi-bin/iipsrv.fcgi?FIF=/your/path/prefix-{cam.id}-00001_{pano.id}.jp2&WID={lod.w}&QLT={lod.q}&CVT=JPEG",
lods : [{w:32,q:50},{w:256,q:80},{w:2048,q:80}]
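
As an illustration of ours (not part of iTowns), this snippet prints the URL that each lod resolves to; the {cam.id} and {pano.id} placeholders are left to the library as described above:

var template = "http://your.website.com/cgi-bin/iipsrv.fcgi?FIF=/your/path/prefix-{cam.id}-00001_{pano.id}.jp2&WID={lod.w}&QLT={lod.q}&CVT=JPEG";
[{w:32,q:50},{w:256,q:80},{w:2048,q:80}].forEach(function(lod) {
  // only the {lod.*} placeholders are substituted here
  console.log(template.replace("{lod.w}", lod.w).replace("{lod.q}", lod.q));
});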

Each panoramic or camera JSON object will be used as a string-format template substitution context using the string_format JS library (https://github.com/monolithed/string_format). This provides very powerful customisation possibilities, as can be seen in https://github.com/iTowns/itowns.github.io/blob/master/v1demo.html. For instance, you can add the following keys to define custom template patterns:

UTCOffset : 15, // custom constant (in seconds), used by seconds() below
YYMMDD : function() { // formats pano.date as a YYMMDD string, e.g. "140616"
  var d = new Date(this.pano.date);
  return (""+d.getUTCFullYear()).slice(-2) + ("0"+(d.getUTCMonth()+1)).slice(-2) + ("0" + d.getUTCDate()).slice(-2);
},
seconds : function() { // seconds since the start of the UTC day, minus UTCOffset
  var d = new Date(this.pano.date);
  return (d.getUTCHours()*60 + d.getUTCMinutes())*60 + d.getUTCSeconds() - this.UTCOffset;
}
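
As a standalone illustration of ours (assuming, as the functions above do, that the substitution context exposes the current panoramic object as this.pano), here is how these keys evaluate for a sample date:

var ctx = {
  UTCOffset : 15,
  pano : { date : "2014-06-16T12:31:34.841Z" },
  YYMMDD : function() {
    var d = new Date(this.pano.date);
    return (""+d.getUTCFullYear()).slice(-2) + ("0"+(d.getUTCMonth()+1)).slice(-2) + ("0" + d.getUTCDate()).slice(-2);
  },
  seconds : function() {
    var d = new Date(this.pano.date);
    return (d.getUTCHours()*60 + d.getUTCMinutes())*60 + d.getUTCSeconds() - this.UTCOffset;
  }
};
console.log(ctx.YYMMDD());  // "140616"
console.log(ctx.seconds()); // 45079 = (12*60 + 31)*60 + 34 - 15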

TODO: Make the frame conventions explicit and provide an illustration.

Panoramic poses pano

The panoramic head moving frame is defined by a JSON file providing an array of panoramic objects:

[
 {
   "id": 482,
   "easting": 651187.76,
   "northing": 6861379.05,
   "altitude": 39.39,
   "heading": 176.117188,
   "roll": 0.126007,
   "pitch": 1.280821,
   "date": "2014-06-16T12:31:34.841Z"
   // custom key:value or key:function() pairs of your choice
 },
 ...
]
  • id is the identifier of this panoramic.
  • easting, northing, altitude give the origin of the moving panoramic frame in some static world coordinate system. The example above uses Lambert93 coordinates.
  • heading, roll and pitch provide the orientation of the moving frame using three Euler angles (in degrees).
  • date is the acquisition time, assumed simultaneous for all the images of this panoramic, as an RFC 2822 or ISO 8601 date string.
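
As a usage sketch of ours (not part of the API), here is a helper that picks the panoramic nearest to a given easting/northing in this parsed array:

function nearestPanoramic(panoramics, easting, northing) {
  var best = null, bestD2 = Infinity;
  panoramics.forEach(function(p) {
    var de = p.easting - easting, dn = p.northing - northing;
    var d2 = de*de + dn*dn; // squared horizontal distance
    if (d2 < bestD2) { bestD2 = d2; best = p; }
  });
  return best;
}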

Camera calibrations cam

Image calibration should be provided as a JSON file containing an array of objects of the following form, describing all the cameras of the panoramic head:

[
 {
  "id": "300",
  "mask": "images/mask300.jpg",
  "rotation": [
    0.000735648, -0.00148884, -0.999999,
    0.999998   ,  0.00205265,  0.000732591,
    0.00205156 , -0.999997  ,  0.00149035
  ],
  "position": [ -0.145711, -0.0008142, -0.867 ],
  "projection": [
    1150.66785706630299,    0               , 1030.29197487242254,
       0               , 1150.66785706630299, 1023.03935469545331,
       0               ,    0               ,    1
  ],
  "size" : [2048, 2048],
  "distortion": {
    "pps": [1042.178,1020.435],
    "poly357": [ -1.33791587603751819E-7, 3.47540977328314388E-14, -4.44103985918888078E-21 ]
  },
  "orientation": 3
   // custom key:value or key:function() pairs of your choice
 },
 ...
]
  • id provides a camera name for the calibration (which may be used to generate image URLs)

  • mask is an optional attribute: a URL, relative to this JSON file, of a gray-level image. The mask need not be the same size as the image, as it will be stretched automatically. White pixels are 100% masked and black pixels are fully kept. Thus, not providing a mask image is equivalent to providing a 1x1 black image.

  • rotation is an array of 9 floats, describing a 3x3 rotation matrix from panoramic coordinates (to be defined) to camera coordinates (X along lines, Y along columns, Z along the optical axis)

  • position denotes the image center position with respect to the panoramic coordinate frame

  • projection provides a 3x3 projection matrix from the 3D camera frame to the projective 2D image frame (in pixels). The common case is to provide the following matrix, where focal denotes the focal length along rows and columns in pixels and ppa denotes the principal point of autocollimation in pixels:

      focal.x , 0       , ppa.x,
      0       , focal.y , ppa.y,
      0       , 0       , 1
    
  • size is the sensor size (in pixels).

  • distortion is optional: pps is the distortion center (in pixels), and poly357 provides the r3, r5 and r7 coefficients of the radial distortion polynomial ([0,0,0] means no distortion). The sketch after this list shows one way these fit together.

  • orientation adds a further rotation of 0, 90, 180 or 270 degrees around the optical axis. It is redundant with rotation and is likely to be deprecated in future releases. orientation: 3 denotes no rotation.
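
To make these attributes concrete, here is a hedged sketch of ours (not the iTowns implementation) that projects a 3D point onto an image. It assumes the rotation array is row-major as printed above, that position is the camera center so that camera coordinates are R * (p - position), and that the polynomial distorts (rather than undistorts) the projected pixel; none of these conventions are confirmed on this page (see the TODO above), and orientation is ignored.

function project(cam, p) { // p = [x, y, z] in the panoramic frame
  // panoramic frame -> camera frame: R * (p - position)
  var px = p[0] - cam.position[0],
      py = p[1] - cam.position[1],
      pz = p[2] - cam.position[2];
  var R = cam.rotation;
  var x = R[0]*px + R[1]*py + R[2]*pz,
      y = R[3]*px + R[4]*py + R[5]*pz,
      z = R[6]*px + R[7]*py + R[8]*pz;
  // camera frame -> pixels, using the 3x3 projection matrix
  var K = cam.projection;
  var u = (K[0]*x + K[1]*y + K[2]*z) / z,
      v = (K[3]*x + K[4]*y + K[5]*z) / z;
  if (!cam.distortion) return [u, v];
  // radial distortion around pps: dr = r3*r^3 + r5*r^5 + r7*r^7
  var c = cam.distortion.poly357, pps = cam.distortion.pps;
  var du = u - pps[0], dv = v - pps[1];
  var r2 = du*du + dv*dv;
  var f = c[0]*r2 + c[1]*r2*r2 + c[2]*r2*r2*r2; // = r3*r^2 + r5*r^4 + r7*r^6
  return [u + f*du, v + f*dv];
}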

For performance, you should resample some or all of your images to remove any distortion, and encode the mask in the image's alpha channel.

Terrain DTM

The terrain DTM should be provided as a JSON file containing an array of 3D sensor positions. Here is an example from the sample data, in Lambert93:

[
  {"e":651432.81,"n":6861436.46,"h":39.7},
  {"e":651430.81,"n":6861420.57,"h":39.86},
  ...
]     

To get ground points from these sensor positions, we currently apply a constant, hardcoded vertical offset of 2.1 m. These ground positions are then automatically processed to create an elevation grid that guides the immersive navigation and provides the ground geometry during the immersive texture reprojections.
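
As a minimal sketch of ours (not the iTowns code) of this offsetting step, assuming sensorPositions holds the parsed dtm.json array:

var GROUND_OFFSET = 2.1; // hardcoded vertical offset, in meters (see above)
var groundPoints = sensorPositions.map(function(p) {
  return { e: p.e, n: p.n, h: p.h - GROUND_OFFSET }; // easting, northing, ground height
});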

2D Buildings

Unlike the textured buildings seen in fly mode, these buildings are never looked at directly: they are only used as geometry proxies (similar to the ground mesh) to reproject panoramic images in the immersive mode. The format is JSON and thus more or less self-describing. Beware that it is not yet standard GeoJSON (wait for itowns2!).

LIDAR Point Cloud

Point clouds should be served as binary files. By default, each file is organized as a headerless sequence of binary-encoded Float32 values (hence bitsPerAttribute = 32), four per point, where X,Y,Z is the 3D position and I is the lidar intensity or reflectance:

 X1,Y1,Z1,I1, X2,Y2,Z2,I2,..., XN,YN,ZN,IN.

To prevent precision issues due to the float encoding of large values, we actually use offset coordinates, with an offset attribute provided through the main API call via the pointCloud option object:

var pointCloud = {
  offset : {x:650000,y:0,z:6860000}, // subtracted from coordinates before encoding
  delta : 30,                        // half-width of the range of {id}s to load
  url : 'pointclouds/{images.YYMMDD}/{lod}/{id}.bin',
  bitsPerAttribute : 32,             // Float32 attributes
  lods : ['LR','HR'],                // low resolution first, then high resolution
  id : function() { return parseInt(10*this.images.seconds()); } // 100 ms resolution
  // visible : false
};
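
Assuming the pointCloud object above is in scope, here is a minimal decoding sketch of ours (not part of the API) for one chunk, following the default layout described above (interleaved X,Y,Z,I Float32 values, no header) and adding the offset back to recover absolute coordinates:

fetch('pointclouds/140616/HR/450608.bin')
  .then(function(response) { return response.arrayBuffer(); })
  .then(function(buffer) {
    var data = new Float32Array(buffer); // bitsPerAttribute = 32
    for (var i = 0; i < data.length; i += 4) {
      var x = data[i]     + pointCloud.offset.x,
          y = data[i + 1] + pointCloud.offset.y,
          z = data[i + 2] + pointCloud.offset.z,
          intensity = data[i + 3]; // lidar intensity/reflectance
      // ... use x, y, z, intensity
    }
  });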

The general idea is that the current panoramic image determines the point clouds to display. The example above is based on their acquisition times: images.seconds and images.YYMMDD. Panoramic timestamps are provided by the date attribute of their JSON metadata file. The process looks at the acquisition time of the current panoramic to generate the URLs of the corresponding point clouds with the pattern pointclouds/{images.YYMMDD}/{lod}/{id}.bin, such as pointclouds/140616/HR/450608.bin. Identifiers {id} will be generated between id-delta and id+delta. visible:true actually makes the point cloud layer visible (the default is false).

  • the date part ({images.YYMMDD}) follows the YYMMDD formatting
  • {lod} is either LR (10% of the data) or HR (the remaining 90%)
  • {id} is a custom function call that computes an integer from the number of seconds since the start of the day (given by the panoramic's date), using a 100 ms resolution, effectively grouping points into chunks of 0.1 seconds.
  • all custom attributes and functions of the images option object are available as images.XXX template patterns, such as images.YYMMDD and images.seconds here.

In the sample data, the two lidar files LR and HR covering the same 100 ms period are used for a simple LOD approach.

More practically, what happens is the following (a code sketch follows this list):

  • Look up the timestamp of the current panoramic, e.g. 12:31 on June 16, 2014
  • Derive the date and the number of seconds since the start of the day
  • Load a few seconds of point clouds around this time, beginning with low-resolution chunks: 140616/LR/450608.bin, 140616/LR/450609.bin, etc., and then 140616/HR/450608.bin, 140616/HR/450609.bin...
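
In code, a simplified sketch of ours of this URL generation (the actual loader logic may differ; seconds is hardcoded here for the example panoramic):

var seconds = 45060.8;           // derived from the current panoramic's date
var id = parseInt(10 * seconds); // 100 ms resolution -> 450608
var urls = [];
['LR', 'HR'].forEach(function(lod) { // low resolution first, then high resolution
  for (var i = id - pointCloud.delta; i <= id + pointCloud.delta; i++) {
    urls.push('pointclouds/140616/' + lod + '/' + i + '.bin');
  }
});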

Textured buildings

This technology preview uses a custom textured-mesh format called B3D. 3DS tiles should also work. This will be heavily reworked in version 2.

API

To conclude, the intended use of the iTowns API is as follows:

var itowns;
function initialize() {
  if (typeof allInitialized != 'undefined' && allInitialized) {   // Check that everything is loaded
     itowns = new API({
        images     : images,
        pointCloud : pointCloud,
        buildings  : { url: "Buildings3D/"},
        position   : { x:651182.91,y:39.6,z:6861343.03 }
     });
  } else {
      setTimeout(initialize, 150);
  }
}
initialize();

The basic idea is to poll the allInitialized variable to wait for the API initialization and then construct an API object with a combination of images, pointCloud and 3D buildings layers. The current implementation is limited to zero or one layer of each type. As its name suggests, position provides the starting viewing position.

API.js provides a limited number of methods to get and set parameters such as visibility and position and to register callbacks on specific events.

Contributing

The code base includes readers for other lidar file formats and other 3D model formats that are not available through the main API. As development efforts are concentrated on the github.com/itowns/itowns2 refactoring, exposing them through the API has been deprioritized. We will however consider any pull request on this topic.