
Provide the DeviceOrientation sensor #170

Closed
pozdnyakov opened this issue Feb 17, 2017 · 71 comments

@pozdnyakov

At the moment, developers run into problems when merging data from different motion sensors (which most use cases require): the sensor data updates arrive from the HW at the same time, but that is not obvious to the user, who is now dealing with multiple sensor instances (gyroscope, accelerometer, magnetometer), each with its own onchange event.

So, it would be beneficial to have a new DeviceOrientation class providing the merged device orientation data in a convenient way with a single 'onchange' event. This was also discussed on a DAP call: https://lists.w3.org/Archives/Public/public-device-apis/2017Feb/0003.html

I would like to provide a solution for this and draft an API for such a sensor. Please consider the following piece of IDL as a starting point for discussion:

interface DeviceOrientation : Sensor {
  /**
   * Populates the passed 16-element array with the orientation matrix
   */
   void getOrientation(Float32Array);
}
@pozdnyakov pozdnyakov self-assigned this Feb 17, 2017
@pozdnyakov
Author

@kenchris @tobie @anssiko @alexshalamov wdyt?

@kenchris
Contributor

I definitely think we should have a few higher-level sensors like Orientation, but for efficiency's sake it is probably best to just create sensors exposing the data directly instead of having people pass in other sensors.

If we end up having sensors integrated with Gamepad (like for WebVR), then that also makes more sense; for instance, the Daydream controller already exposes orientation as a fused value as well as the raw gyro and accel values.

@tobie
Member

tobie commented Feb 17, 2017

Absolutely. This has been the plan from day 1 and is why I'm pushing for a single motion sensor spec rather than separate specs for each combination of accelerometer, gyroscope and magnetometer.

@tobie
Member

tobie commented Feb 20, 2017

WRT the API proposal, you want to design a more generic BYOB (bring your own buffer) solution, bake it into either the Sensor API itself or maybe just MotionSensor, and handle batching (and a batchfull event?) while you're at it.

@pozdnyakov
Author

It seems the batching API is beyond the scope of this issue. I'm still thinking about the device orientation representation that will be most convenient for the user. There are a few options, like quaternions, rotation matrices, or Euler angles, and we could even have several BYOB methods for different representations.

@tobie
Member

tobie commented Feb 20, 2017

There are a few options, like quaternions, rotation matrices, or Euler angles, and we could even have several BYOB methods for different representations.

Yes. You'll definitely want the different representations. Whether you obtain them through different constructors, different construction arguments, different method names or different method arguments is still TBD. What's certain, however, is that you want this defined consistently at the MotionSensor or Sensor level.

It seems the batching API is beyond the scope of this issue.

On the contrary, I think the hard API to solve is the batching one. That's the one we need to look at first. It's easy to handle the BYOB single-sample use case once we know where we're going with the more complex cases and create a consistent API that works across sensor types.

@pozdnyakov
Author

On the contrary, I think the hard API to solve is the batching one. That's the one we need to look at first.

Agreed, I just meant that the batching API deserves a dedicated issue, or is #98 supposed to track it?

@tobie
Member

tobie commented Feb 20, 2017

Agreed, I just meant that the batching API deserves a dedicated issue, or is #98 supposed to track it?

Oops. Sorry. :D

Yeah, I'm sort of tracking that in #98 for now. I guess I was waiting to get a better idea of the requirements before opening a dedicated issue, but feel free to open one now.

@pozdnyakov
Author

Here is an updated IDL with a better description

[Constructor(optional SensorOptions sensorOptions)]
interface DeviceOrientationSensor : Sensor {
  /**
   * Populates the passed 4-element array with the device orientation data
   * represented as a unit quaternion:
   * [cos(theta/2), Vx*sin(theta/2), Vy*sin(theta/2), Vz*sin(theta/2)], where
   * Vx, Vy, and Vz are the coordinates of a unit vector representing the
   * axis of rotation, and theta is the angle of rotation about that axis.
   *
   * The x-, y-, and z-axes are defined as follows:
   * x-axis points east
   * y-axis points north
   * z-axis points up and is perpendicular to the ground
   *
   */
  void getQuaternion(Float32Array quaternion);

  /**
   * Populates the passed 16-element array with the 4*4 WebGL-compatible
   * rotation matrix, calculated from the contained quaternion.
   *
   * The given array is populated as shown below:
   * [ matrix[0],  matrix[1],  matrix[ 2],  0,
   *   matrix[4],  matrix[5],  matrix[ 6],  0,
   *   matrix[8],  matrix[9],  matrix[10],  0,
   *   0,              0,              0,   1]
   *
   */
  void getRotationMatrix(Float32Array matrix);
}
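To illustrate the quaternion layout documented in the IDL comment above, here is a small plain-JS sketch that builds such a quaternion from an axis and an angle. This is only an illustration; the helper name is mine and not part of the proposal.

```javascript
// Build a unit quaternion laid out as documented above:
// [cos(theta/2), Vx*sin(theta/2), Vy*sin(theta/2), Vz*sin(theta/2)],
// where (Vx, Vy, Vz) is the axis of rotation and theta the angle (radians).
function quaternionFromAxisAngle(vx, vy, vz, theta) {
  const len = Math.hypot(vx, vy, vz);
  const s = Math.sin(theta / 2) / len; // normalize the axis while scaling
  return Float32Array.of(Math.cos(theta / 2), vx * s, vy * s, vz * s);
}

// 90-degree rotation about the z-axis (the axis pointing up):
const q = quaternionFromAxisAngle(0, 0, 1, Math.PI / 2);
// q is approximately [0.7071, 0, 0, 0.7071]
```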

@tobie
Member

tobie commented Feb 24, 2017

So this API fills in the buffer synchronously? While the batching API will do so async? Seems confusing. Do we really want to offer two different APIs for this? Won't we often be in a case where there might be more than one sample per animation frame? E.g. especially if a frame is delayed.

Providing quaternions doesn't seem to be specific to DeviceOrientation, though. It seems like a data representation strategy for a vector plus acceleration combo. Would be useful elsewhere too, e.g. for gyroscopes.

I need better clarity on which data representation is useful for which high and low level sensors, so we can figure out how to organize this across all motion sensors as a consistent and coherent whole.

@kenchris
Contributor

Those void getQuaternion(Float32Array quaternion); methods should probably be called populate*; if they just returned an object, it should just be a property. Making sure these work nicely with the popular glMatrix library would be nice, and they seem to. I am a big fan of these basically just being arrays. It should also be possible to easily construct a DOMMatrix from them.

@kenchris
Contributor

kenchris commented Feb 27, 2017

You could also look at the quaternion a + bi + cj + dk as a scalar plus a 3-dimensional vector, which seems to be what you do implicitly in the comment, though you don't explain that very well.

"z-axis points up and is perpendicular to the ground" - and by ground you mean the plane made up of the x and y axes - especially because the actual ground will not be 100% stable and won't always match that plane.

@pozdnyakov
Author

So this API fills in the buffer synchronously? While the batching API will do so async? Seems confusing. Do we really want to offer two different APIs for this? Won't we often be in a case where there might be more than one sample per animation frame? E.g. especially if a frame is delayed.

IMO we need to distinguish between a "getCurrentValue" API returning the freshest data from the HW and a batching API collecting the previous values with weaker latency requirements: these two kinds of APIs have to be used and implemented in different ways.

Providing quaternions doesn't seem to be specific to DeviceOrientation, though. It seems like a data representation strategy for a vector plus acceleration combo. Would be useful elsewhere too, e.g. for gyroscopes.

At the platform level (in Windows and Android), the sensor APIs use quaternions only to represent device orientation.

@tobie
Member

tobie commented Feb 27, 2017

VR heavily relies on gyroscope data presented as quaternions. Worth digging into, imho.

@pozdnyakov
Author

VR heavily relies on gyroscope data presented as quaternions. Worth digging into, imho.

@tobie so, are you considering moving the quaternion attribute to a base class (e.g. MotionSensor)?

@kenchris
Contributor

kenchris commented Feb 27, 2017

Quaternions are a very nice and compact way of representing rotations. I.e. you can multiply quaternions to compose rotations (though multiplication is non-commutative, as expected, since rotations are). They are basically an evolution of complex numbers (which are good for rotations in 2D space), and you can easily build a rotation matrix from them.

Good intro: https://www.youtube.com/playlist?list=PLpzmRsG7u_gr0FO12cBWj-15_e0yqQQ1U

Build rotation matrix from quaternions multiplication: http://www.euclideanspace.com/maths/geometry/rotations/conversions/quaternionToMatrix/

(the video above explains it quite well)
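The quaternion-to-matrix conversion described in the euclideanspace.com link can be written directly in plain JS. This is a sketch under a couple of assumptions: the quaternion is a unit quaternion laid out as [w, x, y, z], and the 16-element result is column-major, as WebGL and glMatrix expect.

```javascript
// Convert a unit quaternion [w, x, y, z] into a 4x4 rotation matrix,
// stored column-major in a Float32Array (the layout WebGL/glMatrix use).
function quaternionToMatrix([w, x, y, z]) {
  const m = new Float32Array(16); // remaining entries stay 0
  m[0] = 1 - 2 * (y * y + z * z);
  m[1] = 2 * (x * y + w * z);
  m[2] = 2 * (x * z - w * y);
  m[4] = 2 * (x * y - w * z);
  m[5] = 1 - 2 * (x * x + z * z);
  m[6] = 2 * (y * z + w * x);
  m[8] = 2 * (x * z + w * y);
  m[9] = 2 * (y * z - w * x);
  m[10] = 1 - 2 * (x * x + y * y);
  m[15] = 1;
  return m;
}

// The identity quaternion yields the identity matrix:
const identity = quaternionToMatrix([1, 0, 0, 0]);
```

A matrix produced this way can be fed straight into gl.uniformMatrix4fv or combined with glMatrix mat4 operations.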

@pozdnyakov
Author

I am a big fan on these basically just being arrays. It should also be possible to easily construct a DOMMatrix from them.

What do you mean by just being arrays (aren't they arrays at the moment)?
Thanks for your comments btw :-)

@kenchris
Contributor

Yes I am a fan of you using Float32Array :)

@tobie
Member

tobie commented Feb 27, 2017

Yes I am a fan of you using Float32Array :)

That seems trendy these days. It would be nice to have some rationale behind the trend, if only to be able to justify that decision.

@pozdnyakov
Author

Quaternions are a very nice and compact way of representing rotations. I.e. you can multiply quaternions to compose rotations (though multiplication is non-commutative, as expected, since rotations are). They are basically an evolution of complex numbers (which are good for rotations in 2D space), and you can easily build a rotation matrix from them.

And they are also not susceptible to gimbal lock, which is a big benefit.

@tobie
Member

tobie commented Feb 27, 2017

@tobie so, are you considering moving the quaternion attribute to a base class (e.g. MotionSensor)?

As mentioned above, I'm not considering much right now beyond what's needed to provide a consistent and coherent API across sensors that's easy to use, yet allows for high performance, low latency, and limits GC cycles.

At this point, we don't even have a list of motion sensors we want to expose, we don't have a list of how we want to expose that data, we don't have clear requirements derived from the use cases, and we don't even have use cases written down, so I really don't have the info to answer that question.

@kenchris
Contributor

kenchris commented Feb 27, 2017

Well, with arrays you can use methods of, say, glMatrix as if the matrices were created using it.

i.e.

mat4.rotateZ(this.finalMatrix, sensor.getRotationMatrixAsFloat32Array(), window.screen.orientation.angle * degToRad);

@pozdnyakov
Author

Well, with arrays you can use methods of, say, glMatrix as if the arrays were created using it.

that was exactly the intention :)

@tobie
Member

tobie commented Feb 27, 2017

Cool. Can we turn that into a use case and derive those as requirements? That would be pretty awesome.

@pozdnyakov
Author

pozdnyakov commented Feb 28, 2017

Cool. Can we turn that into a use case and derive those as requirements? That would be pretty awesome.

Let the following be the first draft:

Use cases

  1. A 3D compass web application monitors the device's orientation and aligns the compass heading in a three-dimensional space.

  2. A web-based game uses the device's orientation relative to the Earth's surface as user input, for example a simulation of a ball-in-a-maze puzzle.

  3. A WebVR API polyfill implementation tracks the headset's (or, for mobile VR, the mobile device's) orientation.

  4. A mapping web application orients the 2D map to match the orientation of the device.

Functional requirements

  1. The API must provide data that describes the device's physical orientation in a stationary 3D coordinate system.
  2. The API must provide data in a representation that matches that of the WebGL interfaces and methods.
  3. The API must address the VR requirements for high performance and low motion-to-photon latency, and must not introduce significant overhead over the corresponding native platform APIs.
  4. The API must extend the Generic Sensor API, inherit its functionality, and satisfy its requirements.

Non-functional requirements

  1. The API must provide developer-friendly and consistent higher-level abstractions optimised for the common use cases.

@tobie
Member

tobie commented Feb 28, 2017

Oops, sorry. :-/ I didn't mean use cases in general. I meant specific use cases and rationale for presenting the data in array format (as that's a departure from platform conventions and I want us to document why we're doing it).

@kenchris
Contributor

kenchris commented Feb 28, 2017

  • avoids unnecessary object construction and wrapping of objects -> less performance/memory overhead
  • matrices/vectors etc. are basically array-formed data, and methods working on them can all be adapted to work on arrays directly in a straightforward manner (matrices laid out per row is the only requirement). That is not the case for specific objects, where methods must know the specifics of the object, like property names.
  • arrays can be modified more easily by external tools, such as SIMD.js and WebAssembly, in a straightforward manner.
  • new objects like DOMMatrix can easily be constructed from array data, and most likely store the data in array form internally.
  • WebGL works on buffers (normals, vertices, colors, matrices, indices), all in array form (Float32Array by default, though Uint16Array for index buffers)
  • as the arrays are typed arrays, data can easily be transmitted over WebUSB or Web Bluetooth, which is handy for implementing external sensors, such as an OrientationSensor for, say, a Daydream controller (using Web Bluetooth: https://www.youtube.com/watch?v=qkvHJf8N7W0)

@pozdnyakov
Author

The idea is to match WebGL, where Float32Array is one of the most commonly used data types.

@tobie
Member

tobie commented Feb 28, 2017

Vectors and quaternions are easy

Well, are they really?

There's a different cost to each API:

sensor.quaternion; // returns TypedArray 

benefit: simpler than BYOB, plays well with WebGL
cost: GC

sensor.acceleration, sensor.x, sensor.y, sensor.z; // returns floats corresponding to the quaternion values

benefit: simpler than BYOB, GC
cost: doesn't play well with WebGL

sensor.buffer = buffer;
requestAnimationFrame(_ => buffer); // returns TypedArray

benefit: GC, plays well with WebGL
cost: dev complexity, especially for simple cases

Not to mention the array-like behavior you were describing (no idea whether we could pull this off or even if it's desirable).

@kenchris
Contributor

kenchris commented Feb 28, 2017

I meant they are easy to represent as an array :-) [scalar, vector(x, y, z)]

Maybe we can combine these above solutions somehow.

Though the second solution seems easy to grasp and use, is it really that much easier when put to actual use? I mean, you most often have to do either

  • apply filters - like a low-pass filter
  • normalize, do a cross product, combine with another rotation

That is why I was thinking about exposing x, y, z etc. as properties, plus implementing [Symbol.iterator] or something similar to easily get the data in array form (that is, in addition to having the populate methods).
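The filtering step mentioned above is indeed trivial over plain arrays, which is part of the appeal. A minimal exponential low-pass filter sketch (an illustration, not part of any proposal):

```javascript
// Simple exponential low-pass filter over a stream of readings.
// alpha in (0, 1]: smaller alpha = heavier smoothing.
function lowPass(prev, next, alpha) {
  return prev.map((p, i) => p + alpha * (next[i] - p));
}

// Smoothing successive [x, y, z] readings toward a steady value:
let filtered = [0, 0, 0];
for (const reading of [[10, -2, 4], [10, -2, 4], [10, -2, 4]]) {
  filtered = lowPass(filtered, reading, 0.5);
}
// filtered is now [8.75, -1.75, 3.5]
```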

@tobie
Member

tobie commented Mar 1, 2017

That is why I was thinking about exposing x, y, z etc. as properties, plus implementing [Symbol.iterator] or something similar to easily get the data in array form (that is, in addition to having the populate methods).

We need to consider the perf impact of this.

@pozdnyakov
Author

That is why I was thinking about exposing x, y, z etc. as properties, plus implementing [Symbol.iterator] or something similar to easily get the data in array form (that is, in addition to having the populate methods).

Could you add an example of client code for this?

@kenchris
Contributor

kenchris commented Mar 1, 2017

Sure, this was just an idea and there might be other similar hooks. This example is a bit contrived but shows how you could do it in JS:

const reading = {
  x: 76, 
  y: -22, 
  z: 36,

  [Symbol.iterator]: function*() {
    for (const i of ["x", "y", "z"]) {
      yield this[i];
    }
  }
};
let arr = [...reading];

-> [76, -22, 36]

Maybe this could be made quite fast in native code. Also an async version of this is scheduled for ES2018: https://github.com/tc39/proposal-async-iteration

@pozdnyakov
Author

I'm just not sure how all of the above helps compared to void populateQuaternion(Float32Array quaternion);. When is it more convenient?

AFAIK splitting the scalar and vector parts will not be beneficial for representing the rotation quaternion (q) itself. This splitting is needed to represent, as a quaternion, a 3-D vector that will be rotated: p = (0 + px*i + py*j + pz*k); then we'll have p' = q*p*(q^-1). Am I missing something?

Also I'd like to add that the BYOB semantics will allow overloading, i.e. it could also accept 3*3 matrices and different data types (Float64Array, DOMMatrix, ..)
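The p' = q*p*(q^-1) step above can be spelled out with a plain Hamilton product. A sketch, assuming quaternions are represented as [w, x, y, z] arrays and q is a unit quaternion (so q^-1 is just the conjugate):

```javascript
// Hamilton product of two quaternions [w, x, y, z].
// Note that qMul(a, b) !== qMul(b, a) in general: rotation composition
// is non-commutative.
function qMul([w1, x1, y1, z1], [w2, x2, y2, z2]) {
  return [
    w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
    w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
    w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
  ];
}

// Rotate a 3-D vector by a unit quaternion q: p' = q * p * q^-1,
// where p = (0, px, py, pz) and q^-1 is the conjugate of the unit q.
function rotate([px, py, pz], q) {
  const conj = [q[0], -q[1], -q[2], -q[3]];
  const [, x, y, z] = qMul(qMul(q, [0, px, py, pz]), conj);
  return [x, y, z];
}

// 90 degrees about the z-axis takes the x unit vector to the y unit vector:
const q = [Math.SQRT1_2, 0, 0, Math.SQRT1_2];
const v = rotate([1, 0, 0], q);
// v is approximately [0, 1, 0]
```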

@kenchris
Contributor

kenchris commented Mar 1, 2017

Yes, it would not help populate, but tobie seemed to think that populate was a bit too advanced for some users, and I thought we might then have multiple ways to access the data, a bit like how the Response object from fetch has a json() method.

@pozdnyakov
Author

Yes, it would not help populate, but tobie seemed to think that populate was a bit too advanced for some users

I believe a person who understands quaternions and rotation matrices should not experience any problems with understanding BYOB :-)

@kenchris
Contributor

kenchris commented Mar 1, 2017

I believe a person who understands quaternions and rotation matrices should not experience any problems with understanding BYOB :-)

I think it was aimed precisely not at those who know that :-) but at simple use-cases. I am just not sure what those simple use-cases are.

@alexshalamov

For basic use-cases we can add a predefined set of orientations. @pozdnyakov wdyt?

enum Orientation {
 "Portrait",
 "PortraitFlipped",
 "Landscape",
 "LandscapeFlipped",
 ....
};

interface DeviceOrientation : Sensor {
   // 4x4 rotation matrix
   void populateRotationMatrix(Float32Array);

   // populates quaternion [cos(θ/2), x*sin(θ/2), y*sin(θ/2), z*sin(θ/2)]
   void populateQuaternion(Float32Array);

   // For basic use-cases
   readonly attribute Orientation orientation;
};

@kenchris
Contributor

kenchris commented Mar 1, 2017

People use device orientation as a controller, etc.; screen orientation is already handled by window.screen.orientation, plus that can be locked by the OS. Also, things like your PortraitFlipped are pretty much standardized as portrait-secondary elsewhere in the platform (i.e. window.screen, manifest, etc.)

@alexshalamov
Copy link

@kenchris https://www.w3.org/TR/screen-orientation/ is for screen orientation, which can be locked, etc. How would someone implement, for example, silencing a WebRTC call by laying the device down on a table with the screen facing down (z-axis facing towards the ground)? With the Screen Orientation API this would be impossible. Using rotation matrices is overkill for such a simple use-case; thus, I think, some helper methods / constants might be useful.
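For what it's worth, the face-down check in that use case is a one-liner over a gravity reading. A sketch, assuming an Android-style device coordinate system where the z-axis points out of the screen, so a device lying face up reads gravity z ≈ +9.81 m/s² and face down reads z ≈ -9.81 m/s² (the helper is hypothetical, not part of any proposal):

```javascript
// Detect the "screen facing down" pose from a gravity reading [x, y, z]
// in m/s^2, assuming the device z-axis points out of the screen.
function isFaceDown([x, y, z], g = 9.81) {
  // Face down means gravity is measured along the negative device z-axis;
  // allow 10% tolerance for sensor noise and an imperfectly flat table.
  return z < -0.9 * g;
}

const lyingFaceDown = isFaceDown([0.1, -0.3, -9.7]); // true
const lyingFaceUp = isFaceDown([0.0, 0.2, 9.8]);     // false
```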

@kenchris
Contributor

kenchris commented Mar 1, 2017

Well, wouldn't an accelerometer be better for that? Or a GravitySensor (gravity isolated from the accelerometer), which Android supports.

@tobie
Member

tobie commented Mar 1, 2017

With the Screen Orientation API this would be impossible. Using rotation matrices is overkill for such a simple use-case; thus, I think, some helper methods / constants might be useful.

Whether or not that use case is actually a valid one—I'd probably argue higher-level events (e.g. oncallpaused) would make more sense, here—this is the very distinction I'm trying to make between high-level and low-level sensors. We should have separate constructors for these, not a huge kitchen-sink API.

There's also little sense for an API with such a coarse output to be able to specify input parameters like polling frequency with such precision. So maybe my initial plan to have MotionSensor extend Sensor is not the right one. Maybe we instead need to have HighFrequencySensor extend Sensor (or something like that)… and a related EventedSensor (barf for the placeholder name) for other, non-high-frequency sensors.

@pozdnyakov
Author

Here is an early (and incomplete) version of the spec draft: https://pozdnyakov.github.io/orientation-sensor/

@tobie
Member

tobie commented Mar 3, 2017

Oh, wow! I've been massively misled by your choice of name which made me think you wanted to emulate the API of the DeviceOrientation Event Specification when it turns out that what you were really after was simply a rotation vector. :-/

@tobie
Member

tobie commented Mar 3, 2017

Is that rotation vector anchored in the geomagnetic north or not? Is it based on gyro data or the accelerometer? Should/can we provide all combinations of all of the above? Should these be different constructors? or options in the same constructor? Are 4x4 matrices just a different representation of the same data?

@pozdnyakov
Author

Oh, wow! I've been massively misled by your choice of name which made me think you wanted to emulate the API of the DeviceOrientation Event Specification when it turns out that what you were really after was simply a rotation vector. :-/

I do not see how one contradicts the other: both DeviceOrientationEvent and the rotation vector provide the same device orientation data, just using different representations (Euler angles in one case and a quaternion in the other).

IMO DeviceOrientationSensor is quite an accurate name for the interface.

@pozdnyakov
Author

Is that rotation vector anchored in the geomagnetic north or not?

Yes. Pls see #170 (comment)

Is it based on gyro data or the accelerometer?

Different options are possible (tagging @kenchris here and his https://github.com/01org/websensor-compass :) ), but fortunately data fusion is already done at the platform level (at least on Android and Windows)

Should/can we provide all combinations of all of the above? Should these be different constructors? or options in the same constructor?

No. Let's just rely on platform API (where it is present).

Are 4x4 matrices just a different representation of the same data?

Yes.

@tobie
Member

tobie commented Mar 3, 2017

My limited understanding is that Android provides both gyro only (not north-anchored) and accelerometer + magnetometer (north-anchored) and that VR needs the former for latency reasons.

@kenchris
Contributor

kenchris commented Mar 3, 2017

If we call it DeviceOrientationSensor then it should be a fusion of accel and gyro, which most sensor hubs can now do in hardware and usually accomplish using a complementary filter, as it produces satisfactory results for common use-cases without much computation.

A fusion of magnetometer and gravity (gravity isolated from the accelerometer - also sometimes done in the sensor hub using a low-pass filter - Android exposes a specific Gravity sensor for that) should be called CompassSensor or similar.

We could also make it an argument to a generic orientation sensor, like say { type: "compass" | "device" | "gamepad" | "vr-headset" ... }

Sometimes for gamepads, you might actually not want the orientation to always be 100% fixed in one direction, but instead slowly move towards your current position - a fusion of gyro and accel will usually do that unless the fusion is done 100% precisely to avoid it (i.e. by choosing the right constants). I think I saw that Android exposes both, but I will have to check.
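A complementary filter of the kind mentioned above can be sketched in a few lines. This is an illustration of the technique only, not the fusion any platform actually specifies; it handles a single axis, taking a gyro rate (rad/s) and an accelerometer-derived angle as inputs:

```javascript
// One-axis complementary filter: trust the gyro at short time scales
// (integrate its rate over dt) and the accelerometer at long time scales.
function complementary(angle, gyroRate, accelAngle, dt, alpha = 0.98) {
  return alpha * (angle + gyroRate * dt) + (1 - alpha) * accelAngle;
}

// With the device still (gyro rate 0), the estimate drifts toward the
// accelerometer angle over repeated updates, filtering out gyro drift:
let angle = 0;
for (let i = 0; i < 200; i++) {
  angle = complementary(angle, 0, 1.0, 0.01);
}
// angle is now close to 1.0
```

The slow drift toward the current position described above is exactly this behavior: the (1 - alpha) accelerometer term gently pulls the estimate back.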

@tobie
Member

tobie commented Mar 3, 2017

DeviceOrientationSensor is a terrible name given there's already a DeviceOrientation Event on the platform, which everyone hates. Just imagine googling for info on this or searching Stack Overflow.

@kenchris my plan is to make a matrix of all of the different motion-based sensors, showing the data they provide, on which of the three primitive sensors they are based, their purpose, their advantages and disadvantages, and the platforms that support them. Basically, an improved version of this: https://github.com/w3c/sensors/blob/master/sensor-types.md

Ideally, we'll stick all of this data (minus the platform support bit, obviously) at the beginning of the motion sensor spec.

@kenchris
Contributor

kenchris commented Mar 3, 2017

The Android docs already have something like that.

@tobie
Member

tobie commented Mar 3, 2017

Yup, used some of it for the above matrix.

@tobie
Member

tobie commented Mar 3, 2017

We could also make it an argument to a generic orientation sensor, like say { type: "compass" | "device" | "gamepad" | "vr-headset" ... }

What's interesting is that the above four have massively different latency, accuracy and energy consumption characteristics and don't rely on the same underlying HW sensors.

The question is how should we expose that to the API consumer?

Basically, how do we strike the right balance between providing the right underlying primitives and making it easy for developers to use the APIs.

In the spirit of the extensible web manifesto, we'd lay bare the underlying structures on which these sensors are based, leaving it to JS libraries to offer more user-friendly domain-specific APIs. And, then only, pave the cowpath.

@kenchris
Contributor

kenchris commented Mar 3, 2017

From a user point of view (the user being a front-end dev or the like) - they will mostly care about the actual orientation values and know that the latency will be different between something connected and something external, but it is not like they will stop using a headset because of latency issues if what they care about is the headset orientation :-)

It is all about balance. If we say that these are super low level APIs and that libs need to be built around them to make the APIs consumable, well then we can very well get away with only having populate methods and nothing for the simple case, which you were arguing for :-)

But then again, I don't think having a low-levelish API makes it mutually exclusive for the API to be nice and for something to be built around it.

@tobie
Member

tobie commented Mar 3, 2017

From a user point of view (the user being a front-end dev or the like) - they will mostly care about the actual orientation values and know that the latency will be different between something connected and something external, but it is not like they will stop using a headset because of latency issues if what they care about is the headset orientation :-)

My understanding is that key players in the industry are worried about VR-induced motion sickness and thus would probably want to restrict usage on devices which don't meet certain latency requirements (motion sickness correlates closely with latency).

The game industry is accustomed to running a bunch of tests to figure out if HW requirements for particular settings are met. I imagine the same will be done for VR.

It is all about balance. If we say that these are super low level APIs and that libs need to be built around them to make the APIs consumable, well then we can very well get away with only having populate methods and nothing for the simple case, which you were arguing for :-)

I wasn't expecting the BYOB API to be synchronous: unless the implementation is willing to always store all samples between two frames, this API doesn't work for more than one sample (and supporting more than one is clearly a requirement).

Arguably we can have two BYOB APIs, but it might not be super API-consumer friendly to have one such API be sync (for 1 sample) and the other one async (for > 1 sample). But maybe it would be OK. I don't know.

If we go for a sync BYOB, we could also imagine that the Typed Array argument would be optional, so you could do:

[Constructor(...)]
interface VectorRotation : Sensor {
   Float32Array? read(optional Float32Array buffer);
};

Where read would either fill the buffer you passed it or create one on the fly, e.g.:

let s = new VectorRotation({
    format: "quaternions",
    pollingFrequency: 120
});
s.start();
let reading = s.read(); // null
// once the sensor is activated
reading = s.read(); // Float32Array of length 5? we need a timestamp somewhere :-/
let buffer = new Float32Array(VectorRotation.QUATERNION_SIZE);
s.read(buffer); // buffer is now filled up with one reading.
// etc.

For more than one sample, we could imagine setting the buffer size in the constructor,

let s = new VectorRotation({
    format: "quaternions",
    pollingFrequency: 120,
    bufferSize: 2 * VectorRotation.QUATERNION_SIZE
});

or through a dedicated method:

let s = new VectorRotation({
    format: "quaternions",
    pollingFrequency: 120
});
s.bufferSize = 2 * VectorRotation.QUATERNION_SIZE;

or even by passing a buffer upfront that's always overwritten (similar to the async version I had in mind):

let buffer = new Float32Array(4 * VectorRotation.QUATERNION_SIZE);
let s = new VectorRotation({
    format: "quaternions",
    pollingFrequency: 120,
    buffer: buffer
});
buffer === s.read(); // true

There's a lot to explore here, though. And some pretty important issues to consider, e.g. what happens when pollingFrequency < animation frame rate? Or when frames slow down so your buffer gets more samples than it can handle? Where do you put timestamps? Etc.

Half of the above are probably terrible ideas, but this area is certainly worth digging into seriously.

There's the potential to make something both powerful and user-friendly but also to screw it up badly and ship something that's over-engineered and crappy. (Which btw, I'm not suggesting any of the proposals are. It's just a general concern, not a specific one.)

@anssiko
Member

anssiko commented Mar 6, 2017

Editor's Draft now hosted at: https://w3c.github.io/orientation-sensor/

I'd suggest we split this issue (now at 71 comments) into smaller self-contained issues in https://github.com/w3c/orientation-sensor/issues

@pozdnyakov @kenchris @tobie can you help migrate the issues you care about to the w3c/orientation-sensor repo, and signal when done so we can close this mega issue.
