
Transformations

Manos Tsardoulias edited this page Feb 10, 2021 · 5 revisions

Streamsim ships with a simple built-in transformation library, through which one can get the absolute pose of any entity at run time. Some restrictions apply:

  • A robot can have devices that are not robots
  • A pan-tilt device can have devices that are not pan-tilts and are not robots
  • Currently all relative x, y poses are considered 0; thus every device is assumed to be at the center of its host.
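Because all relative x, y offsets are currently zero, composing transforms reduces to propagating the host's position and summing orientations. Below is a minimal sketch of general 2D pose composition (a hypothetical helper for illustration, not part of streamsim's API):

```python
import math

def compose(host, rel):
    """Compose a relative pose with its host's absolute 2D pose."""
    return {
        "x": host["x"] + rel["x"] * math.cos(host["theta"]) - rel["y"] * math.sin(host["theta"]),
        "y": host["y"] + rel["x"] * math.sin(host["theta"]) + rel["y"] * math.cos(host["theta"]),
        "theta": host["theta"] + rel["theta"],
    }

# Under streamsim's current restriction (relative x = y = 0), a mounted
# device inherits its host's position and only the orientations add up:
robot = {"x": 6.0, "y": 2.0, "theta": 0.5}
pan_tilt = {"x": 0, "y": 0, "theta": 0.4}
print(compose(robot, pan_tilt))
```

With non-zero offsets the same formula would rotate the offset into the host's frame first, which is why the general form is kept in the sketch.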

Calls

Device declaration:

This call is used internally by streamsim; you never have to use it yourself!

The RPC call:

redis::RPCClient <streamsim.tf.declare>

Template input:

        {
            "type": "env",
            "subtype": "pan_tilt",
            "pose": {"x": 0, "y": 1, "theta": 0.4},
            "base_topic": "XXX",
            "name": "pt2"
        }
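For reference, a declaration payload like the template above can be assembled and sanity-checked like any JSON document. The key set is taken from the template; the values are placeholders (a sketch, not a live call):

```python
import json

# Declaration payload mirroring the template above (values are placeholders).
declaration = {
    "type": "env",
    "subtype": "pan_tilt",
    "pose": {"x": 0, "y": 1, "theta": 0.4},
    "base_topic": "XXX",
    "name": "pt2",
}

# A pose must carry all three components before it can be declared.
assert set(declaration["pose"]) == {"x", "y", "theta"}
payload = json.dumps(declaration)
```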

Get declarations

In case you want to get all device declarations, you can call:

redis::RPCService <streamsim.tf.get_declarations>

An example output is:

                   ...
                   'range': None,
                   'subtype': 'light',
                   'type': 'env'},
                  {'base_topic': 'world.office.actuator.env.thermostat.thermostat_X.d_14',
                   'fov': None,
                   'host': None,
                   'host_type': None,
                   'name': 'thermostat_X',
                   'pose': {'theta': 0.0, 'x': 100, 'y': 100},
                   'range': None,
                   'subtype': 'thermostat',
                   'type': 'env'},
                  {'base_topic': 'world.office.sensor.audio.microphone.microphone_X.d_15',
                   'fov': None,
                   'host': None,
                   'host_type': None,
                   'name': 'microphone_X',
                   'pose': {'theta': 0.0, 'x': 100, 'y': 100},
                   'range': None,
                   'subtype': 'microphone',
                   'type': 'env'},
                  {'base_topic': 'world.office.actuator.env.humidifier.hum_X.d_16',
                   'fov': None,
                   'host': None,
                   'host_type': None,
                   'name': 'hum_X',
                   'pose': {'theta': 0.0, 'x': 100, 'y': 100},
                   'range': None,
                   'subtype': 'humidifier',
                   'type': 'env'},
                  {'base_topic': None,
                   'fov': None,
                   'host': None,
                   'host_type': None,
                   'name': 'human_0',
                   'pose': {'theta': None, 'x': 120, 'y': 150},
                   ...
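Each declaration record carries the same set of fields, so the returned list is straightforward to filter client-side. A sketch over entries shaped like the output above (sample data trimmed to the relevant keys, not a live call):

```python
# Sample entries shaped like streamsim's declaration records.
declarations = [
    {"name": "thermostat_X", "type": "env", "subtype": "thermostat",
     "pose": {"theta": 0.0, "x": 100, "y": 100}},
    {"name": "microphone_X", "type": "env", "subtype": "microphone",
     "pose": {"theta": 0.0, "x": 100, "y": 100}},
    {"name": "hum_X", "type": "env", "subtype": "humidifier",
     "pose": {"theta": 0.0, "x": 100, "y": 100}},
]

def by_subtype(decls, subtype):
    """Return the declarations matching a given subtype."""
    return [d for d in decls if d["subtype"] == subtype]

print([d["name"] for d in by_subtype(declarations, "humidifier")])  # ['hum_X']
```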

Get tf of a device

If you want to get the absolute pose (transformation) of any device, you can call the following RPC:

redis::RPCService <streamsim.tf.get_tf>

An example output is:

{'x': 6.0, 'y': 2.0, 'theta': -2.356194490192345}

Get affections of a sensor

If you want to check which devices or actors affect a sensor, based on their current poses and properties, call the following RPC:

redis::RPCService <streamsim.tf.get_affections>

An example input is:

{
  'name': 'humidity_X'
}

and the respective output is

{
  "hum_X": {
    "type": "humidifier",
    "info": {
      "humidity": 50
    },
    "distance": 1.4142135623730951,
    "range": 5
  },
  "water_12": {
    "type": "water",
    "info": {
      "humidity": 100
    },
    "distance": 0,
    "range": 5
  }
}
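The affections response already includes each affector's distance and range, so checking which ones actually reach the sensor is a simple filter. A sketch over the example response above (sample data, not a live call):

```python
# Affections shaped like the example response above.
affections = {
    "hum_X": {"type": "humidifier", "info": {"humidity": 50},
              "distance": 1.4142135623730951, "range": 5},
    "water_12": {"type": "water", "info": {"humidity": 100},
                 "distance": 0, "range": 5},
}

def in_range(affections):
    """Keep only affectors whose distance does not exceed their range."""
    return {name: a for name, a in affections.items()
            if a["distance"] <= a["range"]}

print(sorted(in_range(affections)))  # ['hum_X', 'water_12']
```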

Perform ideal sensor detections

This call simulates ideal detections: it bypasses the realistic operation of the sensors and simply returns the result that would be obtained if the sensors and algorithms operated flawlessly. The RPC is:

redis::RPCService <streamsim.tf.simulated_detection>

Example input is:

{
  'name': 'microphone_X', 
  'type': 'sound'
}
  • Detection types for microphone: sound, language, emotion, speech2text
  • Detection types for camera: face, qr, barcode, gender, age, motion, color, emotion
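The allowed detection types per sensor can be captured in a small lookup and used to validate a request before sending it. A sketch (the type lists come from the bullets above; the helper itself is hypothetical):

```python
# Detection types supported per sensor, per the lists above.
DETECTIONS = {
    "microphone": {"sound", "language", "emotion", "speech2text"},
    "camera": {"face", "qr", "barcode", "gender", "age",
               "motion", "color", "emotion"},
}

def valid_request(sensor, detection):
    """True if this sensor kind supports the requested detection type."""
    return detection in DETECTIONS.get(sensor, set())

print(valid_request("microphone", "sound"))  # True
print(valid_request("microphone", "face"))   # False
```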

Example output for camera/color detection:

{
  "result": true,
  "info": { "r": 0, "g": 255, "b": 0 },
  "frm": {
    "type": "color",
    "info": { "r": 0, "g": 255, "b": 0 },
    "distance": 2.5,
    "min_sensor_ang": 2.3617950976819686,
    "max_sensor_ang": 3.408992648878566,
    "actor_ang": 2.498091544796509
  }
}

As shown, the response contains three items:

  • result: True or False
  • info: The related information to the specific type of detection (e.g. the color for color detection)
  • frm: All available information from which the detection was performed
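The frm block explains why the detection succeeded: the actor's bearing falls inside the sensor's angular field of view. That check can be reproduced from the example values above (a sketch for illustration, not streamsim's internal code):

```python
# frm values taken from the camera/color detection example above.
frm = {
    "distance": 2.5,
    "min_sensor_ang": 2.3617950976819686,
    "max_sensor_ang": 3.408992648878566,
    "actor_ang": 2.498091544796509,
}

def actor_in_fov(frm):
    """True when the actor's bearing lies within the sensor's angular span."""
    return frm["min_sensor_ang"] <= frm["actor_ang"] <= frm["max_sensor_ang"]

print(actor_in_fov(frm))  # True
```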