6 changes: 6 additions & 0 deletions .gitignore
@@ -0,0 +1,6 @@
__pycache__/
*.py[cod]
*$py.class
*.egg-info/
build
*.jbenc
13 changes: 13 additions & 0 deletions LICENSE
@@ -0,0 +1,13 @@
Copyright 2025 Scale AI, Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
64 changes: 62 additions & 2 deletions README.md
@@ -1,2 +1,62 @@
# sensor-fusion-io-py
Open source sensor-fusion-io-py package
# sensor-fusion-io

SDK for working with sensor fusion scenes

## Deployment

Bumping the version at packages/sensor-fusion-io/py/pyproject.toml is required for the package to be published by the CircleCI workflow. Otherwise, the publishing step will succeed, but the upload itself will be skipped:

```
Publishing scale_sensor_fusion_io (0.4.8) to scale-pypi
- Uploading scale_sensor_fusion_io-0.4.8.tar.gz File exists. Skipping
```
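
For example, the field to bump looks something like this (a sketch assuming Poetry-style metadata, which the `Publishing ... to scale-pypi` log line above suggests; the version number shown is illustrative):

```
[tool.poetry]
name = "scale_sensor_fusion_io"
version = "0.4.9"  # bump this so the upload actually runs
```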

### Publishing to scale-pypi

Handled by CI/CD.

### Publishing to pypi

1. Create an account on [pypi](https://pypi.org/account/register/)
2. Ask a maintainer to add you to this package's project.
3. Create a token (see [here](https://packaging.python.org/en/latest/guides/distributing-packages-using-setuptools/#create-an-account) for more details)
4. Add the token info to $HOME/.pypirc:
```
[pypi]
username = __token__
password = <actual token without quotes>
```

5. Install dependencies:
```
python3 -m pip install twine
python3 -m pip install build
```

6. Build dist:
```
python3 -m build --sdist
```

7. Upload to pypi:
```
twine upload dist/*
```

# FAQ

## Resulting scene file is too large

For scenes that span a large timeframe, the resulting .sfs file can grow to multiple gigabytes, which is not ideal for loading into LidarLite.

### Video encoding

One easy way to reduce scene size is to encode camera content as video, since video content can be compressed more effectively. The tradeoff is potentially reduced image quality, but for labeling 3D scenes this is often sufficient.

See utils/video_helpers/ for helper functions.

### Downsample point clouds

Another option is to downsample lidar point clouds. If your scene is used primarily for cuboid annotation, we recommend voxel downsampling using voxel sizes of at most 20mm.

A good heuristic for efficient loading and labeling is to have a scene contain no more than 100,000 points.
117 changes: 117 additions & 0 deletions docs/README.md
@@ -0,0 +1,117 @@
# sensor-fusion-io

SDK for working with sensor fusion scenes

## Installation

```
pip install scale-sensor-fusion-io
```

## Requirements

The minimum supported Python version is:
- 3.10 for >= 0.5.0
- 3.8 for < 0.5.0

# Code samples

## End-to-End Example From PandaSet

See [this Jupyter notebook](examples/pandas_to_sfs_conversion.ipynb) for an end-to-end example of converting a PandaSet scene to SFS.

## Constructing an SFS scene

### PosePath Dataframe
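
As a hypothetical sketch (the `PosePath` constructor and the column names used here are assumptions, not the confirmed API), a pose path can be thought of as a timestamped DataFrame of positions and orientations:

```
import pandas as pd
import scale_sensor_fusion_io as sfio

# Hypothetical column layout: timestamp plus position and quaternion per pose.
poses = pd.DataFrame({
    "timestamp": [0, 100_000, 200_000],  # assumption: microseconds
    "x": [0.0, 0.5, 1.0],
    "y": [0.0, 0.0, 0.0],
    "z": [0.0, 0.0, 0.0],
    "qx": [0.0, 0.0, 0.0],
    "qy": [0.0, 0.0, 0.0],
    "qz": [0.0, 0.0, 0.0],
    "qw": [1.0, 1.0, 1.0],
})
pose_path = sfio.PosePath(poses)  # assumption: PosePath wraps a DataFrame
```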

## Encoding an SFS scene

```
import scale_sensor_fusion_io as sfio
from scale_sensor_fusion_io.model_converters import to_scene_spec_sfs
from scale_json_binary import write_file

scene = sfio.Scene()  # your scene here
scene_sfs = to_scene_spec_sfs(scene)

write_file('~/scene.sfs', scene_sfs)
```

## Loading an SFS file

There is an SFSLoader class that provides helper functions for loading a scene from a URL. There are a few variations of the loading function, depending on your use case.

```
from scale_sensor_fusion_io.loaders import SceneLoader, SFSLoader

scene_url = "~/scene.sfs"

raw_scene = SFSLoader(scene_url).load_unsafe()  # scene is a raw dict; not recommended, but possible
sfs_scene = SFSLoader(scene_url).load_as_sfs()  # scene is SFS.Scene
scene = SceneLoader(scene_url).load()           # scene is models.Scene
```

## Validating SFS

Before you upload a scene for task creation, you'll want to validate that your SFS scene is well formed. You can do this in a variety of ways.

### Validating scene object

If you're working with the `models.Scene` object, you can use the `validate_scene` function available under `scale_sensor_fusion_io.models.validation`:

```
import pprint
from dataclasses import asdict

import scale_sensor_fusion_io as sfio
from scale_sensor_fusion_io.models.validation import validate_scene

pp = pprint.PrettyPrinter(depth=6)

scene = sfio.Scene()  # your scene here

errors = validate_scene(scene)
if errors:
    pp.pprint(asdict(errors))
else:
    print("Scene validated successfully")
```

### Validating from url

If you've already generated a .sfs file, you can also validate that it is well formed:

```
import pprint
from dataclasses import asdict

from scale_sensor_fusion_io.validation import parse_and_validate_scene
from scale_json_binary import read_file

pp = pprint.PrettyPrinter(depth=6)

scene_url = "your_scene.sfs"

raw_data = read_file(scene_url)
result = parse_and_validate_scene(raw_data)

if not result.success:
    pp.pprint(asdict(result))
else:
    print("Scene parsed and validated successfully")
```

# FAQ

## Resulting scene file is too large

For scenes that span a large timeframe, the resulting .sfs file can grow to multiple gigabytes, which is not ideal for loading into LidarLite.

### Video encoding

One easy way to reduce scene size is to encode camera content as video, since video content can be compressed more effectively. The tradeoff is potentially reduced image quality, but for labeling 3D scenes this is often sufficient.

See utils/generate_video.py for helper functions.
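
As an illustration independent of those helpers, here is a minimal sketch that encodes a directory of camera frames into an H.264 video with ffmpeg; the frame naming pattern, frame rate, and CRF value are assumptions to adapt to your data:

```
import subprocess

# Assumes frames are saved as frames/frame_0000.png, frames/frame_0001.png, ...
# H.264 at a moderate CRF keeps files small while preserving enough image
# quality for 3D scene labeling; lower -crf values mean higher quality.
subprocess.run(
    [
        "ffmpeg",
        "-framerate", "10",             # camera capture rate
        "-i", "frames/frame_%04d.png",  # input frame pattern
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",          # widely compatible pixel format
        "-crf", "23",
        "camera.mp4",
    ],
    check=True,
)
```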

### Downsample point clouds

Another option is to downsample lidar point clouds. If your scene is used primarily for cuboid annotation, we recommend voxel downsampling using voxel sizes of at most 20mm.

A good heuristic for efficient loading and labeling is to have a scene contain no more than 100,000 points.
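
As a sketch of such voxel downsampling using Open3D (an assumption; the package may provide its own helpers), note that 20mm corresponds to a voxel size of 0.02 in meters:

```
import numpy as np
import open3d as o3d

# Stand-in for your lidar points, as an (N, 3) array in meters.
points = np.random.rand(500_000, 3) * 50

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# 20mm voxels: all points inside each 0.02m cube collapse into one point.
downsampled = pcd.voxel_down_sample(voxel_size=0.02)
print(f"{len(pcd.points)} -> {len(downsampled.points)} points")
```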