
Releases: Visual-Behavior/aloception-oss

v0.6.0beta

06 Apr 13:33
Pre-release

What's Changed

This release comes with a Docker image based on PyTorch v1.13.1 and pytorch_lightning 1.9.3. The image is available on Docker Hub as visualbehaviorofficial/aloception-oss:cuda-11.3-pytorch1.13.1-lightning1.9.3

Docker + version + pip install

  • The Docker image comes with a default aloception user.
  • You can now check the version of the package you are using with aloscene.__version__, alonet.__version__, and alodataset.__version__. All of them are currently linked to the same version, v0.6.0beta (see the snippet below).
  • You can install the packages from pip.
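
A quick sanity check of the version attributes mentioned above (all three packages are expected to report v0.6.0beta):

import aloscene
import alonet
import alodataset

print(aloscene.__version__, alonet.__version__, alodataset.__version__)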

Docker with Aloception user

When running the new Docker image, it is recommended to map your home directory into the image, for example:

docker run -e LOCAL_USER_ID=$(id -u) -v /home/YOUR_USER/:/home/aloception/ visualbehaviorofficial/aloception-oss:cuda-11.3-pytorch1.13.1-lightning1.9.3

Pip Install

The setup.py now works. If you are not planning to change or update Aloception, you can install it from Git using the following command from the Docker image (it is not pre-installed by default):

pip install git+https://github.com/Visual-Behavior/aloception-oss.git@v0.6.0beta

If you are planning to change Aloception, you can install it in editable mode from the aloception-oss folder with the following command:

pip install -e .

Features & fixes

  • Fix bug 1: MetricsCallback and run_pl_training
  • The on_train_batch_end hook no longer requires dataloader_idx (see the sketch below).
  • The FitLoop object of pytorch-lightning no longer has the public property should_accumulate since version 1.5.0.
  • run_pl_training: pytorch-lightning changed the initialization method of Trainer, especially for multi-gpu training.
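
For reference, a minimal sketch of the updated hook shape (not alonet's actual MetricsCallback implementation):

from pytorch_lightning.callbacks import Callback

class MyMetricsCallback(Callback):
    # In recent pytorch-lightning versions, dataloader_idx is no longer
    # part of this hook's signature.
    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        pass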

  • New feature 1: Structured directories for logging and checkpoints during training
  • New feature 2: There is now a config file, alonet_config.json, created in ~/.aloception, which defines the default directories for saving logs and checkpoints during training. If the file does not exist, the user can create it during the first training (see the sketch after this list).
  • New feature 3: You can also use paths different from those in alonet_config.json by passing --log_save_dir path_to_dir and --cp_save_dir path_to_dir.
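
If you prefer to create the config file by hand, here is a hypothetical sketch; the exact key names are assumptions mirroring the --log_save_dir and --cp_save_dir flags above:

import json
import os

config_dir = os.path.expanduser("~/.aloception")
os.makedirs(config_dir, exist_ok=True)

# Assumed keys, mirroring the command-line flags.
config = {
    "log_save_dir": "/path/to/logs",
    "cp_save_dir": "/path/to/checkpoints",
}
with open(os.path.join(config_dir, "alonet_config.json"), "w") as f:
    json.dump(config, f, indent=4)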

Fix unit tests: mostly removed warnings & put back oriented boxes2D with CUDA (now automatically built into the Docker image)
Fix setup.py.



  • Fix bug: fix ZeroDivisionError in metrics.

  • New feature: add precision and recall.

  • Fix bug: depth.encode_absolute had a dimension bug in torch 1.13. #337

  • How to fix: remove unsqueeze in encode_absolute
  • Result after fixing:
  • Result after fixing
>>> from aloscene import Depth
>>> import torch
>>> depth = Depth(torch.zeros((1, 128, 128)), is_absolute=False)
>>> depth.encode_absolute()
tensor(
	scale=1
	shift=0
	distortion=1.0
	is_planar=True
	is_absolute=True
	projection=pinhole
	[[[100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         ...,
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.],
         [100000000., 100000000., 100000000.,  ..., 100000000.,
          100000000., 100000000.]]])
>>> depth.encode_absolute(keep_negative=True)
# identical output to the call above


Introducing base classes for datamodules and train pipelines (inspired by the BaseDataset class).
@thibo73800

  • New feature 1 : BaseDataModule class
    My motivation for this class is that I kept reusing code solutions from other projects, such as the arguments, the aug/no aug train_transform structure, etc. This created quite a bit of Ctrl+C/Ctrl+V which is undesirable. My view for this class is that in the future, when creating a DataModule for a project, we inherit from the BaseDataModule class and implement only the transforms and the setup. It acts as a wrapper to the Pytorch Lightning Datamodule class, to provide all aloception users with a common code base.

  • New feature 2 : BaseLightningModule

Same motivation, but for training pipelines. This time, the often-reused bits are again the arguments, the optimizers, the run functions, etc. When inheriting, the user needs to implement the model and the criterion. The user is of course free to write their own functions in the child class for more complex cases; a sketch follows.
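
A minimal sketch of the intended inheritance pattern; the import path and the method names to implement are assumptions, not the actual aloception API:

from alonet.common import BaseDataModule, BaseLightningModule  # import path assumed

class MyDataModule(BaseDataModule):
    def train_transform(self, frame):
        # Project-specific augmentations go here.
        return frame

    def setup(self, stage=None):
        # Instantiate the project's datasets here.
        pass

class MyTrainingPipeline(BaseLightningModule):
    def build_model(self):
        # Return the project's model (method name illustrative).
        pass

    def build_criterion(self):
        # Return the project's criterion (method name illustrative).
        pass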


Logs

Full Changelog: v0.5.1...v0.6.0beta

v0.5.1

13 Mar 15:07
a42ffd1

What's Changed

  • Fix bug 1: MetricsCallback and run_pl_training
  • The on_train_batch_end hook no longer requires dataloader_idx.
  • The FitLoop object of pytorch-lightning no longer has the public property should_accumulate since version 1.5.0.
  • run_pl_training: pytorch-lightning changed the initialization method of Trainer, especially for multi-gpu training.

v0.5.0

16 Feb 16:36
033f45c
  • Fix bug #243:
    AugTensors can now be called without logging the UserWarning.

Add the 'append_labels' method to 'BoundingBoxes3D'.

  • New feature 1: Description
from aloscene import BoundingBoxes3D, Labels

label = Labels([0])  # illustrative label
box3d = BoundingBoxes3D([[10, 10, 400, 80, 46, 18, 1]])
box3d.append_labels(label)

When label names exist, they are displayed next to the 2D bounding box instead of IDs.


  • New feature: #249 Dataset-from-directory iterator. Main use: calibrating TRT engines.
path1 = "PATH/TO/DIR/WITH/IMAGES1"
path2 = "PATH/TO/DIR/WITH/IMAGES2"

dataset = FromDirectoryDataset(dirs={"right": [path1, path1], "left": [path2, path2]}, slice=[0.2, 0.3])

# Will return a dictionary: {"right": img1, "left": img2}
sample1 = dataset[0]

  • Reduce the memory size required to export and run TRT engines.
  • Raise a RuntimeError when a precision is not optimized on a device.

  • Fix bug : #15
>>> frame = aloscene.Frame(np.random.uniform(0, 1, (3, 50, 100)), names=("C", "H", "W"))
>>> frame = aloscene.Frame.batch_list([frame, frame.clone()])
>>> frame = frame.temporal()
>>> print(f"names: {frame.names}\nshape: {frame.shape}")
names: ('B', 'T', 'C', 'H', 'W')
shape: torch.Size([2, 1, 3, 50, 100])

  • New feature 1: The sampler for the train loader can be constructed before calling the method:
sampler = torch.utils.data.RandomSampler(dataset, replacement=True)
loader = dataset.train_loader(sampler=sampler)
  • New feature 2: Sampler kwargs can be given to the train_loader method:
sampler = torch.utils.data.RandomSampler
loader = dataset.train_loader(sampler=sampler, sampler_kwargs={"replacement":True})

  • Check if the requested normalization is supported
  • Set the mean_std property to resnet_rgb_mean_std at instantiation when using normalization="resnet"

  • New feature 1:
    When manipulating aug tensors, _saved_names can accumulate None values, which could prevent proper concatenation. Note that this might not be an issue once we have a good merging policy for the different properties within aug tensors.

  • Fix bug #271: Duplicated function
  • Remove alonet.common.weights.vb_fodler
  • Add a create_if_not_found option to alonet.common.pl_helpers.vb_folder: if .aloception does not exist in home, mkdir is called.

  • Used keyword arguments (**kwargs) to allow different padding_mode & fill values
  • Updated the docstring

Closes #22


This PR improves the black square title displayed on views.
Issue: #12

  • Add a parameter to activate/deactivate the title (defaults to True)
    Test code (with add_title=False):
from aloscene import Frame
from alodataset import CocoBaseDataset

coco_dataset = CocoBaseDataset(sample=False, img_folder="test2017")
# Check if the regular getitem works
stuff = coco_dataset.getitem(0)
stuff.get_view(add_title=False).render()

# Check if the dataloader works
for f, frames in enumerate(coco_dataset.train_loader(batch_size=2)):
    frames = Frame.batch_list(frames)
    frames.get_view().render()
    if f > 1:
        break
  • Improve title rendering: use the existing put_adapative_cv2_text() to add the title in the Renderer class. Improve the function to adapt the text size to the frame size, decreasing the text size if the text is too long.
    Test code: same as above, but with stuff.get_view(add_title=True).render().


  • New feature 1 : Support for pytorch 1.13

The instance-method form of __torch_function__ was about to no longer be supported; switching to a classmethod was required. The current implementation seems to still be compatible with pytorch 1.10. Note that this change touches the most important/breakable part of the aug tensor pipeline.

Open discussion: should we update the doc to make pytorch 1.13 the default? I think not before checking for pytorch lightning support.

By the way: c8ed7f6b1cdfeeec369447517af7321349df1e25


All the named labels are displayed next to BoundingBoxes2D

  • Labels BoundingBoxes2D : Render Labels next to BoundingBoxes2D FIX #221
import numpy as np
from aloscene import Frame, BoundingBoxes2D, Labels

frame = Frame(np.zeros((3, 100, 500)), normalization="01")
label = Labels([0, 1, 0])
label2 = Labels([0, 0, 1], labels_names=["red", "green"])
box = BoundingBoxes2D([[100, 20, 400, 80], [200, 40, 400, 80], [100, 40, 300, 80]], boxes_format="xyxy", absolute=True, frame_size=frame.HW)

box.append_labels(label, name="label")
box.append_labels(label2, name="label2")

frame.append_boxes2d(box)

print(box)
frame.get_view().render()

  • First addition of issue #23: being able to pad the tensor to the next multiple of a given multiple
  • unittest

Minimal example

>>> x = aloscene.Frame(torch.rand(1, 10, 10), names=('C', 'H', 'W'))
>>> x.pad(multiple=8).shape
torch.Size([1, 16, 16])
>>> x.pad(multiple=10).shape
torch.Size([1, 10, 10])
>>> x.pad(multiple=32).shape
torch.Size([1, 32, 32])

Fix the merging of tensors so that torch.cat accepts a tuple of AugmentedTensor.


  • New feature : #260
# While overriding the exporter class
def __init__(self, dynamic: bool = False, **kwargs):
    if dynamic:
        # Keys are input names. Lists are the indexes of the axes to set as dynamic.
        self.dynamic_axes = {"input1": [1, 2, 3], "input2": [1, 2, 3]}
    # ....

  • New feature #269 : Flexible onnx version


  • New feature 1 #85: Now we can load a model directly from run_id without passing load_training and a Lightning module.
  • weights has the highest priority; if weights is None, loading the model from run_id is used.
  • We can choose to load the best checkpoint or the last checkpoint.
from alonet.common.weights import load_weights
from alonet.detr import DetrR50

model = DetrR50(num_classes=91, weights="detr-r50")

# load from downloaded weight
load_weights(model, weights="detr-r50")

# load from .pth file
load_weights(model, weights="~/.aloception/weights/detr-r50/detr-r50.pth")


# load from run_id
load_weights(model, run_id="your_run_id_here", project_run_id="your_project_run_id")

  • New feature 1 : Rotate frame around a custom center

Torchvision's Rotate transform supports passing a "center" argument to rotate around a given center (and not only around the image center). I added this functionality to our Rotate Alotransform.

from alodataset.coco_base_dataset import CocoBaseDataset
from alodataset.transforms import Rotate

coco_dataset = CocoBaseDataset(sample=True)
x = coco_dataset[0]
angle = 15.0
x = Rotate(angle, center=[650, 0])(x)
x.get_view().render()

  1. Fix getitem on AugmentedTensor with an augmented tensor as mask.
  2. Reset names only if tensors aren't linearized (like in the bbox unit test); otherwise declass to a classic tensor.

  • Fix bug 309: Wrong display of 3D boxes on padded images
    The error was in the camera_calib code, where two variables were misplaced.

New feature 1: Added wandb hyperparameter logging
The hyperparameters are now logged by default in wandb. The config of the experiment can be viewed in wandb => Overview => Config.


  • mean_std_norm no longer uses resnet normalization and can be used for custom normalization:

Before, mean_std_norm used _resnet_mean_std by default, so you could only use resnet normalization. Now you can use any custom normalization.

import torch
import aloscene

x = torch.rand(3, 600, 600)
x = aloscene.Frame(x, mean_std=((0.333, 0.333, 0.333), (0.333, 0.333, 0.333)))
print("normalization of x:", x.normalization)
print("mean_std of x:", x.mean_std)

x = x.mean_std_norm(mean=(0.440, 0.220, 0.880), std=(0.333, 0.333, 0.333), name="my_norm")
print("normalization of x:", x.normalization)
print("mean_std of x:", x.mean_std)

Output:

normalization of x: 255
mean_std of x: ((0.333, 0.333, 0.333), (0.333, 0.333, 0.333))
normalization of x: my_norm
mean_std of x: ((0.44, 0.22, 0.88), (0.333, 0.333, 0.333))
  • Conversion from mean_std_norm to minmax_sym and from minmax_sym to mean_std_norm
    Added this conversion, which raised an Exception before.
import torch
import aloscene

x = torch.rand(3, 600, 600)
x = aloscene.Frame(x, mean_std=((0.333, 0.333, 0.333), (0.333, 0.333, 0.333)))
x = x.norm_minmax_sym()
print("normalization of x:", x.normalization)
print("mean_std of x:", x.mean_std)
x = x.mean_std_norm(mean=(0.333, 0.333, 0.333), std=(0.333, 0.333, 0.333), name="custom")  # used to raise an Exception
print("normalization of x:", x.normalization)
print("mean_std of x:", x.mean_std)

Output:

normalization of x: minmax_sym
mean_std of x: None
normalization of x: custom
mean_std of x: ((0.333, 0.333, 0.333), (0.333, 0.333, 0.333))


  • Docker : New docker image with pytorch 1.13 support & pytorch lightning 1.9

  • Changed back transform to p=1.0: transformations used to be applied automatically on all frames. Last month we introduced a new parameter to ran...


v0.4.0

16 Jan 08:52
2aef5ad

What's Changed


New feature: a create_calibrator function, which creates a calibrator from a name and kwargs, has been added to alonet.torch2trt to optimize imports in TensorRT scripts.

from alonet.torch2trt import create_calibrator, DataBatchStreamer

cache_file = "calib.bin"
data_streamer = DataBatchStreamer(...)

calibrator = create_calibrator("minmax", data_streamer, cache_file)

by @Data-Iab in #211


  • Fix bug: aloscene.read_image is now supported on the Jetson NX.

by @Data-Iab in #209


  • Fix bug: Change the way an augmented tensor is represented: a clearer separation between properties, with a new separator and without unnecessary ones.

by @Ardorax in #218


  • Fix bug: The default value (10) for calibration batches did not allow using the whole calibration dataset. The default value has been changed to None.

by @Data-Iab in #216


  • Fix bug: Changed all the links to the documentation following the renaming of the repository.

by @Ardorax in #223


  • New feature : Added a setup.py to facilitate the installation of aloception.

by @tflahaul in #220


  • New feature : Kitti Dataset (Stereo, Flow, Scene Flow, Depth, Odometry, Object, Tracking, Road, Semantics)

Kitti Depth : How to use Kitti Depth

# Imports assumed for this snippet
from alodataset import KittiDepth
from aloscene import Frame

date = "2011_09_26"
idsOfDrives = [
    "0001",  # sample from training subset
    "0002",  # sample from validation subset
]
custom_drives = {date: idsOfDrives}
kitti_ds = KittiDepth(
    subset="all",
    return_depth=True,
    custom_drives=custom_drives,
)

for f, frames in enumerate(kitti_ds.train_loader(batch_size=2)):
    frames = Frame.batch_list(frames)

Kitti Semantic : The semantic class

dataset = KittiSemanticDataset()
obj = dataset.getitem(0)
obj.get_view().render()

How to use the remaining tasks of the dataset. Dataset class list: KittiStereoFlow2012, KittiStereoFlowSFlow2015, KittiOdometryDataset, KittiObjectDataset, KittiTrackingDataset, KittiRoadDataset

dataset = DATASET_CLASS(right_frame=False)
obj = dataset.getitem(0)
obj["left"].get_view().render()

Scene Flow dimensions: fixed an error with the shape of the occlusion mask.

by @Ardorax in #213


  • Fix bug: Fix the calibrator import issue. All TensorRT and prod packages are now optional.

by @thibo73800 in #215


Support for the kumler_bauer projection in coords2rtheta

  • New feature : Add support for the kumler_bauer projection in coords2rtheta
coords2rtheta(..., distortion=(0.25, 0.45), projection="kumler_bauer")
coords2rtheta(..., distortion=0.25, projection="equidistant") # API doesn't change for other projections

by @tflahaul in #227


  • New feature : Add WoodScape dataset.

WoodScapeDataset:

from alodataset import WoodScapeDataset

woodscape = WoodScapeDataset(
    labels=[],
    cameras=[],
    fragment=1.,
)
frame = woodscape[222]
frame.get_view().render()

WoodScapeSplitDataset: the WoodScape dataset with train and validation fractions.

from alodataset import WoodScapeSplitDataset, Split

woodscapeTrain = WoodScapeSplitDataset(split=Split.TRAIN)
frame = woodscapeTrain[222]
frame.get_view().render()

by @Data-Iab in #226


  • Fix bug: kumler-bauer projection support for aloscene.Depth
  • as_planar, as_euclidean: assertion error because "kumler_bauer" was missing from the verification condition.
  • as_points3d: the calculation of the distorted focal length for kumler_bauer was missing.

by @anhtu293 in #235


  • New feature: Better handling of the distortion coefficient for the equidistant projection: both float and list are accepted.
import torch
from aloscene import Depth

x = torch.rand(size=(1, 1, 20, 20))
depth1 = Depth(x, projection="equidistant", distortion=[0.5])
depth2 = Depth(x, projection="equidistant", distortion=0.5)

by @anhtu293 in #238


  • New feature : 3 different implementations of focus blur augmentation
from alodataset.transforms import RandomFocusBlur, RandomFocusBlurV2, RandomFocusBlurV3

import aloscene
import torch

frame = aloscene.Frame(torch.rand((3, 300, 300)))
blured_frame1 = RandomFocusBlur()(frame)
blured_frame2 = RandomFocusBlurV2()(frame)
blured_frame3 = RandomFocusBlurV3()(frame)
  • New feature : Motion blur augmentation from optical flow
## Motion blur from RAFT flow
from alodataset.transforms import RandomFlowMotionBlur  # import path assumed
from alonet.raft.raft import RAFT

flow_model = RAFT(weights="raft-things")
flow_model = flow_model.eval()

frame_t0_t1 = aloscene.Frame(torch.ones((2, 3, 300, 300)), names=tuple("TCHW"))
frame_t0 = frame_t0_t1[0]
frame_t1 = frame_t0_t1[1]

blured_t1 = RandomFlowMotionBlur(flow_model=flow_model)(frame_t1, p_frame=frame_t0)
blured_t1.get_view().render()

## Motion blur from ground truth optical flow
flow = aloscene.Flow(torch.ones((2, 300, 300)))
blured_t1 = RandomFlowMotionBlur()(frame_t1, flow=flow)
blured_t1.get_view().render()
  • New feature : Random corner masking augmentation
from alodataset.transforms import RandomCornersMask
import aloscene
import torch

frame = aloscene.Frame(torch.ones((3, 300, 300)))
randomly_masked_frame = RandomCornersMask()(frame)
  • Fix bug : CameraIntrinsic initialization with a shape of 4x4 is now possible using __init__.

by @Data-Iab in #314


  • Fix bug: Fix DETR export to ONNX & TRT

by @Data-Iab in #315


  • New feature : Added title to frames displayed with get_view()

Added a "title" argument to the get_view() method to be able to directly input a title for your display.
frames.get_view(title="test").render()

by @Dee61298 in #310


Full Changelog: v0.3.0...v0.4.0

v0.3.0

16 Aug 06:00
e58b3e8

What's Changed

Features


"The shortest distance between two points is a straight line." - Archimedes

As Archimedes said, knowing the distance (the straight line) between the camera and a point is as important as knowing the planar depth. Therefore, it's convenient to have methods that can convert between the two.

What's new?

  • Handle negative points in encode_absolute: for wide-FoV cameras (FoV > 180°), it's possible to have points whose planar depth is smaller than 0 (points behind the camera). To keep these points instead of clipping them to 0, pass keep_negative=True as an argument.
  • Depth to distance, as_distance(): converts depth to distance. Only pinhole cameras and linear equidistant cameras are supported at this time.
  • Distance to depth, as_depth(): converts distance to depth. Only pinhole cameras and linear equidistant cameras are supported at this time.
  • It is now possible to create a tensor of Distance by passing is_distance=True at initialization.
  • Support functions in depth_utils.

Update

  • Changed the terminology to avoid confusion: "euclidean depth" for distance and "planar depth" for the usual depth.
  • as_distance() becomes as_euclidean()
  • as_depth() becomes as_planar()

Archimedes's quote now becomes: The shortest "euclidean depth" between two points is a straight line.


New feature

  • Add projection and distortion as new properties of SpatialAugmentedTensor so that other tensor types can inherit them. Two projection models are supported: pinhole and equidistant. The default values are pinhole and 1.0 for distortion, so nothing changes at initialization when working on a "pinhole" image. Only aloscene.Depth supports distortion and the equidistant projection at this time.
  • Depth.as_points3d now supports the equidistant model with distortion. If no projection model and distortion are specified in the arguments, as_points3d uses the projection and distortion properties.
  • Depth.as_planar and Depth.as_euclidean now use the projection and distortion properties if no projection model and distortion are specified in the arguments.
  • Depth.__get_view__ now has a color legend if legend is set to True.

  • New 💥 :

    • TensorRt engines can now be built with int8 precision using Post Training Quantization.
    • 4 calibrators are available for quantization : MinMaxCalibrator, LegacyCalibrator, EntropyCalibrator and EntropyCalibrator2.
    • Added a QuantizedModel interface to convert model to quantized model for Training Aware Quantization.
  • Fixed 🔧 :

    • The adapt graph option is removed; we now simply adapt the graph once it's exported from torch to ONNX.

New ⭐ :

  • profiling_verbosity option is added to the TRTEngineBuilder to better inspect the details of each node when calling the tensorrt.EngineInspector
  • Some quantization related arguments are added to the BaseTRTExporter.

  • RandomDownScale: a transform to randomly downscale between the original and a minimum frame size
  • RandomDownScaleCrop: a composed transform to randomly downscale, then crop

  • add cuda shared memory for recurrent engines by @Data-Iab in #186

New 🌱

  • An engine's inputs/outputs can share the same space on the GPU for faster execution. Hosts with shared memory can be retrieved with the outputs_to_cpu argument and can be updated using the inputs_to_gpu argument.

Dynamic Cropping

  • Possibility to crop an image to a smaller fixed-size image at the position we want. The crop position can be passed via the center argument, which can be float or int.
  • If the crop is outside the image borders, an error is triggered.

New ⭐ :

  • Depth evaluation metrics are added to alonet metrics.

  • CocoDetectionDataset can now use a given ann_file when loaded
  • CocoPanopticDataset can now use ignore_classes to ignore some classes when loading the panoptic anns
  • In DetrCriterion, interpolation is an option that can be changed with upscale_interpolate
  • Lvis Dataset, based on CocoDetectionDataset with a different ann file (see the sketch below)
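
A hedged sketch of these dataset options; the argument names come from the notes above, but the full constructor signatures are assumptions:

from alodataset import CocoDetectionDataset, CocoPanopticDataset

# Load detection annotations from a custom ann_file (paths illustrative).
coco = CocoDetectionDataset(img_folder="test2017", ann_file="path/to/annotations.json")

# Ignore some classes when loading the panoptic anns (class ids illustrative).
panoptic = CocoPanopticDataset(ignore_classes=[0, 12])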

  • allow the user to completely specify the grid view by @jsalotti in #201
# Create three gray frames, display them on two rows (2 on first rows, 1 on 2nd row)
import numpy as np
import aloscene
arrays = [np.full((3, 600, 650), 100), np.full((3, 500, 500), 50), np.full((3, 500, 800), 200)]
frames = [aloscene.Frame(arr) for arr in arrays]
views = [[frames[0].get_view(), frames[1].get_view()], [frames[2].get_view()]]
aloscene.render(views, renderer="matplotlib")

Create a scene flow by calling the class with a file path, a tensor, or a ndarray.

If you have the optical flow, the depth at times T and T + 1, and the camera intrinsics, you can create a scene flow with the class method from_optical_flow. It handles the creation of the occlusion mask if some parameters have one. A minimal sketch follows.
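
A minimal sketch under assumptions: the SceneFlow class name and its aloscene location are inferred from the append_scene_flow method mentioned below, and only creation from a tensor is shown:

import torch
from aloscene import SceneFlow  # class name and location assumed

# Scene flow created from a tensor (shape is illustrative).
scene_flow = SceneFlow(torch.zeros(3, 128, 128))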


  • GitHub action that automatically launches the unit tests when there is a commit or pull request on the master branch

  • Scene flow in frame by @Ardorax in #208
    New method append_scene_flow in the Frame class.

Fix

  • GLOBAL_COLOR_SET_CLASS will automatically adjust its size to give a random color to each object class


Full Changelog: v0.2.1...v0.3.0

v0.2.1

15 Apr 11:00
7a5b346

What's Changed


  • fix tracing assertion by @Data-Iab in #166
    Check if the tracing attribute exists before checking whether it's set to True.

  • camera calib by @thibo73800 in #168
    Add a new method for getting the distance from one pose to another:
pose.distance_with(other_pos)

Set the default names of extrinsic to (None, None)


  • Make inverse False by default when creating a Depth tensor.
  • scale and shift are not required; they're optional.

Full Changelog: v0.2.0...v0.2.1

v0.2.0

12 Apr 09:55
8fc86bc

What's Changed



BaseTRTExporter can now be created from a None model. This is useful if one wants to export only from an onnx file.


Two methods added to Depth:

encode_inverse: inverts depth with a given scale and shift.
encode_absolute: undoes the encode_inverse changes.

One method added to AugTensor:

to_squeezed_numpy: as its name indicates, converts to a squeezed numpy array. A usage sketch follows.
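
A hedged usage sketch of the three methods above; per the v0.2.1 notes above, scale and shift are optional, so defaults are assumed here:

import torch
from aloscene import Depth

depth = Depth(torch.rand(1, 128, 128))
inverse = depth.encode_inverse()      # invert depth with (optional) scale and shift
restored = inverse.encode_absolute()  # undo encode_inverse
array = restored.to_squeezed_numpy()  # convert to a squeezed numpy array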


Fix: when using load_training without loading the common argparse, no_run_id was used in load_training. A default value is now used if the value is not set in the args.


The width and height ratios were not exact in the resize method of the intrinsic matrix.


In this merge request:

It is now possible to do

tensor.temporal(dim=1) # where dim can be 0 or 1

and

tensor.batch(dim=1) # where dim can be 0 or 1

TODO: Check the unit tests again to make sure that everything is correct.


  • Noisy aug, pose update, depth abs, render, batch_list by @jsalotti in #165

Add a mergeable pose label to the Frame object.

It can be used as such

P = aloscene.Pose(cam_pos)

Pose directly inherits from CameraExtrinsic but usually refers to the global world coordinates.

Fix the noisy pose to propagate the normalization and to use the device properly.

Add aloscene.render()

You can now directly render a list of views using aloscene.render()

aloscene.render(views)

Here is an example of adding views and recording a video:

views = []
# Run DFM on side cameras

for frames in data_loader:

    # Build a list of views
    for frames_side in frames:
        output = model.inference(model(frames_side))
        views.append(output.get_view())

    # Render the list
    aloscene.render(views, record_file="model_outputs.mp4")

# Save the final video
aloscene.save_renderer()

batch list from aloscene

Instead of doing

SpatialAugmentedTensor.batch_list(tensors)

or

tensors[0].batch_list(tensors)

You can now do:

aloscene.batch_list(tensors)

Compute the translation between two poses/extrinsics

ref.pose.translation_with(src.pose)


Full Changelog: v0.1.0...v0.2.0

v0.1.0

18 Mar 16:19
ea69d95

What's Changed


  • add a skip_adapt_graph option to not adapt the graph before exporting to TensorRT.
  • Fix an issue when calling TRTExecutor() without an engine; it's now TRTExecutor(stream=cuda.Stream())
  • Automatically adapt the graph by default: handle clip operations + simplify the onnx graph. It is no longer mandatory to override this method. This change will not affect the current trt exporters since the adapt_graph method is supposed to be overridden.

Full Changelog: v0.0.1...v0.1.0

v0.0.1

10 Mar 09:25
cc3fc33

What's Changed


Full Changelog: https://github.com/Visual-Behavior/aloception/commits/v0.0.1