
Compatibility with TX2 and Xavier? #197

Closed
LukasHedegaard opened this issue Jan 10, 2022 · 2 comments
Labels
question Further information is requested

Comments

@LukasHedegaard
Collaborator

Does the OpenDR library require full compatibility with TX2 and Xavier?
For instance, most open-source dataloaders for the Kinetics activity recognition dataset depend on `av`, as does the OpenDR implementation. However, `av` cannot be installed on TX2.

The question is whether the `av` dependency should be removed entirely (and the dataloader rewritten), or whether it is acceptable for some parts of the toolkit to be usable on embedded hardware while others (such as a dataloader for a dataset that would never fit on TX2 disk space) are not.
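For reference, a common middle ground is to make the `av` import optional so that the rest of the toolkit still loads on devices where it is unavailable. This is a minimal sketch, not the current OpenDR code; `KineticsDataset` is a stand-in name for the dataloader under discussion:

```python
# Hypothetical sketch: degrade gracefully when `av` is unavailable (e.g., on TX2).
try:
    import av  # PyAV, needed only for video decoding in the dataloader
    _HAS_AV = True
except ImportError:
    _HAS_AV = False


class KineticsDataset:
    def __init__(self, root: str):
        # Fail with a clear message only when the av-dependent tool is used,
        # instead of breaking the whole toolkit at import time.
        if not _HAS_AV:
            raise RuntimeError(
                "The Kinetics dataloader requires the `av` package, "
                "which cannot be installed on this platform."
            )
        self.root = root
```

With this pattern, inference tools remain importable on TX2, while only the dataloader itself refuses to run.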

LukasHedegaard added the question label Jan 10, 2022
@passalis
Collaborator

passalis commented Jan 10, 2022

@LukasHedegaard thanks for highlighting this.

I think we do not need to make the training tools fully available on TX2 and Xavier. In most practical scenarios I can think of right now (and that we have described as supported use cases), only inference will be needed on these platforms, apart from a few very specific tools where part of the training pipeline might be needed - most probably only face recognition, where we might want to rebuild the database. In any case, these devices are designed and optimized for inference, and they are too weak for any practical training on large-scale datasets (which would not even fit on their disks).

So, having inference work correctly on these devices is indeed critical. Having training work is an added bonus (e.g., for niche applications), but it is most probably not required for any practical application.

I think a critical question here is how to correctly separate the inference-only dependencies needed on embedded devices from the generic training/inference dependencies. In #195 the requirements are installed manually, but perhaps we can automate this by adding annotations or by keeping separate dependencies.ini files.
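As one illustration of the dependencies.ini idea (a minimal sketch, not the actual OpenDR format: the `[inference]`/`[training]` section names, keys, and package versions here are assumptions), an install helper could pick only the section that matches the target device:

```python
# Hypothetical sketch: select dependencies per target from a split dependencies.ini.
import configparser
import subprocess
import sys

# Illustrative file contents; section names and packages are assumptions.
EXAMPLE_INI = """
[inference]
python = torch==1.9.0
         numpy>=1.19

[training]
python = av==8.0.3
         pytorch-lightning==1.2.3
"""


def install(section: str) -> None:
    """Install only the packages listed under the given section."""
    config = configparser.ConfigParser()
    config.read_string(EXAMPLE_INI)
    packages = config[section]["python"].split()
    subprocess.check_call([sys.executable, "-m", "pip", "install", *packages])


# On an embedded device, only the inference dependencies would be installed:
# install("inference")
```

This would keep `av` (and other training-only packages) out of the install path on TX2 and Xavier without rewriting the dataloaders.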

@passalis
Collaborator

passalis commented Jul 3, 2023

Separate images are now provided on Docker Hub for the different embedded platforms.

passalis closed this as completed Jul 3, 2023