
fluid.ampslice~ doesn't need a @latency attrui #209

Open
rconstanzo opened this issue May 13, 2022 · 4 comments

@rconstanzo
While writing a comment for #208, I noticed that fluid.ampslice~ has and reports a @latency, which, as far as I know, it neither has nor can produce.

Is this a leftover from sharing a codebase with fluid.ampgate~, which can induce latency when using @lookahead, or is it just for parity with other RT objects (the ones that use FFT)?

@jamesb93
Member

jamesb93 commented Jun 6, 2022

I assume it is to do with the fact that all realtime objects expose a latency calculation, whether it's 0 or not. It seems like more effort than it is worth to modify the framework to remove information that is, technically, not untrue.

@jamesb93 jamesb93 closed this as completed Jun 6, 2022
@tremblap tremblap reopened this Jun 7, 2022
@tremblap
Member

tremblap commented Jun 7, 2022

I think we should leave this open - there could be strange avenues to explore for reporting a latency between the 2 ramps, to do with the fastest the envelope could ever reach the threshold. That needs investigating, but it is messy, so it should stand as a proposed improvement.

@rconstanzo
Author

That would be quite cool, though at that point it turns into more of a "perceptual latency"-type thing rather than a hard-limit latency, which is what I understand @latency to report. E.g. you may use the value you get from @latency to set latency compensation or a pre-delay elsewhere, which would probably lead to unintended results.

@tremblap
Member

tremblap commented Jun 8, 2022

there can't be a perceptual one - it is signal-dependent. but we can try to guess the absolute minimum latency an optimal signal would take, though that would change with the threshold and envelope settings. quite a nightmare, and I'm not certain how usable that makes it, so we stick to 0 for now.
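The "absolute minimum latency" idea above can be illustrated with a back-of-the-envelope calculation. This is a sketch, not FluCoMa code: it assumes a simple one-pole envelope follower and a conventional time-constant coefficient, and asks how many samples a full-scale step input (the "optimal signal") needs to cross a given threshold:

```python
import math

def min_latency_samples(ramp_up_samples: int, threshold: float) -> int:
    """Smallest number of samples a one-pole envelope follower needs for a
    full-scale step input to cross `threshold` (0 < threshold < 1).

    Assumes the follower y[n] = y[n-1] + alpha * (x[n] - y[n-1]) with a
    coefficient chosen so the response reaches ~63% of a step target in
    `ramp_up_samples` samples (a common time-constant convention).
    """
    alpha = 1.0 - math.exp(-1.0 / ramp_up_samples)
    # For a unit step, y[n] = 1 - (1 - alpha)**n; solve y[n] >= threshold for n.
    return math.ceil(math.log(1.0 - threshold) / math.log(1.0 - alpha))
```

As the comment predicts, any such figure moves with the settings: a higher threshold or a slower ramp-up pushes the minimum crossing time out, so there is no single number the object could honestly report as latency.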


3 participants