Feedback on new camera trigger API proposal (Chapter 2) #226

Closed
henrypinkard opened this issue Jul 27, 2022 · 9 comments

Comments

@henrypinkard
Member

henrypinkard commented Jul 27, 2022

A draft of a new camera triggering API has been created here. This issue is for providing feedback on that proposal.

Will it map to all types of cameras used in MM? Are there additions/changes that should be made?

@tomhanak
Contributor

tomhanak commented Aug 5, 2022

I'm collecting feedback from the PVCAM adapter point of view. But before posting anything, I'd like to clarify how exactly the new API will be integrated into the existing code structure.

If I get it right, the new API calls will be added to the CMMCore class and mirrored 1:1 in the MM::Camera class. So all these new functions will be called by BeanShell scripts or MMCoreJ or plugins above it, as nicely visualized by Mark here.
It means the device adapters won't call the new API. Instead, each adapter will have to override these new virtual functions from MM::Camera and "translate" them to e.g. PVCAM API calls...

For example:

You mentioned in the Backwards compatibility chapter that CMMCore::stopSequenceAcquisition() will call camera.SendBurstEndTrigger() (instead of the current camera.stopSequenceAcquisition()).
This is a common approach when introducing a new API, but it also means that all device adapters will have to support the new API immediately. If not, the device adapters would have to report back to the core whether they support the new API or not, so the core could use the right path.

Or do you plan to provide default implementations for the new functions in the MM::Camera class that would call back into the old functions? I.e., that MM::Camera::SendBurstEndTrigger() calls this->stopSequenceAcquisition()?

Would it even be possible to provide default implementations for more complex functions like SnapImage()?
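
For illustration, a minimal sketch of what such a backward-compatible default could look like (SendBurstEndTrigger() is a name from the proposal; the class shape below is simplified and is not the actual MMDevice header):

    // Simplified sketch, not the real MMDevice interface.
    class Camera
    {
    public:
        virtual ~Camera() {}

        // Existing legacy function (pure virtual in the real interface).
        virtual int StopSequenceAcquisition() = 0;

        // Proposed new-style call, with a default body that falls back to the
        // legacy function so that un-migrated adapters keep working.
        virtual int SendBurstEndTrigger()
        {
            return StopSequenceAcquisition();
        }
    };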

@henrypinkard
Member Author

Thanks @tomhanak !

if I get it right, the new API calls will be added to CMMCore class and mirrored 1:1 in MM::Camera class.

Yes on the mirroring 1:1, but the proposal I wrote covers additions to MM::Camera, and the Core additions will be based off of that.

So all these new functions will be called by BeanShell scripts or MMCoreJ or plugins above it as nicely visualized by Mark here.
It means the device adapters won't call the new API. Instead, each adapter will have to override these new virtual functions from MM::Camera and "translate" it to e.g. PVCAM API calls...

Correct

You mentioned in Backwards compatibility chapter that CMMCore::stopSequenceAcquisition() will call camera.SendBurstEndTrigger() (instead of current camera.stopSequenceAcquisition()). This is common approach when introducing new API, but it also means that all device adapters will have to support new API immediately. If not, the device adapters would have to report back to core whether they support new API or not, so the core could use the right path.

Or do you plan to provide default implementation for new functions in MM::Camera class that will call back the old functions? I.e. that MM::Camera::SendBurstEndTrigger() calls this->stopSequenceAcquisition()?

Most likely we'll need to add something like an IsNewAPISupported() call to MM::Camera, because it would be very challenging to make all cameras immediately support the new API.

Would it be even possible to provide default implementations for more complex functions like SnapImage()?

It would certainly be nice if it were possible, but this is something it would be great to get your feedback on to see if the proposal for how to do this actually makes sense.

@tomhanak
Contributor

tomhanak commented Aug 8, 2022

How will the end user control the trigger settings? You have rejected new properties with standardized names that would show up in the property browser, and I agree with the conclusion, but... Then there must be another GUI the user could use, probably on the main window below the binning and shutter settings? How would you present the options? Four combo-boxes with names from code? That would confuse everyone, though. One combo box with all 3^4 = 81 combinations? I doubt it...

The SendFrameStartTrigger() should not block; that's not consistent with the other Send*Trigger() functions.
The old SnapImage() should call SendFrameStartTrigger() and then wait for something like an ExposureEnd callback (in the PVCAM API it's the BOF callback). GetImageBuffer() should then wait for a ReadoutEnd callback (in the PVCAM API it's the EOF callback).
To allow that, the new API needs something like WaitForExposureEnd() or WaitForExposureEndTrigger(), optionally with some timeout argument.
Unfortunately, the newest Photometrics cameras are much faster, thus BOF interrupts are not generated anymore and the BOF callback is invoked right before the EOF callback. So SnapImage() blocks until both exposure and readout are completed.

It is not explicitly mentioned anywhere, but I suppose it will be possible to abort any type of running acquisition/burst with SendBurstEndTrigger(), even with BurstEndType=Internal. This would imply that BurstEndType=Software doesn't need to be set for live acquisitions running indefinitely in order to make them abortable.
With all Send*Trigger() functions being non-blocking, it should finally be possible to abort really any type of acquisition. Currently, SnapImage() renders the UI unresponsive until some long timeout is over.

I again see the statement "Camera is always ready for a burst", and I saw it also somewhere around the SnapImage() function. PVCAM-based cameras are ready only after a pl_exp_setup_*() function call and become not-ready when any PVCAM parameter is changed. Because of that, it seems PVCAM doesn't support BurstStart=Internal, but we can intentionally misuse it. More on that follows.

I was unable to uniquely map PVCAM's pl_exp_setup_*() and pl_exp_start_*() to the new API. Depending on various settings, it is not consistent how/when the acquisition/burst is started.
Mark mentioned that the current device interface doesn't have the capability to separate "slow" acq. setup from "fast" acq. start. With PVCAM it is an incorrect assumption that the acq. start is fast. The pl_exp_setup_*() call configures the camera and tells the app the size of each frame (or the full buffer size in case of a sequence acq. with a finite number of frames). The application allocates a buffer of the required size and passes it to pl_exp_start_*(), which prepares DMA transfers, etc. That could be quite "slow", and the bigger the buffer, the slower it is.
Because of that I would do both acq. setup and start in PrepareForBurst(n). A side effect would be that in some cases SendBurstStartTrigger() does nothing.

Below are all PVCAM trigger modes, the expected trigger settings, and the mapping to PVCAM functions.

A live acq. with EXT_TRIG_INTERNAL:

    SetBurstStartTriggerType(Internal); // Must be Internal, because PrepareForBurst(n) starts the acq.
    SetFrameStartTriggerType(Internal);
    SetExposureEndTriggerType(Internal); // Determined by SetExposure(double)
    SetBurstEndTriggerType(Internal); // Determined by numImages
    SetFrameExposureMode(Distinct);
    PrepareForBurst(-1); // pl_exp_setup_cont() + pl_exp_start_cont()
    // Receive internally triggered frames
    SendBurstEndTrigger(); // pl_exp_abort()

It could theoretically be configured exactly as shown in your first example diagram "Software triggered burst", but in this case PrepareForBurst(n) would set up the PVCAM acq. only, while the acq. start is postponed to SendBurstStartTrigger(), which deviates from all other cases here:

    SetBurstStartTriggerType(Software);
    SetFrameStartTriggerType(Internal);
    SetExposureEndTriggerType(Internal); // Determined by SetExposure(double)
    SetBurstEndTriggerType(Internal); // Determined by numImages
    SetFrameExposureMode(Distinct);
    PrepareForBurst(-1); // pl_exp_setup_cont()
    SendBurstStartTrigger(); // pl_exp_start_cont()
    // Receive internally triggered frames
    SendBurstEndTrigger(); // pl_exp_abort()

A live acq. with EXT_TRIG_EDGE_RISING:

    SetBurstStartTriggerType(Software); // Could be also Internal
    SetFrameStartTriggerType(External);
    SetExposureEndTriggerType(Internal); // Determined by SetExposure(double)
    SetBurstEndTriggerType(Internal); // Determined by numImages
    SetFrameExposureMode(Distinct);
    PrepareForBurst(-1); // pl_exp_setup_cont() + pl_exp_start_cont()
    SendBurstStartTrigger(); // No-op if called
    // Receive TTL triggered frames
    SendBurstEndTrigger(); // pl_exp_abort()

A live acq. with EXT_TRIG_TRIG_FIRST:

    SetBurstStartTriggerType(External);
    SetFrameStartTriggerType(Internal);
    SetExposureEndTriggerType(Internal); // Determined by SetExposure(double)
    SetBurstEndTriggerType(Internal); // Determined by numImages
    SetFrameExposureMode(Distinct);
    PrepareForBurst(-1); // pl_exp_setup_cont() + pl_exp_start_cont()
    // Receive TTL triggered first frame and all others internally triggered frames
    SendBurstEndTrigger(); // pl_exp_abort()

A live acq. with EXT_TRIG_LEVEL or EXT_TRIG_LEVEL_OVERLAP:

    SetBurstStartTriggerType(Software); // Could be also Internal
    SetFrameStartTriggerType(External);
    SetExposureEndTriggerType(External);
    SetBurstEndTriggerType(Internal); // Determined by numImages
    SetFrameExposureMode(Sustained);
    PrepareForBurst(-1); // pl_exp_setup_cont() + pl_exp_start_cont()
    SendBurstStartTrigger(); // No-op if called
    // Receive TTL triggered frames
    SendBurstEndTrigger(); // pl_exp_abort()

A live acq. with EXT_TRIG_LEVEL_PULSED:

    SetBurstStartTriggerType(Software); // Could be also Internal
    SetFrameStartTriggerType(External);
    SetExposureEndTriggerType(External);
    SetBurstEndTriggerType(Internal); // Determined by numImages
    SetFrameExposureMode(Combined);
    PrepareForBurst(-1); // pl_exp_setup_cont() + pl_exp_start_cont()
    SendBurstStartTrigger(); // No-op if called
    // Receive TTL triggered frames, N frames, N+1 pulses
    SendBurstEndTrigger(); // pl_exp_abort()

A live acq. with EXT_TRIG_SOFTWARE_EDGE:

    SetBurstStartTriggerType(Software); // Could be also Internal
    SetFrameStartTriggerType(Software);
    SetExposureEndTriggerType(Internal); // Determined by SetExposure(double)
    SetBurstEndTriggerType(Internal); // Determined by numImages
    SetFrameExposureMode(Distinct);
    PrepareForBurst(-1); // pl_exp_setup_cont() + pl_exp_start_cont()
    SendBurstStartTrigger(); // No-op if called
    // SendFrameStartTrigger(); // pl_exp_trigger() to trigger each frame
    // Receive SW triggered frames
    SendBurstEndTrigger(); // pl_exp_abort()

A live acq. with EXT_TRIG_SOFTWARE_FIRST:

    SetBurstStartTriggerType(Software);
    SetFrameStartTriggerType(Internal);
    SetExposureEndTriggerType(Internal); // Determined by SetExposure(double)
    SetBurstEndTriggerType(Internal); // Determined by numImages
    SetFrameExposureMode(Distinct);
    PrepareForBurst(-1); // pl_exp_setup_cont() + pl_exp_start_cont()
    SendBurstStartTrigger(); // pl_exp_trigger() to trigger first frame
    // Receive SW triggered first frame and all others internally triggered frames
    SendBurstEndTrigger(); // pl_exp_abort()

@tomhanak
Contributor

tomhanak commented Aug 8, 2022

Remaining thoughts that I didn't want to mix with my previous long comment.

I'm missing getters for the trigger types. Instead of caching every trigger type on the adapter side in the related setters, I would use those getters from within PrepareForBurst(n) and easily check whether the current combination is supported or not. This way the setters don't need to be virtual. I don't see anything that would "require a lot of state management" as mentioned by Mark.

What to do with existing adapter-specific properties related to triggers, like "TriggerMode" in the PVCAM adapter? We would like to keep it because most of the PVCAM parameters are presented 1:1 as properties in the browser. Would it be possible to call Set*TriggerType(x) from the adapter side? Will the new UI be properly updated then? I think the current code is unable to sync such a setting in both directions, between core/UI and adapter. A bit unrelated: it's e.g. not possible to abort the acquisition from adapter code; we needed that in the PVCAM adapter for the custom streaming-to-disk feature (because the existing MM code cannot stream at 5 GB/s...).

Regarding the confusion "what happens if somebody configures the new-style trigger and then calls the existing SnapImage()?": once the new-style API is implemented in the PVCAM adapter, I would cut the old API off by returning some error code, as I don't see a reason to support both. Hopefully this would clean the code up a bit.
It would be up to CMMCore or a higher layer to check IsNewAPISupported() and use the correct API. Mixing them at a higher layer, like scripts, should IMHO be forbidden as well, and well documented. But if you believe the core will be able to mix both, feel free to implement that.

Having four dedicated Set*TriggerType(const char *type) calls versus one generic SetTriggerType(enum TriggerSource source, enum TriggerType type) - I believe the second is better aligned with MMCore. Firstly because most of the existing string function arguments are "label" or "name", and parsing a string is much slower than just comparing numbers. When there is a limited number of values for an argument, there is an enum, e.g. MM::DeviceType, MM::PropertyType. Almost every IDE offers code completion so nobody has to remember valid enum values.
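
To make the comparison concrete, here is a rough sketch of the two shapes (all names are illustrative only; none of them exist in MMDevice today):

    // Variant 1: four dedicated string-based setters.
    int SetBurstStartTriggerType(const char* type);   // "Internal", "Software", ...
    int SetFrameStartTriggerType(const char* type);
    int SetExposureEndTriggerType(const char* type);
    int SetBurstEndTriggerType(const char* type);

    // Variant 2: one generic enum-based setter.
    enum class TriggerSource { BurstStart, FrameStart, ExposureEnd, BurstEnd };
    enum class TriggerType   { Internal, Software, External };

    int SetTriggerType(TriggerSource source, TriggerType type);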

@marktsuchida
Member

There's lots going on here, so I'll try to split my comments into readable lengths. I'm also going to concentrate first on the most widely available trigger types and leave rolling shutter modes to the next round of discussions. Also please bear with me if I fail to point out every issue on the first try.

First, regarding terminology and concepts:

  • I don't think the proposal, as it currently stands, is particularly similar to GenICam SFNC. Unless overhauled, I'd have to say that that claim would be misleading. Emulating SFNC is not a primary goal here, but using the same terminology seems like a good idea where applicable, and using similar terms to mean different things would be unfortunate.

  • The biggest discrepancy is that you are using the term "burst" in the context of an ordinary acquisition, whereas GenICam only uses "bursts" when multiple bursts-of-frames are desired within a single acquisition (an advanced feature, probably unsupported by many cameras). The word "burst" has the connotation of "happening in bursts", i.e., multiple trains of frames that are intermittent. I don't see any reason to use this term in the context of a normal acquisition (normal in the sense that frames are either equally spaced or individually timed).

  • SFNC (§5.6) defines trigger types AcquisitionStart, AcquisitionEnd, AcquisitionActive, FrameBurstStart, FrameBurstEnd, FrameBurstActive, FrameStart, FrameEnd, FrameActive, LineStart, ExposureStart, ExposureEnd and ExposureActive. Each of these triggers has a number of settings, such as TriggerMode and TriggerSource. The "Active" ones are level triggers.

    • This simplifies to just AcquisitionStart/End/Active, FrameStart, and ExposureStart/End/Active if we, for the time being, choose not to support bursts and line-by-line control. (I'm not 100% sure about the distinction between FrameStart and ExposureStart -- further study needed.)
    • I think using a different trigger for Start vs Active is simpler than having SetFrameExposureMode() to set variants of a single trigger (if I understand correctly). The simplicity helps when you need to query which triggers are supported by the camera.
  • The PreFrame and PostFrame delays might not be conceptually unsound, but they differ a lot from the SFNC and from anything I have seen elsewhere. The usual API is to allow setting a TriggerDelay (µs) for each trigger (the FrameStart trigger in this case) and to handle the post-frame delay implicitly by setting the AcquisitionFrameRate (to use the SFNC term). I worry that this will require each camera adapter to contort its API into something that corresponds to these parameters, and each app to also convert straightforward concepts like frame rate into something complicated. Our API needs to be easy to implement and use in the common case. (See the sketch after this list.)

  • SFNC has a TriggerMode (On or Off) for each trigger, and a separate TriggerSource (Software, Line0, Line1, ...). I guess "internal" in your proposal is the closest thing to TriggerMode = Off. But this is confusing for at least some of the triggers: for example, TriggerMode[FrameBurstStart] = Off (SFNC) does not mean an internal (camera hardware-timed) start of bursts occurs; rather, it means bursts are not used. I think this is another area where unnecessary difference from the SFNC is a disadvantage. I am not opposed to combining TriggerMode and TriggerSource in our API, as long as the term "off" is used to get the same behavior as SFNC's TriggerMode = Off. But I also don't see why a separate mode and source would be a problem.
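
To make the last two points concrete, a rough sketch of the SFNC-style arrangement (the terminology follows SFNC; the C++ signatures are illustrative assumptions, not an existing API):

    // Illustrative only: SFNC-style per-trigger mode/source/delay, selected by name.
    enum class TriggerMode   { Off, On };
    enum class TriggerSource { Software, Line0, Line1 };

    int SetTriggerMode(const char* triggerSelector, TriggerMode mode);
    int SetTriggerSource(const char* triggerSelector, TriggerSource source);
    int SetTriggerDelay(const char* triggerSelector, double delayUs);

    // Frame spacing handled by a frame rate rather than a post-frame delay:
    int SetAcquisitionFrameRate(double framesPerSecond);

    // Example: frames started by an edge on Line0, delayed by 10 µs:
    //   SetTriggerMode("FrameStart", TriggerMode::On);
    //   SetTriggerSource("FrameStart", TriggerSource::Line0);
    //   SetTriggerDelay("FrameStart", 10.0);
    // TriggerMode::Off matches SFNC's TriggerMode = Off (the trigger is simply not used).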

@marktsuchida
Member

Now about how to structure individual functions/properties/features/whatever we want to call them. Some of this overlaps with what @tomhanak pointed out above.

  • We semi-agreed that C++ functions would be better than MMDevice properties, for the reason that we don't have a mechanism to standardize property names. But that doesn't mean that we can get away without some of the features that properties offer. With properties, applications can ask if a given property exists and what its allowed values are. If using functions, we need an alternative method to get the same information.

  • If adding all the functions for that ends up looking a lot like reinventing properties in an ad-hoc and haphazard way, maybe the possibility of implementing these as properties should be given another consideration. Or it could be something intermediate: MMCore could expose properties with standard names, while we could require camera devices to provide trigger control (and feature query) through functions (in which case the Core should forbid camera devices from creating properties with the standard names).

  • The fact that properties automatically show up in the Device Property Browser and can be used in config groups is quite significant. There is no alternative for the latter.

@marktsuchida
Member

Regarding the API for starting and stopping acquisitions.

I am of the opinion that the scope of this work should be limited to providing an API to specify trigger settings and send software triggers. Changing how to start and stop an acquisition should be a separate proposal if needed.

The meaning of startSequenceAcquisition() currently is very similar to SFNC's AcquisitionStart command (not to be confused with the AcquisitionStart trigger): if the settings are such that the camera will wait for an external trigger, it is startSequenceAcquisition() that puts the camera into that waiting state (sometimes called arming the device). The only difference is that the AcquisitionMode (MultiFrame vs Continuous) and the AcquisitionFrameCount (if MultiFrame) are passed along with it.

(Note that SFNC also has an AcquisitionArm command, whose use is optional. This is similar to MMDevice's PrepareSequenceAcqusition() (sic.).)

So I'm not really sure why a function with a new name (PrepareForBurst()) is needed. Can you point out in what case this becomes necessary?

@henrypinkard
Member Author

Thanks to both of you for all the excellent feedback!

I've gone through your comments and provided responses below. I've also made a rough list of areas/issues to consider more or revise in the API based on your comments.

Next steps: Mark and I will meet and discuss these in more depth. After that, I will revise and post a new proposed API. That will be available for more feedback, and simultaneously I will start building a prototype device adapter with this revised API on a Basler camera (chosen because it is similar to GenICam, and because I am already familiar with the SDK having written its adapter a few years ago). Between additional feedback and the hands-on experience of building a prototype, hopefully we can progress towards finalizing this soon.

Things to change and/or discuss further

(please comment if I've missed anything)

  • Change SendFrameStartTrigger() to not block

  • Need a mechanism for signalling to higher levels that the exposure is finished. Consider a blocking WaitForExposureEnd() in the Core, or maybe something like AcqFinished in the CoreCallback API, except for frames.

  • Change the old SnapImage() to wait for the ExposureEnd callback, and GetImageBuffer() to wait for the ReadoutEnd callback (or have it block until an image is ready)

  • Can SendBurstEndTrigger() stop a burst with any trigger type, or is another function needed?

  • Add functions to the API for getting trigger types

  • Change trigger types to at least enums instead of strings, or maybe required properties

  • Should trigger mode be cached in the core or cached in the device adapters?

  • Can the core implement the old camera API entirely in terms of the new camera API, such that core.snapImage(), startSequenceAcquisition(), etc. can continue to exist without the camera supporting the old API?

  • Burst/SequenceAcquisition terminology choice

  • Start vs Active instead of SetFrameExposureMode()

  • TriggerDelay + framerate instead of pre-frame delay/post-frame delay

  • Combining TriggerMode and TriggerSource

  • Implement trigger types as required properties or functions?

  • Keep scope to only include triggers or also include starting and stopping acquisition?

  • Using a different trigger for Start vs Active instead of having SetFrameExposureMode()?

The SendFrameStartTrigger() should not block, it's not consistent with other Send*Trigger() functions.

Agreed, you're right.

The old SnapImage() should call SendFrameStartTrigger() and then wait for something like ExposureEnd-callback (in PVCAM API it's BOF callback). GetImageBuffer() should then wait for ReadoutEnd-callback (in PVCAM API it's EOF callback).

Yes, that makes sense.

To allow that, the new API needs something like WaitForExposureEnd() or WaitForExposureEndTrigger(), optionally with some timeout argument.

Yes that makes sense.

There are three potential ways to do this, and the current API uses two:

  1. Block for duration of exposure (like SnapImage())
  2. Device adapter calls AcqFinished (like current Sequence acquisitions do)
  3. A separate blocking function like WaitForExposureEnd() as you suggest

I think in this new API it makes sense to stick with only (2) or only (3). My preference would be (3), though I haven't fully thought through any backwards compatibility challenges.
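
As a concrete illustration of option (3), a minimal sketch of how an adapter might implement such a blocking wait on top of its SDK's exposure-end callback (the function names and shape are assumptions, not part of any existing API; resetting the flag before each new exposure is omitted for brevity):

    // Minimal sketch: a blocking WaitForExposureEnd() with a timeout, driven by
    // whatever exposure-end (BOF-like) callback the camera SDK provides.
    #include <chrono>
    #include <condition_variable>
    #include <mutex>

    class MyCameraAdapter
    {
        std::mutex mutex_;
        std::condition_variable exposureEndCv_;
        bool exposureEnded_ = false;

    public:
        // Called from the SDK's exposure-end callback.
        void OnExposureEnd()
        {
            std::lock_guard<std::mutex> lock(mutex_);
            exposureEnded_ = true;
            exposureEndCv_.notify_all();
        }

        // Blocks until the exposure ends or the timeout expires; false on timeout.
        bool WaitForExposureEnd(double timeoutMs)
        {
            std::unique_lock<std::mutex> lock(mutex_);
            return exposureEndCv_.wait_for(lock,
                std::chrono::duration<double, std::milli>(timeoutMs),
                [this] { return exposureEnded_; });
        }
    };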

Unfortunately, newest Photometrics cameras are much faster thus BOF interrupts are not generated anymore and BOF callback is invoked right before EOF callback. So the SnapImage() blocks until exposure and readout are completed.

I think that's okay, because if they're much faster, closing the shutter between exposure end and readout end is less of a concern.

It is not explicitly mentioned anywhere, but I suppose any type of running acquisition/burst will be possible to abort by SendBurstEndTrigger(), even with BurstEndType=Internal. This would imply that BurstEndType=Software doesn't need to be set for live acquisitions running indefinitely in order to make them abortable.

I hadn't considered this before. I could see how it would make sense that SendBurstEndTrigger() could be used like an abort function or StopSequenceAcquisition() (in the current API). It would also be possible to add an additional AbortBurst(). Not sure if these two situations should be explicitly separated.

I was unable to uniquely map PVCAM's pl_exp_setup_*() and pl_exp_start_*() to the new API. Depending on various settings, it is not consistent how/when the acquisition/burst is started.
Mark mentioned that the current device interface doesn't have the capability to separate "slow" acq. setup from "fast" acq. start. With PVCAM it is an incorrect assumption that the acq. start is fast. The pl_exp_setup_*() call configures the camera and tells the app the size of each frame (or the full buffer size in case of a sequence acq. with a finite number of frames). The application allocates a buffer of the required size and passes it to pl_exp_start_*(), which prepares DMA transfers, etc. That could be quite "slow", and the bigger the buffer, the slower it is.
Because of that I would do both acq. setup and start in PrepareForBurst(n). A side effect would be that in some cases SendBurstStartTrigger() does nothing.

That makes sense, and I think it is okay to put both pl_exp_setup_cont() + pl_exp_start_cont() in PrepareForBurst(-1). However, I would suggest that the user still call SendBurstStartTrigger() for consistency, and that the device adapter toggle a boolean controlling whether frames from the camera are thrown away or copied into the circular buffer. In other words, the camera starts running a live acquisition, but doesn't start saving frames to MM until the shutter has been opened/illumination turned on and SendBurstStartTrigger() has been sent.
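
A minimal sketch of that suggestion (the member and helper names are hypothetical; PrepareForBurst()/SendBurstStartTrigger() are names from the proposal):

    // Sketch: the camera streams continuously after PrepareForBurst(), but frames
    // are only kept once SendBurstStartTrigger() has been called.
    #include <atomic>

    class MyCameraAdapter
    {
        std::atomic<bool> burstActive_{false};

    public:
        int PrepareForBurst(long /*numImages*/)
        {
            // pl_exp_setup_cont() + pl_exp_start_cont(): the camera starts
            // streaming, but incoming frames are discarded for now.
            burstActive_ = false;
            return 0; // DEVICE_OK
        }

        int SendBurstStartTrigger()
        {
            burstActive_ = true; // start keeping frames
            return 0; // DEVICE_OK
        }

        // Called from the SDK's frame-ready callback.
        void OnFrameReady(const unsigned char* /*pixels*/)
        {
            if (!burstActive_)
                return; // throw the frame away
            // Otherwise copy the frame into MM's circular buffer via the core callback.
        }
    };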

I'm missing getters for trigger types. Instead of caching every trigger type on adapter side in related setters, I would use those getters from within PrepareForBurst(n) and easily check the current combination is supported or not. This way the setters don't need to be virtual. I don't see anything that would "require a lot of state management" mentioned by Mark.

So if I understand, you're advocating for keeping the trigger types cached in the Core instead of the Adapter. Seems reasonable to me.

What to do with existing adapter-specific properties related to triggers, like "TriggerMode" in PVCAM adapter? We would like to keep it because most of the PVCAM parameters are presented 1:1 as properties in the browser. Would it be possible to call Set*TriggerType(x) from the adapter side?

This would be fairly easy to add. Though maybe this argues for keeping the state management in the adapter instead, and adding getter methods to the adapter interface.

Will the new UI be properly updated then? I think the current code is unable to sync such setting in both directions, between core/UI and adapter.

There are callback mechanisms that go from adapter -> core -> UI, although I believe they are incomplete/inconsistent. Maybe @nicost and @marktsuchida can say more.

A bit unrelated, it's e.g. not possible to abort the acquisition from the adapter code, we needed it in PVCAM adapter for custom streaming to disk feature (because the existing MM code cannot stream at 5GB/s...).

I'm not entirely sure I understand the use case here, could you explain a bit more?

Regarding the confusion "what happens if somebody configures the new-style trigger and then calls the existing SnapImage()?": once the new-style API is implemented in the PVCAM adapter, I would cut the old API off by returning some error code, as I don't see a reason to support both. Hopefully this would clean the code up a bit. It would be up to CMMCore or a higher layer to check IsNewAPISupported() and use the correct API. Mixing them at a higher layer, like scripts, should IMHO be forbidden as well, and well documented. But if you believe the core will be able to mix both, feel free to implement that.

Yes, I think that makes sense. I think the core should be able to handle routing to the correct API, though I'd like to test it out. Agree that it should be forbidden at higher levels of software.

Having four dedicated Set*TriggerType(const char *type) calls versus one generic SetTriggerType(enum TriggerSource source, enum TriggerType type) - I believe the second is better aligned with MMCore. Firstly because most of the existing string function arguments are "label" or "name", and parsing a string is much slower than just comparing numbers. When there is a limited number of values for an argument, there is an enum, e.g. MM::DeviceType, MM::PropertyType. Almost every IDE offers code completion so nobody has to remember valid enum values.

Agree

On to Mark's comments

I don't think the proposal, as it currently stands, is particularly similar to GenICam SFNC. Unless overhauled, I'd have to say that that claim would be misleading. Emulating SFNC is not a primary goal here, but using the same terminology seems like a good idea where applicable, and using similar terms to mean different things would be unfortunate.

It's "Based on", not a replication of the spec. Please point out any changes to naming conventions (in addition to those you already have) that are the same concepts with different names.

The biggest discrepancy is that you are using the term "burst" in the context of an ordinary acquisition, whereas GenICam only uses "bursts" when multiple bursts-of-frames are desired within a single acquisition (an advanced feature, probably unsupported by many cameras). The word "burst" has the connotation of "happening in bursts", i.e., multiple trains of frames that are intermittent. I don't see any reason to use this term in the context of a normal acquisition (normal in the sense that frames are either equally spaced or individually timed).

We disagree on this, and I'm not sure there is an objective correct answer here, though I get where you are coming from. Here are the reasons I see to use the "burst" terminology:

It is confusing that we currently use "Acquisition" to mean both a single burst of frames (e.g. startSequenceAcquisition) and multiple bursts of frames (e.g. a multi-dimensional acquisition). Granted, the former is formally a "Sequence" acquisition, but I don't think this is an especially intuitive distinction (it certainly wasn't to me). Changing the terminology to burst would clarify this distinction.

When I read "burst", I think "happens really fast". When I read "bursts" (plural), I agree with you, I think of happening in multiple series of frames. The proposed API is singular, not plural. So it can be used for, for example, to collect a single timelapse of frames as fast as possible. This, to me, should be called a burst. It can also be used multiple times to collect multiple series of frames. For example a timelapse triggered z-stack. We already call something like this a (Multi-D) acquisition, so I think this fits nicely with the paradigm that GenICam describes.

I am not alone in thinking that a "burst" and a "sequence acquisition" are equivalently "a series of frames acquired really fast". Though the API doesn't say it explicitly, we already use the term "burst" to mean this in several places in the documentation (1, 2, 3 -- search for "burst"), and MM users use the terminology as well when asking questions (4, 5). So I don't think this is introducing anything new so much as it is clarifying an existing inconsistency in MM. We could similarly replace "Burst" everywhere in this proposed API with "SequenceAcquisition", but this is more verbose, and in my opinion less clear.

I also think there is less potential for errors and confusion from users if the new API is clearly delineated from the old API in function names. core.SnapImage(), core.startSequenceAcquisition(), etc. will likely remain in the API for a long time, even if deprecated. With the "burst"/"frame"/"trigger" terminology, there's less overlap.

Finally, I don't see any reason that calling it "Burst" would lead to limitations on the device adapter side. "Acquisition" and "Burst" have analogous trigger types in GenICam (AcquisitionStart--FrameBurstStart, AcquisitionEnd--FrameBurstEnd, AcquisitionActive--FrameBurstActive), so it seems cameras that have no special Burst modes could still fairly easily map their acquisition features to our "burst" API. But where an additional special Burst mode is available for extra high speed (like the one in Basler), that's definitely something we want to expose so that users have access to the highest performance.

I think using a different trigger for Start vs Active is simpler than having SetFrameExposureMode() to set variants of a single trigger (if I understand correctly). The simplicity helps when you need to query which triggers are supported by the camera.

Yes, that makes sense. Can you propose new function names reflecting this?

The PreFrame and PostFrame delays might not be conceptually unsound, but they differ a lot from the SFNC and what I have ever seen elsewhere. The usual API is to allow setting a TriggerDelay (µs) for each trigger (the FrameStart trigger in this case) and to handle the post-frame delay implicitly by setting the AcquisitionFrameRate (to use the SFNC term). I worry that this will require each camera adapter to contort its API into something that corresponds to these parameters, and each app to also convert straightforward concepts like frame rate into something complicated. Our API needs to be easy to implement and use in the common case

This makes sense; I don't have much experience with these settings. I'd like to actually prototype something on a camera to see how it might work and what I'm missing. In the meantime, if you can offer your best guess at what the API should look like, that would be helpful.

SFNC has a TriggerMode (On or Off) for each trigger, and a separate TriggerSource (Software, Line0, Line1, ...). I guess "internal" in your proposal is the closest thing to TriggerMode = Off. But this is confusing for at least some of the triggers: for example, TriggerMode[FrameBurstStart] = Off (SFNC) does not mean an internal (camera hardware-timed) start of bursts occurs; rather, it means bursts are not used. I think this is another area where unnecessary difference from the SFNC is a disadvantage. I am not opposed to combining TriggerMode and TriggerSource in our API, as long as the term "off" is used to get the same behavior as SFNC's TriggerMode = Off. But I also don't see why a separate mode and source would be a problem.

I see what you mean. This still isn't totally clear to me and I think my understanding would benefit from prototyping. I have a hunch this ties in with how one interprets a burst.

Now about how to structure individual functions/properties/features/whatever we want to call them. Some of this overlaps with what @tomhanak pointed out above.

We semi-agreed that C++ functions would be better than MMDevice properties, for the reason that we don't have a mechanism to standardize property names. But that doesn't mean that we can get away without some of the features that properties offer. With properties, applications can ask if a given property exists and what its allowed values are. If using functions, we need an alternative method to get the same information.

If adding all the functions for that ends up looking a lot like reinventing properties in an ad-hoc and haphazard way, maybe the possibility of implementing these as properties should be given another consideration. Or it could be something intermediate: MMCore could expose properties with standard names, while we could require camera devices to provide trigger control (and feature query) through functions (in which case the Core should forbid camera devices from creating properties with the standard names).

The fact that properties automatically show up in the Device Property Browser and can be used in config groups is quite significant. There is no alternative for the latter.

Good point. If we do it as functions, they're essentially going to be exactly like properties but without being usable in config groups. It is worth investigating in prototyping how difficult it would be to make standardized properties in the core.

I am of the opinion that the scope of this work should be limited to providing an API to specify trigger settings and send software triggers. Changing how to start and stop an acquisition should be a separate proposal if needed.

It seems to me these are interrelated enough that it would be hard and confusing to do one without the other. I think it is another thing that would benefit from prototyping before reaching a final decision.

So I'm not really sure why a function with a new name (PrepareForBurst()) is needed. Can you point out in what case this becomes necessary?

I think this is essentially the same as PrepareSequenceAcqusition(), as you say. Whether it's necessary depends on whether starting and stopping acquisition are part of this change. It will be easier, I think, in building a prototype system if it can be done entirely separately from the existing API. If it turns out that starting and stopping shouldn't be part of this, it should be straightforward to fold PrepareForBurst() back into PrepareSequenceAcqusition().

Phew! That was a lot. It will definitely be helpful to talk through this more on Zoom.

@henrypinkard henrypinkard changed the title Feedback on new camera trigger API proposal Feedback on new camera trigger API proposal (Chapter 2) Sep 21, 2022
@henrypinkard
Member Author

henrypinkard commented Sep 21, 2022

After discussing with @marktsuchida, a new version of the API has been created:

A brief overview of some of the important changes:

Similarity to GenICam

Since the first version of this, @marktsuchida has come around to the idea that this API should follow GenICam as closely as possible unless there is a compelling reason to diverge. I too have become convinced of this. The new version has numerous changes to make this a reality (I'm not listing them all here because there are many).

Triggers as properties vs. functions

After further study of GenICam, I now think it makes more sense to make triggers functions instead of properties. Each trigger type (e.g. FrameStart, FrameBurstEnd), which GenICam calls the "TriggerSelector", has its own state: On/Off, delay, internal/external/software. Enumerating these for all triggers would either lead to an enormous number of properties, or would not show all of a trigger's settings at once in the properties. Either way it would be hard to visualize.

Furthermore, changing one part of the state at a time (e.g. source=Internal, or activation=RisingEdge) would lead to a property state that is not necessarily valid until some validate function is called, or it would require the adapter to adjust to a valid combination in a complicated and potentially confusing way.

The new API thus includes Set and Get functions, along with a new trigger-changed notification function in the core callback.
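
For orientation, a rough sketch of the shape this implies; the actual signatures are in the linked v2 document, and the names below are illustrative only:

    // Illustrative shape only; see the v2 document for the real signatures.
    // On the camera device: per-trigger setters and getters, selected by name.
    int SetTriggerSource(const char* triggerSelector, const char* source);
    int GetTriggerSource(const char* triggerSelector, char* source, unsigned bufLen);

    // On the core callback: a notification the adapter fires when a trigger
    // setting changes on the device side (e.g. via a vendor-specific property),
    // so the core and GUI can re-query and stay in sync.
    int OnTriggerSettingChanged(const char* triggerSelector);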

Caching of trigger states should be on the camera or device adapter

As @marktsuchida correctly points out, this will avoid the core getting desynchronized

Old API/backward compatibility

The old (camera) API will, for now, be optional on new devices, possibly to be removed in the future.

The old (core) API will be implemented in terms of the new camera API when it is present, or fall back on the old camera API when it's not.

Bursts/Acquisition Terminology

I've come around to @marktsuchida's position of using the Acquisition terminology instead of the Burst terminology. Since we are now going for maximum similarity to GenICam, it seems that Burst is insufficient since there are no BurstStart functions like the AcquisitionStart functions in GenICam.

The distinction between these may not matter so much in practice. For example, in the Basler API, Acquisitions and Bursts are the same thing. But better to stick with what is more explicitly outlined in GenICam. Some users may still think of these as Bursts, and there remains confusion with MDAs. Oh well.

Include AcquisitionStart, AcquisitionStop, etc.

Yes. It seems much simpler to me to include these than to try to contort the existing API to fit them. It will also separate the old API from the new, which I think will make things cleaner and backwards compatibility easier. The only downside I see is that the Core API will have many functions, some of which have similar names, that are part of different APIs. Open to suggestions to simplify this.

Next steps

The new API (v2) is here

Closing this issue now, and opening a new issue for discussion of v2 of the API and the prototyping process.
