Mechanism to expose some ophyd device methods directly? #576

Open

dperl-dls opened this issue Jul 26, 2024 · 20 comments

Labels: client (Relates to client code), enhancement (New feature or request), rest api (Potential REST API changes)

@dperl-dls (Contributor) commented Jul 26, 2024

As a GUI developer, I am interested in at least three main kinds of interaction between an experimental GUI and the hardware:

  1. Running an experiment, where a form is used to specify all the parameters and a button is used to trigger it
  2. Making small "adjustments" like "move sample x by 10 um", "rotate the sample 45 degrees", or setting the backlight brightness on a slider... That is, "live" control of the beamline hardware
  3. Getting a passive update of some value, like an OAV image, the current position, etc.

Is there an agreed-upon way of handling 2 and 3?

For 2, you could:

  • (a) wrap these in a tiny plan to be run by the RE, so everything has to go through the RE. For the simplest cases you can use things like bps.abs_set() or rel_set() directly (see the sketch after this list).
    • this seems like a lot of layers of indirection and boilerplate to do something really simple
    • on the other hand you get some stuff for free, like not being able to move things while a plan is running
    • but you also can't change the backlight brightness while a plan is running...
  • (b) directly expose specifically chosen set() and similar methods through BlueAPI
    • basically the opposite pros and cons of the above
    • status information is more easily propagated to the UI
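
A minimal sketch of option (a), assuming a dodal-style device factory (nudge_sample_x and i03.sample_x are illustrative names, not existing code); the stub itself is just bluesky's rel_set:

import bluesky.plan_stubs as bps


def nudge_sample_x(distance_mm: float):
    """Tiny plan: move sample x by a relative amount and wait for completion."""
    sample_x = i03.sample_x()  # hypothetical dodal device factory
    yield from bps.rel_set(sample_x, distance_mm, wait=True)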

For 3, a plan is probably really not an option, so it's:

  • expose some read()/get_value()s, etc. (see the sketch after this list)
  • interact directly with EPICS
    • IMO this is bad and we shouldn't do it
  • reuse components from the technical GUI - probably the best option where the information we want comes from PVs
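
For the first option, a minimal sketch of pushing passive updates from an ophyd signal to a UI (the PV name and push_to_ui are hypothetical):

from ophyd import EpicsSignalRO

brightness = EpicsSignalRO("BL04I-DI-OAV-01:BRIGHTNESS", name="brightness")  # hypothetical PV


def on_update(value=None, **kwargs):
    # ophyd passes value, old_value, timestamp, etc. as keyword arguments
    push_to_ui({"brightness": value})  # hypothetical websocket/SSE transport


brightness.subscribe(on_update)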

Would appreciate any comments from @callumforrester @DominicOram @stan-dot @DiamondJoseph and anyone else relevant you can think of. We would like to be in a position to start some preliminary prototyping of web GUIs fairly soon, so it would be nice to have some kind of agreement on what we do and don't want to support, even if we don't fully implement it yet.

@stan-dot (Contributor) commented Jul 29, 2024

There is a variety of low-level plans in bluesky's plan stubs for this job.

Those are runnable from the Swagger GUI - e.g. the i22 instance.

Adding another execution environment with tracing etc. would be a huge effort. "Moving more than one thing at a time" is contentious, and I am not sure what the broader outlook on that is for the future.

OTOH - what is the science use case for 2 and 3? If something needs adjustment, it can either be run before the experiment plan or included in it.

@dperl-dls (Contributor, Author) commented Jul 29, 2024

It is IMO important that whatever new GUI solution we present is not a regression from GDA; it should support at least the same workflows and features.

As an example of scenario 2, a scientist may wish to move a sample around to visually identify a region of interest before performing an experiment at that location, increase or decrease lighting to make such identification easier, adjust the flow rates of a cryostream to find a level where ice doesn't form on the sample, or make any of a myriad of other small manipulations. As far as 3 is concerned, it is simply necessary to display some information about the current beamline state, since there is no sensible way to decide on actions without knowing some of it.

I'm not sure what you mean by "execution environment" but devices should be able to be pulled from the context just as easily as plans.

Indeed, things like the plan stubs are what I meant by 2a. However, this needs some thought. If you set, e.g., the energy, this might take a long time to complete. If this is processed through the run engine, that means that all other features are unavailable while the energy change is processing - that's probably not desired, and it is certainly a regression from current behaviour, where you can start the energy change and then manipulate the sample while waiting for it to complete.

@DominicOram (Contributor) commented Jul 29, 2024

> interact directly with EPICS

I think for 90% of the use cases in 2 (certainly for all the ones you've described) this is literally just poking a PV, so I think it should be as thin a layer as possible over the top of that; I don't think an ophyd device is even necessary for most use cases. @coretl could we just pull in parts of the technical UI for this?

> Get a passive update of some value, like an OAV image, the current position, etc.

Again, the technical UI components should be able to handle some of these too, where we're just looking directly at PVs.

I'm very keen that we don't reinvent the wheel and end up with what we currently have: a DAQ GUI where we've reimplemented large parts of things that already exist in the technical GUI. I actually think in the ideal scenario we would have a WYSIWYG system like Phoebus that, alongside widgets for interacting directly with PVs (like the technical UI), has widgets for interacting with ophyd devices or with plans. We could then get to a point where scientists are able to modify UI elements themselves to some basic degree, without huge amounts of coding.

@dperl-dls (Contributor, Author) commented Jul 29, 2024

If we have some standard way of interacting with PVs that might be fine for many cases, but in some cases we will still have to work at least at the level of ophyd devices (energy, or a zoom level which needs to change brightness...), and there are still advantages to doing so - only having to keep/change the PV in one place, for example.

@stan-dot (Contributor)

@dperl-dls scenario 2 could be done with a series of abs_set or rel_set plans, and from the GUI perspective we can treat those differently from the big plans. Blueapi is just the API; at the moment GUI prototyping is in the squid repo.

3 is part of the technical GUI, right? We can create reusable React components and import them in both the technical UI and the DAQ UI.

I am operating under the assumption that devices are instantiated when blueapi starts up, but that all the methods callable on them that change their state need to go through the RunEngine.

> whatever new GUI solution we present is not a regression from GDA; it should support at least the same workflows and features.

> a regression from current behaviour, where you can start the energy change and then manipulate the sample while waiting for it to complete.

Current behaviour can include user-action paths which are difficult to support and which may have simpler alternatives.
GDA is overengineered - the guarantee of features need only hold in the science sense; in the UX sense, simpler is generally better.

The more possible user paths an API exposes, the more it resembles a glucose maze for a slime mould - user habits will populate them and then expect us to support them. https://www.hyrumslaw.com/

A few well-supported user paths are better than many ways to do one thing - both for the scientists and for the support engineers (us).

Specifically for the energy change plan - I am not sure how much time is saved through such parallel manipulation. If this happens often, both could run inside one plan. If it is rare, maybe it is fine to wait 30 seconds once a day.

By "execution environment" for plans I mean the RunEngine, which runs the function that returns a Message generator. This environment takes care of logging and other low-level details. We would need some other kind of wrapper around an asyncio event loop to run the coroutines to execute device methods. This new lightweight wrapper would require some volume of work to develop and maintain. It might turn out to be easy, but I haven't checked.
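
As a rough illustration of the kind of wrapper meant here (an assumption, not an existing blueapi feature): a dedicated event-loop thread that runs ophyd-async device coroutines outside the RunEngine.

import asyncio
from threading import Thread


class DeviceMethodRunner:
    """Run device coroutines on a background event loop, bypassing the RunEngine."""

    def __init__(self) -> None:
        self.loop = asyncio.new_event_loop()
        Thread(target=self.loop.run_forever, daemon=True).start()

    def call(self, coro, timeout: float = 60.0):
        """Submit a device coroutine, e.g. motor.set(10), and block for the result."""
        return asyncio.run_coroutine_threadsafe(coro, self.loop).result(timeout)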

@dperl-dls (Contributor, Author)

To look at it the other way: the behaviour currently offered by GDA is what has been created in response to 20 years of user demands - in the absence of contradicting information, that interface defines everything we actually know about "science features".

The phenomenon described in your link does not apply to this scenario:

  • There are not "sufficiently many users" - only roughly 100
  • This behaviour is not a bug or a private implementation detail - it is explicitly desired, and we will have to build some way of supporting it, whether that means using abs_set() etc., wrapping small things (zoom change) in stub plans, or interacting with ophyd devices directly

> Specifically for the energy change plan - I am not sure how much time is saved through such parallel manipulation. If this happens often, both could run inside one plan. If it is rare, maybe it is fine to wait 30 seconds once a day.

Perhaps I still have not been clear enough about what is needed here. Case 2 describes "live" control of beamline hardware: press an arrow and a motor moves; scroll on the OAV and the sample rotates; etc. Currently it's not possible to inject new messages into a running plan, as far as I'm aware, and building a mechanism to incorporate user input like that sounds far more complicated to develop and maintain than executing device methods.

@DiamondJoseph (Contributor)

I agree that 3 is probably part of the technical UI [from coniql as part of The Graph, with a standard set of components for watching particular important signals?]: I think this is limited to 10 Hz, which should be enough for human reactions?

Is 2 in the context of "between scans" or "during scans"? Between scans, yes: we can expose the existing stubs fairly easily (it just needs annotating them with the correct return type and deciding where those stubs live; if annotations are added to bluesky.plan_stubs, maybe that is an argument for which should be exposed - e.g. "between scans" should only ever mv, not set without waiting).

Moving other signals during a scan is a more complex use case that we need to get right. Probably only some subset of the stubs should be allowed during other scans.

@dperl-dls (Contributor, Author) commented Jul 29, 2024

"Between scans" is the important part for sure - I don't know of any reason why we would want to do it during scans

components from the technical GUI cover most of 3, but there are probably cases were we want to look at ophyd signals which don't directly correspond to a PV

@DominicOram (Contributor)

Yes, it would be "between scans", but if we're using plan stubs then it would still have to be "during plans". I need to be able to press a button to move the detector and press another to move the sample whilst the detector is moving.

@DiamondJoseph (Contributor)

https://github.com/DiamondLightSource/dls-bluesky-core/blob/main/src/dls_bluesky_core/stubs/wrapped.py

Here are the stubs we currently expose on i22 and the test beamlines. Personally I'd like to see the two set methods and wait removed (if you want to set an axis moving and then run a scan, it should be part of your plan, as it's a particular experimental behaviour that needs to be repeatable).

Are there any additional pre-existing plan stubs that you think should be included?
https://github.com/bluesky/bluesky/blob/main/src/bluesky/plan_stubs.py

I think the annotated subset of stubs that we support should either be in dodal or blueapi.
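
As an illustration, a hedged sketch (in the style of the wrapped.py linked above, not a copy of it) of an annotated mv stub, assuming a recent bluesky where MsgGenerator lives in bluesky.utils:

from typing import Any

import bluesky.plan_stubs as bps
from bluesky.protocols import Movable
from bluesky.utils import MsgGenerator


def move(moves: dict[Movable, Any]) -> MsgGenerator:
    """Move each device to an absolute position and wait for all moves to complete."""
    yield from bps.mv(*[arg for pair in moves.items() for arg in pair])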

For running stubs during the execution of a plan I'm going to defer to bluesky/bluesky#1652 with the following assumptions and notes:

  1. Nudging a motor should be possible during [some segment of] a plan
  2. The same UI component should be usable to nudge a motor before and during a scan
  3. A plan in which a motor can be nudged should record and document that the motor was nudged

Here "motor" and "nudged" are actually "Signal" and "Set".

@DiamondJoseph (Contributor)

> I need to be able to press a button to move the detector and press another to move the sample whilst the detector is moving.

Is this just for optimising moving between beamline states?

@dperl-dls (Contributor, Author)

I'm not sure bluesky/bluesky#1652 really covers this? That ticket describes plans which can branch off into sub-plans, but I don't think it means you will be able to do

RE(plan_1())
RE(plan_2())

and have them run in parallel, no?

@stan-dot (Contributor)

> There are not "sufficiently many users" - only roughly 100

A user is different for each experiment, and each kind of scientific experiment can be approached differently.

We still ended up with many divergent ways people use GDA, and 3 different implementations of the same feature.

@DiamondJoseph (Contributor)

Here's my thinking as a pseudo-plan: interrupts is an adaptive scan that listens for incoming stub requests, and adjustable_plan is the part of the plan that may (but does not require) manual adjustment.

# pseudocode: run_sub, api.await_stub_requests and the manually requested
# stubs are hypothetical - no such mechanism exists in bluesky yet
def my_plan(*args, **kwargs):
    yield from prepare_plan()
    # start a listener sub-plan that services incoming stub requests
    sub_status = yield from run_sub(interrupts())
    yield from adjustable_plan()
    # stop accepting adjustments and wait for any in-flight ones to finish
    yield from wait(sub_status)
    yield from teardown_plan()


def interrupts():
    # combine the statuses of every manually requested stub so the
    # parent plan can wait on all of them at once
    status = None
    while (stub := yield from api.await_stub_requests()) is not None:
        stub_status = yield from stub()
        status = stub_status if status is None else status & stub_status
    return status

@dperl-dls (Contributor, Author)

Hm, yes, okay, that could be a good way to do it - it ensures that you have control over which stubs can be run, prevents actually launching a scan, etc.

@stan-dot (Contributor)

The use of the adaptive scan for this purpose really seems like a great way out of this puzzle @DiamondJoseph

I am glad we might not need a new asyncio event loop

@coretl (Contributor) commented Jul 29, 2024

It would appear I'm late to the party, but we had this discussion over in bluesky/bluesky-queueserver#292 (comment) and it looks like the proposal they made was to put jogging actions and monitoring in a second process, but still use the ophyd objects.
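
As I read that proposal, a minimal sketch would be a second process owning its own ophyd objects and jogging them directly, outside the RunEngine (the PV prefix here is hypothetical):

from ophyd import EpicsMotor

motor = EpicsMotor("BL04I-MO-SAMP-01:X", name="sample_x")  # hypothetical PV prefix
motor.wait_for_connection()

status = motor.set(motor.position + 0.01)  # jog by 10 um
status.wait(timeout=10)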

Personally I'm still on the fence between doing the interface via bluesky and via the technical UI. Both have advantages and disadvantages as outlined above.

@stan-dot (Contributor)

from the blueapi POV we have:

  • question 1 - whether we have a clear technical reason to adapt to whatever is done upstream in the bluesky org
  • question 2 - if it is decided that the main logic should be handled upstream, then exposing that logic in blueapi must come later, making the discussion in this issue more speculative. It might also be worth reconsidering whether to just adopt bluesky queueserver instead of blueapi @callumforrester .
  • question 3 - if time is of importance, it looks like the advantages and disadvantages aren't outlined in a clear table - a meeting could be set up to create one, and then an ADR with a review date in 2-6 months? I am not sure how much of an irreversible commitment this choice would be

@dperl-dls is it for the next 2-3 weeks, or a speculative "will need it at some point"?

@dperl-dls (Contributor, Author)

We won't need it in the next two weeks, but maybe in the next two months? Like I said, it would be good to have the discussion about it pretty soon.

@stan-dot stan-dot added enhancement New feature or request rest api Potential REST API changes client Relates to client code labels Sep 9, 2024