Display message triggers #530
Conversation
Long messages are now abbreviated/truncated. It shows the first message text only, accompanied by a number such as "(2)", where 2 is the total number of custom messages. Also added a tooltip that displays each custom message in a paragraph. Still a draft, so I will improve the code later to remove duplicated code and avoid iterating the arrays multiple times (if possible, due to …). Current workflow used for testing:
(Big discussion on Element on how to get the output message in the upstream (emitting) task; it just needs a small change on the back end.)
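For illustration only (not the PR's actual code), a minimal sketch of the abbreviation described above - first message text plus a "(N)" count - where the function name and the 20-character cut-off are made-up assumptions:

// Hypothetical sketch: build the abbreviated chip text described above,
// i.e. the first message, truncated, plus the total number of custom messages.
function abbreviateMessages (messages, maxLength = 20) {
  if (!messages || messages.length === 0) {
    return ''
  }
  const first = messages[0].message
  const truncated = first.length > maxLength
    ? `${first.slice(0, maxLength)}…`
    : first
  // e.g. "started (2)" when there are two custom messages in total
  return `${truncated} (${messages.length})`
}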
Force-pushed e917df4 to 40de5fd.
Working on the updates for this PR. First step is to:
I'm not worrying about which message is coming from which task/job. Instead, every job will have the same 10 messages, for developing/testing only. I've used the following structure:

const messages = [
  {
    label: 'label1',
    message: 'message1',
    timestamp: 123456
  },
  {
    label: 'label2',
    message: 'message2',
    timestamp: 234567
  }
]

That should be easy to use in the UI, and easy to adapt to whichever data model is implemented in the backend 👍 (I changed the JS structure, to have …)
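As a hedged illustration of how that structure could be consumed (not part of this PR), a helper that sorts by timestamp, newest first, and keeps only the most recent labels for the chips; the function name and limit are assumptions:

// Hypothetical sketch: derive the chip labels from the mocked structure above.
function recentLabels (messages, limit = 5) {
  return [...messages]
    .sort((a, b) => b.timestamp - a.timestamp) // newest first
    .slice(0, limit)                           // keep only the most recent
    .map((m) => m.label)
}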
Codecov Report
@@ Coverage Diff @@
## master #530 +/- ##
===========================================
- Coverage 82.80% 49.55% -33.25%
===========================================
Files 66 67 +1
Lines 1320 1354 +34
Branches 81 81
===========================================
- Hits 1093 671 -422
- Misses 208 662 +454
- Partials 19 21 +2
Flags with carried forward coverage won't be shown.
Continue to review full report at Codecov.
I'm working on displaying the messages in the job-details panel. For that I'm also doing something I've been postponing: making job-details a node in the tree, which reduces the logic in the template a bit. How does this look for now @hjoliver , @oliver-sanders ?
The questions I have right now are:
Thanks! p.s.: we have 15 messages in each job now, for testing; there is a flag that is …
I'm not too concerned about alignment and divider; whatever you think looks best is fine for this PR; we can tweak those things later if necessary. I see you're not displaying most-recent-message first yet in the chip ordering? (If there are lots of messages, seeing only the first 5 while the next 95 come in slowly wouldn't be ideal.) We should probably list all the messages in job details, so we can see what hasn't been received yet as well as what has been received, in which case some way of distinguishing the two cases is needed - maybe greyed out for not received?
Also, the …
Ah! Forgot it must be the Nth most recent messages. Will fix in the next commit. And good idea on displaying pending messages in a different style! Thanks Hilary!
Perhaps a sub-header called "outputs" or "custom outputs".
Can we use the same alignment as in the previous (job details) section? That looks pretty good.
We're still in α so anything flies! We will definitely need to collapse them somehow as (for some niche cases) there will be a large number. 5 sounds like a sensible number.
Here's today's update:
The items are now aligned equally in job details & output messages. I've changed the mocked data to return the list ordered by timestamp, with the most recent messages first. That's useful for the chips, but I left it in the job details panel too, so you see it there as well. What should we do about the mobile/smaller viewports?
Looking really good @kinow 👍
If we always displayed the entire list (in job details) it should probably be in the natural order (not reversed like the chips). BUT we intend to truncate that list by default, to say five lines (in this PR, or in a follow-up?), so that would require reverse ordering like the chips to ensure that the most recent messages are prioritized for display. That being the case, it seems to me we'd want:
Can we cut off more chips as the window shrinks, down to a single one, rather than flowing onto the next lines? Hmm, that suggests we should not have a fixed number like 5, but as many as will fit in the current window size? (Is that easy enough to do?)
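One possible (purely illustrative) answer to the "as many as will fit" question is to watch the container width with a ResizeObserver and derive a chip count from it; the element id and per-chip width below are made-up values, not the PR's code:

// Illustrative only: estimate how many chips fit in the container width.
// 'job-chips' and the 90px estimate are hypothetical, not from the PR.
const container = document.getElementById('job-chips')
const APPROX_CHIP_WIDTH = 90 // px, rough estimate including margins

const observer = new ResizeObserver((entries) => {
  for (const entry of entries) {
    const fit = Math.max(1, Math.floor(entry.contentRect.width / APPROX_CHIP_WIDTH))
    console.log(`could display up to ${fit} chips`)
  }
})
observer.observe(container)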
@dwsutherland shared this query of what it might look like in the GraphQL implementation. It matches the schema used in this draft PR (thanks a lot David!), so finishing up this PR once we are done with the final adjustments should be fairly simple.

query {
  workflows (ids: ["*|vix"]) {
    tasks (ids: ["foo"]) {
      proxies (ids: ["1|foo"]) {
        latestMessage
        outputs (satisfied: false, limit: 5, sort: {keys: ["time"], reverse: true}) {
          label
          message
          satisfied
          time
        }
        extras
      }
    }
  }
}
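For context only, a minimal sketch (not from the PR) of how the satisfied field returned by that query could drive the greyed-out style suggested earlier for pending outputs; the function name is an assumption:

// Hypothetical sketch: split outputs from the query above into received and
// pending groups, so pending ones can be rendered greyed-out as suggested.
function partitionOutputs (outputs) {
  return {
    received: outputs.filter((o) => o.satisfied),
    pending: outputs.filter((o) => !o.satisfied) // e.g. render with a muted style
  }
}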
Thanks!
All good for me 👍
Good idea! Let me think a bit about how to implement this… 🤔
So far the only thing I could think of is controlling the number of chips displayed based on the viewport: small viewports display 1, medium display 3, and bigger ones display 5. However, this works only when the host name is not really long. Right now it looks a bit busy with so many messages in two side-by-side tree views. The simplest approach would be to display just 1 chip, with either the number of messages or just the first message label. Or do that in the small viewport, and then show 5 when the viewport is bigger (tablet & desktop sizes)?
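A minimal sketch of that idea, assuming Vuetify 2's breakpoint service is available as this.$vuetify.breakpoint (the property name maxChips and the thresholds are illustrative, not the PR's code):

// Hypothetical Vue computed property, assuming Vuetify 2's breakpoint service:
// 1 chip on small viewports, 3 on medium, 5 on larger ones.
export default {
  computed: {
    maxChips () {
      switch (this.$vuetify.breakpoint.name) {
        case 'xs':
        case 'sm':
          return 1
        case 'md':
          return 3
        default: // 'lg', 'xl'
          return 5
      }
    }
  }
}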
I think that would be fine for this PR; we can consider improving it later.
Do you mean to always display 1 chip, or to do that only when the viewport is small and keep the 5 when the viewport is bigger, @hjoliver ?
Actually I'm not sure. I don't think it's great to display only a single chip no matter what the viewport. Ideally we need to show as many messages as possible without wrapping the v-chips onto the next line. But 5 most recent messages with wrapping in smaller viewports is probably fine for this PR - would you agree @oliver-sanders ?
I would have thought something like displaying the five most recent with …
I was wondering if we couldn't get the ellipsis in the right place if using CSS overflow like that. But I suppose it could be in another div to the right of the truncated list of v-chips?
Ellipses are easier to use with text. With child elements etc. it gets trickier.
Tried a few variations today, without luck. Hiding when the viewport is small doesn't necessarily solve it; at medium sizes it may still create extra lines due to label width. The easiest approach I found was to display the job information, including messages, in a single row, without wrapping. Does that look like a possible approach (maybe for now)?
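For illustration, the "single row, no wrapping" idea expressed as inline styles set from JavaScript; in the actual component this would live in CSS, and the element id below is hypothetical:

// Illustrative only: keep the job information on one line and scroll instead
// of wrapping. 'job-row' is a made-up id, not from the PR.
const row = document.getElementById('job-row')
row.style.display = 'flex'
row.style.flexWrap = 'nowrap'   // keep chips on one line
row.style.overflowX = 'auto'    // scroll horizontally instead of wrapping
row.style.whiteSpace = 'nowrap'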
@kinow , in @oliver-sanders 's comment:
I guess by … What about doing that, with max 5 chips, but put the ellipsis (or …)
I tried that, but couldn't get the ellipsis or indicator of more chips to work. Moving the + sign before the chips could work. I think as long as we are consistent, that should be OK. But if in other elements we have an ellipsis or + after the list, then it would be at least awkward IMO. How does this look? If you have a long task proxy name (or deeply nested hierarchy for parents), several jobs, or a long host name, the scroll bar might be displayed anyway (…). But not sure which approach is better for Cylc users :-) so happy to go with either way.
Hmm maybe you're right. @oliver-sanders - what do you think?
(I actually quite like the 2nd example, but ellipsis in front is a bit quirky.)
ping @oliver-sanders ☝️ 😬
The second example is really nice (no idea how it works)! But I would want to find some way to get the ellipsis after rather than before, or else make it more informational (e.g. it could give the number of custom outputs or say "N more", then disappear when all are revealed).
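A small sketch of that "N more" idea (illustrative only; the function name and parameters are assumptions):

// Hypothetical sketch of the "N more" indicator suggested above: show the
// count of hidden custom outputs and drop the label once all are revealed.
function moreLabel (totalOutputs, visibleChips) {
  const hidden = totalOutputs - visibleChips
  return hidden > 0 ? `${hidden} more` : ''
}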
Done: from 40 commits, down to the 14 that I could group together. 5 main commits; the rest are addressing feedback (move …). Tested locally, everything worked. Ready again for review. 🤓
BTW, tested after syncing all projects to …
Wow, well done @kinow, that sounds like it was a gnarly one to figure out! (I presume you've put enough comments in to explain how that works for mere mortals?)
🙌
Bruno opens the IDE to write a comment about the …
The simple way to force a flow to run again without re-installing it is to: …
We will make this a little easier with …
I missed @kinow 's comment on …
I'm still pretty hazy on the Vue.js, but LGTM, and it tests as working really well now. 👍
Here's my facetious output generating workflow with needlessly long message strings:
And here's how it looks in the UI: 👍 Thanks for the scrolling info box.
Will fix conflicts in a bit. I think you started the workflow with --hold, then used the aotf mutation to start it? A good way to test the UI, especially with workflows that have just 1 cycle.
…tputs in template, fix padding/CSS
…rn (Vue.set will define, and then the observer will capture when lodash sets the value)
Force-pushed 824825d to bac6970.
These changes close #402

- v-chip's with the message "labels"
- grey out a v-chip when its status (satisfied, received, etc) is not true
- limit the v-chip's to the 5 most recent (remember to check the backend model for a timestamp...)
- if there are more v-chip's, display an icon showing that we have more; when pressed, open job details

Requirements check-list

- I have read CONTRIBUTING.md and added my name as a Code Contributor.