
Gui #39

Closed
Kethku opened this issue Jun 2, 2021 · 87 comments
Labels
A-core Area: Helix core improvements · C-enhancement Category: Improvements · E-hard Call for participation: Experience needed to fix: Hard / a lot

Comments

@Kethku
Contributor

Kethku commented Jun 2, 2021

I noticed that your website references creating a gui with skia/skulpin. That sounds very similar to my architecture for neovide which is a gui for neovim.

A not widely stated goal for neovide is to eventually expose the graphical parts as a swappable front end for other text editors such as helix. I was curious if you are interested in such a collaboration. Getting a gui right is hard work (as I've found out); I think collaborating would be great for both use cases!

Are you planning on exposing a gui protocol a la neovim's? If so, what would that look like, and what features unique to helix do you think would be useful? The editor looks very cool btw!

@archseer
Member

archseer commented Jun 2, 2021

Hey! The skulpin mention was specifically after I looked at neovide, it seemed like a good fit. @norcalli wanted to experiment on a new frontend as well.

I'm a bit apprehensive about exposing a GUI protocol: they're either too vague and the client needs to re-implement a lot of the handling, or they're very specific and every client ends up the same. I don't have a solid opinion yet, so we'll see. Right now the idea is that helix-view would provide most of the logic, and a frontend would just be another crate wrapping everything with a UI -- that needs some work right now though, because commands are currently sitting directly in helix-term.

@Kethku
Contributor Author

Kethku commented Jun 2, 2021

That makes good sense. One thing that's been on my list for a while is to split neovide into the neovim gui and a front-end renderer so that other apps could build their own rendering. Then you could integrate it as a crate without having to do all the painful font rendering and such. Let me know if something like that would be interesting for your use cases or not. In any case, if you end up going the skia route, I'm happy to help out if you run into problems. Awesome project!

@archseer added the C-enhancement Category: Improvements label Jun 3, 2021
@Kethku
Contributor Author

Kethku commented Jun 22, 2021

The more I think about it, the more I am interested in trying out something like this. You mentioned that commands are currently located in helix-term. Are you suggesting that they would need to be moved into helix-view in order for a gui crate to consume and produce them? Are you interested in a PR which attempts that kind of refactoring?

@archseer
Member

archseer commented Jun 23, 2021

So there are a few sets of changes necessary:

  • helix-view needs to be refactored to have its own generic key/style/color structs. Fix previous broken refactor key into helix-view #345 is a start but it depends on crossterm. Using generic structs would let us drop the crossterm dependency from helix-view.

  • helix-term's commands.rs needs to be moved into helix-view. The problem there is that the context depends on the compositor written for the TUI, and some commands directly manipulate it by pushing in new components.

    The solution here is to make the compositor a trait so that each frontend can implement its own. The new trait would have methods like cx.compositor.popup() so that we can manipulate UI without directly referencing the types. But I'm not sure how to handle cases like these

    let contents = ui::Markdown::new(contents, editor.syn_loader.clone());
    let mut popup = Popup::new(contents);
    compositor.push(Box::new(popup));

  • There's a bit of lsp code in term/src/application.rs that would be better off shared somewhere.

With these changes helix-view would end up holding most of the code, and -term would simply be a tiny frontend implementing the compositor & components.
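To make that concrete, here is a minimal sketch of what such a compositor trait could look like. This is purely illustrative: the names (Component, Compositor, Context) and signatures are assumptions, not the actual helix-view API.

    // Illustrative sketch only; these types and method names are hypothetical
    // and do not reflect the real helix-view/helix-term code.

    /// Anything a frontend knows how to draw (popup, picker, prompt, ...).
    pub trait Component {
        fn type_name(&self) -> &'static str;
    }

    /// Each frontend (TUI, GUI, ...) provides its own implementation.
    pub trait Compositor {
        /// Push an arbitrary component onto the layer stack.
        fn push(&mut self, component: Box<dyn Component>);
        /// Convenience helpers so commands never reference concrete UI types.
        fn popup(&mut self, markdown: String);
        fn prompt(&mut self, label: String);
    }

    /// Commands would only ever see the compositor through the trait object,
    /// so they could live in helix-view without depending on the TUI.
    pub struct Context<'a> {
        pub compositor: &'a mut dyn Compositor,
    }

    fn hover(cx: &mut Context<'_>, docs: String) {
        cx.compositor.popup(docs);
    }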

@Kethku
Contributor Author

Kethku commented Jun 25, 2021

I'm looking into step 2 now that step 1 has merged. Let me first restate what I understand you to mean about how we might move the commands out of helix-term. Then I will propose a slightly different strategy with some benefits and drawbacks.

Gui as a component drawer

The solution here is to make the compositor a trait so that each frontend can implement its own. The new trait would have methods like cx.compositor.popup() so that we can manipulate UI without directly referencing the types. But I'm not sure how to handle cases like these

So what I think you are suggesting is that each frontend would create a custom compositor type which implements the Compositor trait. This trait would have functions on it for each of the custom components that are implemented in the helix-term crate today. Your example is popup, but I'm guessing this would also require picker, prompt, spinner, menu etc.

Some pros for this approach:

  • Custom, more native UI can be implemented for each of these, effectively giving more control to the gui
  • The gui can implement surfaces however it likes; it doesn't have to put everything on a strict grid
  • This is a relatively small change assuming all of the components can be implemented as functions

Some negatives that come to mind:

  • Adding new components requires implementing them in every front end (probably just term and gui, but who knows maybe multiple front ends are created)
  • Guis could be very different in feel from each other. Would configs work the same in all of them? Neovim has this problem because often a config that works great in one gui is a complete mess in another
  • There's a relatively large implementation burden for the guis. Even though popup and prompt are basically just rendering text in a window of some sort, they would both have to be implemented in the front end

Gui as a render surface

If we look at Neovim's approach instead, it works by defining a basic set of rendering capabilities that the gui must provide and then implements components and such on top of those basic systems. At the simplest, a neovim gui renders a single grid of characters and all of the components and such are drawn by neovim on top of that. A quick and dirty gui can be built in a couple days if you target that system.

If however more gui extensions are implemented, the rendering becomes slightly more complicated. Rather than drawing a single grid, multiple grids can be defined and drawn to. This gives the gui some more freedom to affect the size and positioning of the rendered windows, but components are still done just using that basic framework. Popups are just smaller windows which render text into them. Floating terminals are implemented on floating windows with text just the same as a dialog box.

This also separates innovation in the editor from implementation in the gui. Components can be introduced without requiring them to be implemented in every front end.
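To make the shape of such a protocol concrete, here is a rough sketch of the kind of draw commands it might carry, loosely modeled on Neovim's UI events. Every name here is hypothetical; nothing like this exists in helix today.

    // Hypothetical sketch of a grid-based draw protocol, loosely modeled on
    // Neovim's UI events; none of these types exist in helix.

    pub type GridId = u32;

    pub struct Cell {
        pub text: String, // usually a single grapheme cluster
        pub style: u32,   // index into a shared style table
    }

    pub enum DrawCommand {
        /// Create or resize a grid (the main view, a popup, a picker, ...).
        GridResize { grid: GridId, width: u16, height: u16 },
        /// Position a grid relative to the base grid (or another grid).
        GridPosition { grid: GridId, anchor: GridId, row: u16, col: u16 },
        /// Write styled cells into one row of a grid.
        GridLine { grid: GridId, row: u16, col_start: u16, cells: Vec<Cell> },
        /// Move the cursor within a grid.
        CursorGoto { grid: GridId, row: u16, col: u16 },
        /// Remove a grid (e.g. a popup was dismissed).
        GridDestroy { grid: GridId },
    }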

Some benefits:

  • Guis are pretty simple. Basically they are just a rendering system which components can depend on to do their drawing
  • Guis don't need to be updated to add new components
  • Many guis can be created which benefit from upstream updates and changes. Can start to build an ecosystem if there is desire to

Some drawbacks:

  • Guis can only innovate so far as the interface allows. It would require some creativity to let guis do special rendering for a given type of component
    -- maybe this can be fixed by giving each rendered window a "purpose" flag which lets the gui special case a certain component
  • Pretty significantly different approach to the current architecture. Would maybe require some intermediate steps
  • Likely locks guis into a grid based interface. I personally feel this is nice because it simplifies design and such, but I can understand why others would prefer more flexibility.

Thoughts

This has been a bit ramble-y. I hope my point was at least somewhat understandable. I'm interested in your thoughts.

@pickfire
Contributor

pickfire commented Jun 26, 2021

@Kethku If anyone were to take on the GUI, you are probably the one most familiar with it since you built neovide. My thought is that we may want it as a render surface but with custom components. It is useful in doom emacs where you can draw things that are not available in the terminal.

[screenshot: doom emacs gutter with colored diff indicators]

As you can see on the left: green for new lines, yellow for modified lines. You can also see a single red dash for removed lines. I don't think we can easily render these in a terminal.

Before getting into a GUI: like with neovide, one issue I see is that users may not be able to access the terminal, which will break some users' workflows. I usually edit and then ctrl-z to run some shell commands. If we only do a GUI for helix, what if users want to run commands? Do we want to provide some sort of terminal rendering?

There is also another possibility: getting neovide to have a helix backend? I'm not sure how hard that is, but from the current viewpoint helix does not have anything for it. Neovide currently seems to only draw the same thing as the terminal; in some places it would be good to have ways of drawing that are not possible in a terminal.

@Kethku
Contributor Author

Kethku commented Jun 26, 2021

Before getting into a GUI: like with neovide, one issue I see is that users may not be able to access the terminal, which will break some users' workflows. I usually edit and then ctrl-z to run some shell commands. If we only do a GUI for helix, what if users want to run commands? Do we want to provide some sort of terminal rendering?

To be honest, a good terminal emulator inside of helix sounds wonderful. I personally rarely dip into a terminal emulator outside of the one contained within neovim. But I think that's a large can of worms that doesn't need to happen now. What I'm interested in helping with is just making sure the design of helix is conducive to gui front ends in the long run, even if it's not ready yet.

There is also another possibility: getting neovide to have a helix backend? I'm not sure how hard that is, but from the current viewpoint helix does not have anything for it. Neovide currently seems to only draw the same thing as the terminal; in some places it would be good to have ways of drawing that are not possible in a terminal.

I'm definitely thinking about ways to make more full fledged guis using Neovide's rendering system. What I would LOVE is to split Neovide in half so that the rendering part is just a crate which provides channels for sending draw commands and receiving input/events from the window. Then the renderer would handle everything else. Integrating that crate with an editor backend would just require writing some glue.

Neovide isn't there yet because there are lots of neovim specific-isms that would need to be moved around, but I think it would be valuable to do because features built for one editor backend would likely be useful for others.

To be honest though, it's kind of a pipe dream at the moment. What I was thinking in the short run is to just recreate something like neovide's rendering system for helix specifically in order to get off the ground, and then think about unifying the two later if there is interest. That way I don't impose further constraints on this project that don't really benefit it long term.

@archseer
Member

As you can see on the left: green for new lines, yellow for modified lines. You can also see a single red dash for removed lines. I don't think we can easily render these in a terminal.

This is doable using box drawing characters (I imagine emacs is using the same thing). https://github.com/archseer/snowflake/blob/e616d7cb24d85cf6b17b77c3e0cb1ef6c4212414/profiles/develop/neovim/init.vim#L294-L299
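For reference, the trick is just to use narrow block glyphs as gutter signs and color them per diff status; something along these lines (characters chosen purely for illustration, not taken from the linked config):

    // Illustrative only: narrow left-aligned block characters make thin
    // gutter bars, and the frontend colors them per diff status.
    const SIGN_ADDED: char = '▎';    // U+258E LEFT ONE QUARTER BLOCK, in green
    const SIGN_MODIFIED: char = '▎'; // same glyph, in yellow
    const SIGN_REMOVED: char = '▔';  // U+2594 UPPER ONE EIGHTH BLOCK, in red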

@archseer
Member

archseer commented Jun 26, 2021

If we look at Neovim's approach instead, it works by defining a basic set of rendering capabilities that the gui must provide and then implements components and such on top of those basic systems. At the simplest, a neovim gui renders a single grid of characters and all of the components and such are drawn by neovim on top of that. A quick and dirty gui can be built in a couple days if you target that system.

I was a bit undecided between the two approaches. While the neovim approach works, it leads to most frontends looking very similar. (For example, while I'm aware of Neovide and looked at its source, I haven't actually used it yet because I didn't have enough reasons to switch.)

A character grid is a good start but is limiting if we want to experiment with inline decorations (see the CodeMirror 6 examples). These might be variable sized or won't fit perfectly in the character grid. Another example would be rust-analyzer's clickable code action links that are embedded into the view.

We're also limited by the terminal in that it's hard to render 0 width multiple selections (cursors) so selections are always 1-width or more (see #362 for more context).

I'm not even sure if the component drawer approach is flexible enough, since we might want to use completely different component types on different frontends. I guess we could allow for frontends to provide their own sets of commands that would augment the defaults with different UI. Maybe it's okay for the frontends to share the same core/view but end up being slightly different (similar to scintilla, or codemirror embedding)

I've been keeping the built-in component list short: https://github.com/helix-editor/helix/tree/master/helix-term/src/ui

Feel free to join us on Matrix to discuss further! (https://matrix.to/#/#helix-community:matrix.org, make sure to join #helix-editor:matrix.org if you can't see it on the list)


Before getting into GUI, like neovide I see an issue is that users may not be able to access terminal. Which will break some users workflow. I usually do edit and then ctrl-z to run some shell commands. If we only do a GUI for helix, what if users want to run commands? Do we want to provide like sort of rendering for terminal?

We won't only be doing a GUI; helix-term is still going to be the primary implementation. (Well, one of two, I'd like to have the skulpin implementation as "officially supported" too.) Down the road we'd want to embed a terminal, but that's quite a bit of work.

@archseer
Member

I'd like to get @cessen's opinion here as well

@Kethku
Contributor Author

Kethku commented Jun 26, 2021

I was a bit undecided between the two approaches. While the neovim approach works, it leads to most frontends looking very similar. (For example, while I'm aware of Neovide and looked at its source, I haven't actually used it yet because I didn't have enough reasons to switch.)

This is a very reasonable critique of Neovim's approach. To me the difference between the two depends almost entirely on whether you want to maintain the canonical gui for helix or whether you want there to be many different guis which use helix as the backend. If you imagine the main helix gui is basically the only one people will use, then I agree it makes a lot of sense to have component-aware logic in the gui because it can tweak the experience to feel just right. There would still be reimplementation of logic between the terminal and gui front ends, but hopefully that wouldn't be too bad.

However if you want to enable many different front ends, I think that strategy falls apart a bit because any time you want to tweak the behavior of a given component, you have to convince all the front ends to also update their implementations. It's harder to innovate like that.

I'm definitely biased given that neovim is what I've spent the most time with, but I think the vision of a gui api which can be implemented very simply as just a grid, but then augmented with optional extensions if the gui developer wants to customize a given type of behavior, is beautiful. It means that if somebody wants to make a gui which is just simple tweaks off the terminal experience, they can pretty easily. And if they want to make something that completely reworks the user experience, that's possible too with a little more effort.

A middle ground

If you are interested in building an ecosystem where multiple groups can implement guis if they want to, then it seems to me that a great way to do this would be to implement the grid-based approach. Basically move all the component logic into helix-view, and turn helix-term into a super thin rendering surface: basically just cursor moves and text on a grid. Then each of the components could be implemented as extensions optionally. If the gui wants to customize the dialog box, there's a trait they can implement for their gui which takes over drawing of dialog boxes completely, and a different trait for taking over markdown rendering, and so on.

This solution would be very similar to neovim's extension model where guis can reimplement the tabline or message rendering if they want to, but the difference is that helix's implementation would be built with components as the basis from the start. Rather than hacking in extensions for features editors would like, helix has the benefit of starting from scratch and thinking about components as first-class extension points.
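A rough sketch of that "optional extension" shape, with every name made up for illustration: a frontend that implements nothing still gets the grid-rendered default, and one that wants native widgets overrides just the pieces it cares about.

    // Hypothetical sketch of optional per-component overrides; none of these
    // types exist in helix. A frontend with no overrides falls back to plain
    // grid rendering provided by the core.

    pub struct Rect { pub row: u16, pub col: u16, pub width: u16, pub height: u16 }

    /// Optional extension: take over drawing of popups entirely.
    pub trait PopupRenderer {
        fn draw_popup(&mut self, anchor: Rect, markdown: &str);
    }

    pub struct Frontend {
        popup_renderer: Option<Box<dyn PopupRenderer>>,
    }

    impl Frontend {
        pub fn show_popup(&mut self, anchor: Rect, markdown: &str) {
            match self.popup_renderer.as_mut() {
                Some(custom) => custom.draw_popup(anchor, markdown),
                // No override registered: the core draws the popup onto the
                // character grid, just like the terminal frontend would.
                None => { /* grid-based fallback */ }
            }
        }
    }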


All that said, these are just some quick thoughts. I'm happy to help out whichever way you end up going.

@CBenoit pinned this issue Jun 26, 2021
@CBenoit
Member

CBenoit commented Jun 26, 2021

I'll pin this issue because it seems to me it could use more visibility. We currently pin two issues out of the maximum of three, so I went ahead.

@cessen
Contributor

cessen commented Jun 28, 2021

A character grid is a good start but is limiting if we want to experiment with inline decorations (see the CodeMirror 6 examples). These might be variable sized or won't fit perfectly in the character grid. Another example would be rust-analyzer's clickable code action links that are embedded into the view.

I think for the document text itself, we'll want to stick with a character grid.

The short justification is: it allows the editor back-end to control text layout. The architecture I'm imagining is basically: the back-end is given the grid dimensions, it computes where to place each element within that grid and passes that information to the front-end, and then the front-end renders the described grid (wherever and however it wants). This keeps the flow of data really simple, and especially avoids the front-end needing to send text layout information back to the back-end for things like cursor movement commands (...which also applies to off-screen text thanks to multiple cursors).

In other words, it lets us keep all of the core commands in Helix independent of the front-end. Conversely, if text layout is computed by the front-end (which I think would be more-or-less required if we forego a grid), then the back-end will have to query the front-end when executing any commands that involve text positioning.

So the grid would essentially act as a text layout API between the front-end and back-end, letting us keep as much code as possible compartmentalized in the back-end. (And as a bonus, that also avoids having to rewrite the layout code for each front-end.)
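A small sketch of that data flow, with all names invented for illustration: the front-end reports a grid size, the back-end returns a fully laid-out grid, and the front-end only decides how to rasterize it.

    // Hypothetical sketch of the one-way layout flow described above; these
    // types are illustrative and not part of helix.

    pub struct LaidOutCell {
        pub grapheme: String,
        pub style: u32,      // index into a shared style table
        pub is_cursor: bool, // multiple cursors are just multiple flagged cells
    }

    pub struct LaidOutView {
        pub width: u16,
        pub height: u16,
        /// Row-major, width * height cells, already positioned by the back-end.
        pub cells: Vec<LaidOutCell>,
    }

    pub trait Backend {
        /// The back-end owns all layout decisions (wrapping, inline elements,
        /// cursor placement), given only the grid dimensions.
        fn layout(&mut self, width: u16, height: u16) -> LaidOutView;
    }

    pub trait Frontend {
        /// The front-end decides how cells look (fonts, pixels, fancy widgets),
        /// but never where document content goes.
        fn render(&mut self, view: &LaidOutView);
    }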

but is limiting if we want to experiment with inline decorations

I don't think it's quite as limiting as you might think. The back-end would control where things are placed on the grid, but the front-end decides how to render them. So, for example, an inline element could be rendered as a fancy, nice-looking button by the GUI front-end, as long as it fits in the grid area allocated to it by the back-end.

And the front end could provide a list of "size factors" for the various kinds of inline elements, so that the back-end can ensure to allocate enough grid spaces for each kind of element.

We're also limited by the terminal in that it's hard to render 0 width multiple selections (cursors) so selections are always 1-width or more (see #362 for more context).

I have some ideas about this, that I think will be easier to demonstrate with a prototype once I've completed work on #362. Although I realize that might not be especially convincing right now, ha ha.

@cessen
Contributor

cessen commented Jun 28, 2021

If we look at Neovim's approach instead, it works by defining a basic set of rendering capabilities that the gui must provide and then implements components and such on top of those basic systems. At the simplest, a neovim gui renders a single grid of characters and all of the components and such are drawn by neovim on top of that. A quick and dirty gui can be built in a couple days if you target that system.

After re-reading this, I realized I might want to clarify: I'm only talking about individual documents, not the entire GUI layout, when I'm discussing the character grid. As far as I'm concerned, the front-end can render everything else however it wants. I'm not totally sure how we would present an API for the remaining commands (e.g. file finder, etc.), but I'm not too worried about us being able to work it out.

@valignatev

valignatev commented Jun 28, 2021

As you can see on the left: green for new lines, yellow for modified lines. You can also see a single red dash for removed lines. I don't think we can easily render these in a terminal.

This is doable using box drawing characters (I imagine emacs is using the same thing). https://github.com/archseer/snowflake/blob/e616d7cb24d85cf6b17b77c3e0cb1ef6c4212414/profiles/develop/neovim/init.vim#L294-L299

FWIW, emacs can actually draw bitmaps on its fringe and in modeline, it's not restricted to whatever unicode supports: https://www.gnu.org/software/emacs/manual/html_node/elisp/Fringe-Bitmaps.html

Of course, it only works in GUI mode. I'm really interested in a proper gui for the editor as well, but I don't have anything particularly productive to add to the discussion; mostly subscribing to the thread with this emacs remark :)

Of course it would be cool to have a gui that supports custom shaders and all that crazy stuff to fully utilize its hardware acceleration capabilities (if we take neovide as a mental base, I mean)

@Kethku
Contributor Author

Kethku commented Jun 28, 2021

If we look at Neovim's approach instead, it works by defining a basic set of rendering capabilities that the gui must provide and then implements components and such on top of those basic systems. At the simplest, a neovim gui renders a single grid of characters and all of the components and such are drawn by neovim on top of that. A quick and dirty gui can be built in a couple days if you target that system.

After re-reading this, I realized I might want to clarify: I'm only talking about individual documents, not the entire GUI layout, when I'm discussing the character grid. As far as I'm concerned, the front-end can render everything else however it wants. I'm not totally sure how we would present an API for the remaining commands (e.g. file finder, etc.), but I'm not too worried about us being able to work it out.

Just wanting to elaborate a bit. I'm not super attached to one option or another, but including in the drawing api some concept of window/control layout has two main advantages in my mind:

  1. The terminal and gui act more similarly. If the window positions are defined in terms of a base grid/position in other windows for things like popups, then people won't need as much adjustment if they want to try the gui.
  2. It allows for progressive implementation of a gui. This may be a non-goal, but I think it's valuable that Neovim lets you create a functional gui in just a couple days by just implementing the basic grid rendering. Then a given front end could progressively add features by taking over more and more of the process. I think that's a pretty cool and efficient aspect of their gui extension model.

But I do think it's a decision to be made and pushes more of the logic into the editor core than may be preferable. And it does limit what a gui can customize somewhat, so I could see value in both options.

@dsseng
Contributor

dsseng commented Jul 26, 2021

To get started with designing the client-server model and splitting out the UI, it might be convenient to make a simple web-based GUI for the browser as a temporary GUI, without harder-to-do stuff like GPU rendering. Then a proper GUI can be made based on well-developed and debugged concepts, but natively implemented using wgpu or higher-level toolkits like iced and kas.

Overall, this seems really dependent on #312. The GUI should work over the network easily, while the backing server keeps track of changes, language features, etc. Plugins might also be available both for the server and the client, depending on their actions. While working locally without inviting others for pair coding, the backend might communicate with the GUI via stdio.
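As a rough illustration of what that split might exchange over stdio or a socket (all of this is hypothetical; it assumes the serde and serde_json crates and is not tied to whatever #312 ends up proposing):

    // Hypothetical sketch of newline-delimited JSON messages between a helix
    // backend and a detached frontend; assumes serde + serde_json, and none of
    // these message types exist in helix today.
    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize)]
    #[serde(tag = "type", rename_all = "snake_case")]
    pub enum ClientMessage {
        /// Input forwarded from the frontend (keys already encoded as text).
        Input { keys: String },
        /// The frontend's grid was resized.
        Resize { width: u16, height: u16 },
    }

    #[derive(Serialize, Deserialize)]
    #[serde(tag = "type", rename_all = "snake_case")]
    pub enum ServerMessage {
        /// One row of redraw data for the frontend.
        GridLine { row: u16, col: u16, text: String, style: u32 },
        /// Status / message area updates.
        Status { text: String },
    }

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Locally the backend could simply write one JSON message per line to
        // stdout and read ClientMessage lines from stdin.
        let msg = ServerMessage::Status { text: "backend ready".into() };
        println!("{}", serde_json::to_string(&msg)?);
        Ok(())
    }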

@gbaranski
Contributor

gbaranski commented Jul 26, 2021

helix-term's commands.rs needs to be moved into helix-view. The problem there is that the context depends on the compositor written for the TUI, and some commands directly manipulate it by pushing in new components.
The solution here is to make the compositor a trait so that each frontend can implement its own. The new trait would have methods like cx.compositor.popup() so that we can manipulate UI without directly referencing the types. But I'm not sure how to handle cases like these

let contents = ui::Markdown::new(contents, editor.syn_loader.clone());
let mut popup = Popup::new(contents);
compositor.push(Box::new(popup));

This will solve #507; I'm currently working on it.

@gabydd
Member

gabydd commented Mar 15, 2024

I have worked a bit on gpui (mostly Linux text rendering and input) and I don't think it's a great fit for helix. Mainly, it's quite a large dependency and does a lot of things that we don't need; for helix we mostly just need to render text and lines and get input events, and everything else we can probably handle ourselves. The next big thing is that gpui isn't compatible with Tokio: they use smol and a custom executor for tasks, which probably wouldn't mesh well with the event system. I don't have a definitive opinion about what we should use, and it may well be possible to use gpui with helix, but it's going to take some time to get the internals in place to add another rendering backend, and by that time we can make a decision that fits well within helix's architecture and doesn't lock us into a specific model.

@7ombie
Contributor

7ombie commented Mar 15, 2024

Writing a pixel shader that simply renders a grid of monospaced characters (like a terminal emulator) is pretty trivial (though rendering the glyphs can get more involved, if you want to be as efficient and correct as possible), and it'd be faster (by doing less) than using GPUI. It's also easy to swap out one (platform-specific) renderer for another, as you're only passing the state (which character is in each cell, what colors to use etc) to the GPU.

Assuming the Helix frontend retained its character/tile-based graphics, I can't see any advantage in a UI library.
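For a sense of how little state that involves, here is a rough sketch of the per-cell data such a renderer might upload each frame (all names hypothetical; the shader itself is omitted):

    // Hypothetical sketch: the CPU side only fills a flat buffer of cells; a
    // grid shader then looks up each glyph in an atlas and tints the quad.

    #[repr(C)]
    #[derive(Clone, Copy)]
    pub struct GpuCell {
        pub glyph_index: u32, // index into a prebuilt glyph atlas
        pub fg: [f32; 4],     // foreground color, RGBA
        pub bg: [f32; 4],     // background color, RGBA
    }

    pub struct GridBuffer {
        pub cols: u32,
        pub rows: u32,
        pub cells: Vec<GpuCell>, // rows * cols, row-major; uploaded as one buffer
    }

    impl GridBuffer {
        pub fn new(cols: u32, rows: u32) -> Self {
            let blank = GpuCell {
                glyph_index: 0,
                fg: [1.0, 1.0, 1.0, 1.0],
                bg: [0.0, 0.0, 0.0, 1.0],
            };
            GridBuffer { cols, rows, cells: vec![blank; (cols * rows) as usize] }
        }
    }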

@pascalkuthe
Member

I may be interested in looking into non-grid layout rendering; for example, for side-by-side diffing, non-grid rendering can unlock some nice UI improvements. This would not directly affect the text rendering, which would stay grid-based; just one column (or row) may be filled with non-grid-aligned stuff.

I was mostly interested in using something low-level like vello (as opposed to something high level like xilem).

That said, all this is really far away right now, as helix needs large architectural changes before writing a GUI is even feasible / a good idea.

@wmstack
Contributor

wmstack commented Mar 15, 2024

Why not incorporate the Helix keymap into Zed? Zed is just about to overtake Helix in stars, and there seems to be some interest from people over on the other side. I think that Zed is too good a candidate to pass up: it uses tree-sitter for syntax highlighting and is written in Rust.

zed-industries/zed#4642

@sambonbonne

@wmstack Correct me if I'm wrong, but Helix seems more "terminal-first" and Zed is GUI-only (if a GUI is added to Helix, I hope it will stay optional; it seems that will be the case). Plus, Zed is macOS-only for now, and not everyone uses Apple devices.

@tqwewe

tqwewe commented Mar 16, 2024

I just want to clarify that Zed can be built on Linux, it's just not super stable yet and they don't provide binaries for it on their downloads page.
But I personally would absolutely love to use Zed with Helix keybindings; it would be a really awesome collaboration.

@pascalkuthe
Member

Building a modal editor is a really large project on its own. The entire editor needs to be built around the editing model. It's not a simple set of keybindings.

Zed and helix are individual projects with orthogonal goals.

@DoctorRyner

DoctorRyner commented Mar 16, 2024

@wmstack helix isn't about keymaps, at least for me; I remapped it to be similar to vim's (lol), for example movement doesn't do selection by default for me. It's just that helix is what vim should have been: fast, powerful, and simple.

@DoctorRyner

DoctorRyner commented Mar 16, 2024

I'm not that thrilled about a gui; I'm more waiting for plugins, because right now I have to keep my own fork to add features from some abandoned merge requests like the file tree, heh

@the-mikedavis unpinned this issue Mar 17, 2024
@7ombie
Contributor

7ombie commented Mar 17, 2024

My bindings are heavily customized as well. I just like how Helix works.

@noahfraiture

I'm not that thrilled about a gui; I'm more waiting for plugins, because right now I have to keep my own fork to add features from some abandoned merge requests like the file tree, heh

Did you merge the Steel plugin manager to get that?

@mabasic

mabasic commented Jul 31, 2024

After using Helix for a very long time, I really don't need a GUI at all. It is perfect just as it is.

@ayanamists

After using Helix for a very long time, I really don't need a GUI at all. It is perfect just as it is.

With a GUI you could preview pdf in helix, rendering images, ...

@MrHaroldA

MrHaroldA commented Sep 19, 2024

I just want to clarify that Zed can be built on Linux, it's just not super stable yet and they don't provide binaries for it on their downloads page.

They do provide binaries now, and it's pretty awesome. It's not there yet, but very fast and usable.

@7ombie
Contributor

7ombie commented Sep 20, 2024

After using Helix for a very long time, I really don't need a GUI at all. It is perfect just as it is.

Until you accidentally scroll.

@Rudxain
Contributor

Rudxain commented Sep 21, 2024

accidentally scroll

WDYM?

@brunbjerg

brunbjerg commented Sep 22, 2024

After using Helix for a very long time, I really don't need a GUI at all. It is perfect just as it is.

With a GUI you could preview pdf in helix, rendering images, ...

Why not use a different tool for that? Part of the UNIX philosophy is that one tool should do one thing extremely well.

Combining Helix with Zathura, Evince or MuPDF (Windows) means that you do not need this functionality in Helix.

If you keep adding code to an open-source project like Helix to make it do additional things, won't you start to degrade it at some point?

@Rudxain
Contributor

Rudxain commented Sep 22, 2024

Anyone correct me if I'm wrong, but I assume it would be easier to implement draggable overlays in a GUI than in a TUI. It would be nice if users could move overlays (dialogs, popups, modals, hover hints, etc.) anywhere other than their hard-coded positions.

Another benefit of GUIs is that it's harder to mess up the visible boundaries of an overlay. As a big-font ("high-zoom") user, I've always suffered from overlays blending with the buffer viewer, making it extremely hard to read (chars that are supposed to be in the background get "concatenated" with the popup).

@yvt
Contributor

yvt commented Sep 23, 2024

Another argument for GUI: Terminal-based apps have no direct control over input method editors (IMEs), which makes working on non-English (Japanese particularly) texts rather tedious because you have to constantly disable and enable IMEs manually to prevent them from intercepting the inputted characters and starting a composition mode or translating them to foreign characters that are not accepted as commands.

A GUI can avoid this problem and improve the UX for non-English languages by automatically toggling IMEs or receiving raw keystrokes outside of insert mode.

@ayanamists

After using Helix for a very long time, I really don't need a GUI at all. It is perfect just as it is.

With a GUI you could preview pdf in helix, rendering images, ...

Why not use a different tool for that? Part of the UNIX philosophy is that one tool should do one thing extremely well.

Combining Helix with Zathura, Evince or MuPDF (Windows) means that you do not need this functionality in Helix.

If you keep adding code to an open-source project like Helix to make it do additional things, won't you start to degrade it at some point?

If you're writing a paper in LaTeX and using Emacs, you can "automatically" restore a workspace layout where the LaTeX code is on the left and the PDF preview is on the right, with just a custom function. Another point is that when I press spc m v (the TeX-view command) in Emacs, the PDF automatically navigates to the section compiled from the cursor position in the editor, which is extremely useful when dealing with large documents. To achieve this functionality, it seems the PDF needs to be displayed within the editor.

As for some of your other considerations, I think they indeed make sense.

@hoichi

hoichi commented Sep 24, 2024

Another point is that when I press spc m v (the TeX-view command) in Emacs, the PDF automatically navigates to the section compiled from the cursor position in the editor, which is extremely useful when dealing with large documents.

Can this be handled by LSP somehow? E.g., send the view command to LSP and handle it somewhere else?

I mean, Emacs is highly modular, but it tends to have all the modularity within Emacs itself, including window management. Helix, being just an editor, should be more “externally composable”, Linux-style, and better suited for scenarios where windows and their content are handled by terminal emulators/multiplexers, or WMs.

That said, I think nobody stops anybody from writing an IDE-style GUI client using helix as a server. I'm not sure that hypothetical client, even if it ever gains enough momentum, should be the concern of Helix's maintainers, because IDEs have a totally different philosophy from Helix.

@brunbjerg

After using Helix for a very long time, I really don't need a GUI at all. It is perfect just as it is.

With a GUI you could preview pdf in helix, rendering images, ...

Why not use a different tool for that? Part of the UNIX philosophy is that one tool should do one thing extremely well.
Combining Helix with Zathura, Evince or MuPDF (Windows) means that you do not need this functionality in Helix.
If you keep adding code to an open-source project like Helix to make it do additional things, won't you start to degrade it at some point?

If you're writing a paper in LaTeX and using Emacs, you can "automatically" restore a workspace layout where the LaTeX code is on the left and the PDF preview is on the right, with just a custom function. Another point is that when I press spc m v (the TeX-view command) in Emacs, the PDF automatically navigates to the section compiled from the cursor position in the editor, which is extremely useful when dealing with large documents. To achieve this functionality, it seems the PDF needs to be displayed within the editor.

As for some of your other considerations, I think they indeed make sense.

Good point! I do write a lot in LaTeX.

I use "entr" with "tectonic" and "zathura" to get the job done, which is unwieldy, especially for large files, as you said yourself.

@7ombie
Contributor

7ombie commented Sep 25, 2024

I'm not sure what the scope is anymore. I thought this issue was weighing the pros and cons of reimplementing the current TUI, using shaders (instead of a terminal emulator) to render the text. The point was to make everything more solid, efficient and responsive, with some scope for graphical elements within the text-based UI (as a bonus).

The conversation is now about building a full, graphical text-editor with PDF viewers and Markdown renderers that uses Helix as a backend.

@DoctorRyner

@7ombie well, anything will do at the start. But it should be extensible enough to implement those requests.

@7ombie
Contributor

7ombie commented Sep 25, 2024

@DoctorRyner - Why though?

If I were taking a crack at this, I'd implement the renderer for a terminal emulator, and use it to directly render the Helix state, just dropping the pretense of communicating with an old terminal over a serial connection. I'd obviously consider Helix-specific requirements, like rendering many cursors, anchors and selections in various colors, but would leave it there.

That implementation would permit extensions, like rendering any texture with dimensions that are multiples of the character-cell dimensions over a rectangular block of characters (with alpha-blending). With that, you could render a PDF document or HTML page somewhere else, then pass the resulting texture to Helix to view it in the editor, but the output would not be interactive (it's just a bitmap), and it would be inline (so would scroll with the text it is blitted over).

That implementation would not permit arbitrarily dividing the screen into graphical panels (with fancy tabs and drop shadows) that can be sized in single-pixel increments, nor rendering interactive graphics inside the panels.

I'm suggesting a custom shader (maybe a hundred lines of code per platform). You're talking about a full graphical user interface. I don't see the former as a viable starting point for implementing the latter.

@Rudxain
Contributor

Rudxain commented Sep 26, 2024

Speaking of scope, should this Issue be turned into a Discussion? Or (at least) have an associated Discussion, and repurpose this Issue as a tracker?

@aretrace

aretrace commented Sep 26, 2024

should this Issue be turned into a Discussion

@Rudxain yes

@helix-editor locked and limited conversation to collaborators Sep 26, 2024
@the-mikedavis converted this issue into discussion #11783 Sep 26, 2024

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
