How to place a button with an icon/image #774
You're right - the late binding is unrelated to the question you've asked; late binding is a feature of how the underlying image is constructed.

The short answer to your question is that there isn't a feature to do that yet. The longer answer is that the right answer depends on what you're trying to do. Adding an image to a button, and responding to clicks on an image, are two very different use cases. In the latter, the missing piece is a click handler on the image. This should be reasonably straightforward to add; it's a matter of working out how to capture the platform-native click event. In the former, you're probably going to need a new widget.

However, before we add an "image button" or similar, we need to understand the use case. Toga has a very specific design philosophy when it comes to widgets. Any proposal for a new widget needs to follow that philosophy, and I'm not completely convinced that an "image button" would. I'm not saying that an image button is "wrong" - just that there may be a higher level concept that we need to capture.
For me, the use cases are the same, regardless of how it is implemented. Look at most mobile apps (and desktop too). They tend to have pictures/icons to click on to navigate or perform some action. e.g. an email app will have a rubbish bin (trash can) or "X" to delete a message. A lot of apps have header or footer areas with icons. e.g. the Facebook app has a home icon (picture of a house), friends icon (two people), video icon, alarm bell icon, and "menu" icon at the bottom of the screen.

I want to click on an "icon" to perform some action (navigate to another page, or do something else). It's a fairly fundamental GUI action. I can't imagine any of the platform-specific GUI frameworks not supporting clickable graphical widgets. To me they are buttons (as the prime purpose of a button is to click on it to do something). These buttons are just rendered as a bitmap image instead of text. That's the only difference. An image that is clickable effectively does the same thing. Generally a button should obviously look like a button (something that is "clickable"). It usually has a border, but these days that's not always the case.

At the end of the day I'm not too fussed which path is taken (button with image, or image that is clickable). Whatever fits best with the Toga design strategy and makes common sense :)
The iOS documentation shows that buttons support "text" and/or "images": https://developer.apple.com/documentation/uikit/uibutton

wxPython supports images and text too. You can specify different images for when the button is disabled or enabled (pressed, focussed, and mouse over).
The catch is - you've just described several different modes of interaction.

Your example of a "delete" icon in the window header area is a toolbar. Toga already has those; see Tutorial 2 for an example. From Toga's perspective, you don't define a toolbar - you define a command that your application can expose, and declare that you'd like that command exposed. This manifests as a toolbar, where appropriate, using a platform-appropriate style (which might not include an icon).

An area in the footer with graphical items is something you wouldn't generally see in a desktop app; but in a mobile app, that would be how you switch between different "tabs" of content. That would be an OptionContainer - they're not currently implemented for mobile, but they do exist on desktop. However, on desktop platforms, the "tabs" are usually at the top. Again, Tutorial 2 has an example of an OptionContainer.

There are probably some other use cases you could think of, too (for example, selecting from a list/grid of photos). My point is that they're only "buttons" if you think of them in terms of very low level primitives. From the perspective of the interface being shown to the user, they're filling completely different roles - exposing commands; exposing navigation between UI contexts; exposing selection from a list of graphical items. Toga is trying to capture the high level roles, not the primitives.

And yes - I'm aware that ImageButton is a common widget for a toolkit to expose. That doesn't necessarily mean Toga is going to have that widget, though. They will be necessary primitives to encode the use cases that Toga captures - but again, from Toga's perspective, we want to capture high level ideas, not primitives. I'm also not fundamentally ruling out the idea of an image button. I just need to be convinced that an "image button" encompasses a high level idea that needs to be captured.
I understand the high level abstraction goals - however, if I want a big button (with an image) in the middle of my app, for whatever reason, and it does not fit some high level abstraction of a form, I don't see why I would not be allowed to do that. i.e. Toga should be flexible enough not to constrain my GUI design/layout.

If all I want is one big button with an image, how would/should I implement that with Toga? It's either a single image in a box, that is clickable, or a single button in a box (with the button able to render an image). I don't think either of those concepts is platform specific or low level. Toga has the concept of images and buttons already.

Most GUI toolkits allow the designer to build their own higher level widgets from a set of widget primitives. Toga's hierarchical box layout architecture fits that model. One of the primitive widgets should be an image that is clickable, or a button that can render an image (and/or text).
I was navigating through my banking app (Commonwealth Bank of Australia) on my iPhone. One of the views has a vertical list of "boxes" that contain a graphic (that looks like a button, but isn't), some text and other stuff. A click anywhere in the "box" causes it to navigate to another page/view.

One potential solution might be to have a callback associated with Toga boxes. That way anything can be clickable - a single image widget in a box, multiple widgets in a box, etc. Any sub-boxes or sub-widgets that are clickable (e.g. a button) would have higher priority. If the callback needs to continue up the box hierarchy then the user must specify that - either by setting an attribute, or by coding it in the callback (not my preferred option). You'd probably want to only detect a click within the padding region (again, this could be user-set via a bool attribute).

Is this something that fits the Toga design philosophy? Do other platform-specific GUI libraries offer a similar feature? That would make it easier to "wrap", but it isn't a hard requirement as long as clicks can be caught and the innermost clickable area can be determined.
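The priority-and-bubbling behaviour described above can be sketched in a few lines of plain Python. Everything here is hypothetical - `Clickable`, `dispatch_press` and the `bubble` attribute are invented names, not Toga APIs - but it shows the proposed dispatch rule: the innermost widget under the click gets first refusal, and the press only propagates outward if no inner handler consumes it.

```python
# Hypothetical sketch of the "callback on a box" proposal; not Toga API.
class Clickable:
    def __init__(self, name, children=(), on_press=None, bubble=False):
        self.name = name
        self.children = list(children)
        self.on_press = on_press
        self.bubble = bubble  # if True, keep propagating even after handling

def dispatch_press(path, log):
    """path: the chain of widgets under the click, outermost first."""
    for widget in reversed(path):
        if widget.on_press is not None:
            widget.on_press(widget, log)
            if not widget.bubble:
                return  # consumed: stop bubbling up the hierarchy

# A single image in a box, with the handler on the box - the
# "click anywhere in the box" behaviour described above.
image = Clickable("image")
box = Clickable("box", children=[image],
                on_press=lambda w, log: log.append(w.name))
log = []
dispatch_press([box, image], log)  # image has no handler; the box fires
```

A widget with its own handler (e.g. a button inside the box) would sit later in `path` and win automatically, which matches the "clickable sub-widgets have higher priority" rule.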
So I went back and looked at Tutorial 2 and the use of "Commands". I must say I don't fully understand the concept and why it is a good abstraction. The Group thing is also not obvious to me as to what it does. The online documentation for Group doesn't even have a description.

In Tutorial 2, "commands" are created and then these commands are attached to a "toolbar". I assume that commands have more generic uses than just toolbars? Maybe they can be used for menus and other things? Commands almost look like a button API. They have a callback, text label, image, tooltip and key shortcut. Can a command be placed in a box and rendered as a widget?
Existence of apps in the wild that use a UX metaphor is a good way to make the case for adding a feature to Toga (in some manifestation, at least) - however, there's the minor caveat that the app examples you provide need to be native apps, following the platform's style guide. If you're able to provide screenshots of examples in the wild, that will help immensely. The CBA app you've described sounds like it's using images as a navigation aid, which is either (a) a top level use case, or (b) a violation of platform style guides, depending on the platform - I'd need more details to be sure. A lot of apps out there are built to have "platform agnostic" UIs - which means looking and feeling native isn't a design goal. That's the exact opposite of what Toga is trying to achieve.

As for the "high level widgets" vs "low level widgets" discussion - I agree that sort of composability would be useful, but it may be difficult to achieve (while retaining good performance and Toga's higher level API goals) because of the ways platforms compose their widgets. In the short term, we do have composable low level widgets - they just exist at the platform level, rather than the platform-agnostic level.

I can see what you're aiming at with adding interaction handlers to Box; my question would be "what does the handler do?". The answer to that question is what really guides the widget discussion, because that is what captures the UX interaction that is going on.

Lastly - commands. Firstly, yes, I know they need more documentation. The key takeaway, though: you define the command once, and then the command appears in both the menu and the toolbar. The high level concept of "there is a thing that the user can do" is wrapped; that concept may have an icon, and a keyboard shortcut - but it can manifest in the UX in multiple ways, and have multiple UX triggers. If the command is disabled, it is disabled everywhere - you don't have to track whether the toolbar button and the menu item are in a disabled state. Groups are used to collect commands together; these groups then guide which menu they appear in, what spacing appears in the toolbar, and so on.
Seems like Command has a bit of cross-pollination. If a Command is something that can be fired from multiple UX triggers (you've mentioned Toolbar and Menu), then isn't specifying a label and image too restrictive? Shouldn't the command contain only non-GUI-specific stuff? Each UI element that can trigger a command should reference that command. e.g. if I wanted a few buttons, and/or a few images, and/or a few other UI widgets (primitive or composed) to all trigger the one command, that can't be done with the current Command design (as it only has attributes for one image and one piece of text).

https://toga.readthedocs.io/en/latest/tutorial/tutorial-2.html

Looking at the Tutorial 2 code, I cannot work out how the "commands" are rendered in their end locations (i.e. bee and brutus to the left next to each other, cricket ball also to the left but with a gap from brutus, and then finally another bee right-justified on the toolbar). NOTE: on my Mac, Action 4 (brutus) is not right-justified - it is to the right of the cricket ball. Why is the order of the listed commands different between

app.commands.add(cmd1, cmd3, cmd4, cmd0)
app.main_window.toolbar.add(cmd1, cmd2, cmd3, cmd4)

All the above might be good Toga concepts and GUI abstractions, but they are not immediately obvious to me. e.g. why a command needs to specify GUI items at all (or just text and image), why a "Group" or "Command Group" has anything to do with menus, etc. It seems that all Toga apps must have the same constraints. If any Toga app needs a Command, then it will have a menu item for it (whether you want it or not).
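The "command should contain only non-GUI-specific stuff" suggestion above can be sketched as plain Python. This is a hypothetical alternative design, not Toga's actual API: `Command` and `Trigger` are invented names, the command holds only the behaviour and shared enabled state, and each UI element that fires it carries its own presentation.

```python
# Hypothetical decoupled design; not Toga's actual Command API.
class Command:
    """Holds only behaviour and shared state - no label, no icon."""
    def __init__(self, action):
        self.action = action
        self.enabled = True
        self._triggers = []

    def set_enabled(self, value):
        # Disabling the command disables every UI element bound to it.
        self.enabled = value
        for trigger in self._triggers:
            trigger.enabled = value

class Trigger:
    """Any UI element that fires a command; presentation lives here."""
    def __init__(self, command, label=None, icon=None):
        self.command, self.label, self.icon = command, label, icon
        self.enabled = command.enabled
        command._triggers.append(self)

    def press(self):
        if self.enabled:
            self.command.action()

calls = []
save = Command(lambda: calls.append("save"))
toolbar_button = Trigger(save, icon="save.png")  # icon-only trigger
menu_item = Trigger(save, label="Save")          # text-only trigger
toolbar_button.press()   # runs the shared action
save.set_enabled(False)
menu_item.press()        # no effect: the shared command is disabled
```

Under this split, any number of buttons, images or composed widgets could reference the one command, which is exactly the case the current one-label/one-image design can't express.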
Interestingly, Kivy has mixin classes for behaviours. e.g. they have a mixin class for button behaviour, which can be applied to an image. Sounds like a good abstraction. Does/can Toga do something similar?

https://kivy.org/doc/stable/api-kivy.uix.behaviors.button.html

"The ButtonBehavior mixin class provides Button behavior. You can combine this class with other widgets, such as an Image, to provide alternative buttons that preserve Kivy button behavior."

https://kivy.org/doc/stable/api-kivy.uix.behaviors.html#module-kivy.uix.behaviors

"This module implements behaviors that can be mixed in with existing base widgets. The idea behind these classes is to encapsulate properties and events associated with certain types of widgets. Isolating these properties and events in a mixin class allows you to define your own implementation for standard kivy widgets that can act as drop-in replacements. This means you can re-style and re-define widgets as desired without breaking compatibility: as long as they implement the behaviors correctly, they can simply replace the standard widgets."
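For readers unfamiliar with the pattern, the mixin idea reduces to cooperative multiple inheritance. This is a toy illustration in the spirit of Kivy's `ButtonBehavior` - the classes here (`Widget`, `Image`, `ButtonBehavior`, `ImageButton`) are stand-ins written for this sketch, not Kivy's or Toga's real classes:

```python
# Toy illustration of the behaviour-mixin pattern; not real Kivy/Toga code.
class Widget:
    def __init__(self, **kwargs):
        self.props = kwargs  # stand-in for real widget configuration

class Image(Widget):
    """A plain image widget with no interaction of its own."""

class ButtonBehavior:
    """Behaviour mixin: anything it is mixed into gains press handling."""
    def __init__(self, on_press=None, **kwargs):
        super().__init__(**kwargs)  # cooperative init along the MRO
        self.on_press = on_press

    def press(self):
        if self.on_press is not None:
            self.on_press(self)

class ImageButton(ButtonBehavior, Image):
    """An image that behaves like a button, purely by composition."""

pressed = []
button = ImageButton(on_press=lambda w: pressed.append("clicked"),
                     source="icon.png")
button.press()
```

The appeal is that one behaviour class turns any widget into a "button" without the toolkit having to ship an `ImageButton`, a `LabelButton`, and so on as separate widgets.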
I started to write up (and then accidentally lost) a long response about the underlying UX patterns in the examples you provided - but on further reflection, I think you may have revealed something deeper that I need to incorporate into Toga's high level API vision. There are effectively three conceptual levels interacting here:

1. The native widgets that each platform exposes.
2. A cross-platform abstraction that wraps those native widgets behind a single widget API.
3. A description of the use case - the high level interaction the app wants to expose - independent of any particular widget.
In some cases (like Button) this will be a relatively simple wrapper; for other widgets, there's a need to use multiple widgets to construct a rich widget meeting the abstract concept. This also gives us an opportunity to expose "useful" APIs, rather than the raw primitives. For example, there's almost no use case for an editable multiline text view that can't scroll - so we can embed the concept of scrolling into Toga's definition of a multiline text widget, even though the macOS native text edit widget doesn't include scrolling.
Most widget toolkits - even cross platform ones - don't really try to address (3). My observation (and the thing that I'm trying to fix) has been that this results in applications that have appalling UIs - menu items that appear in completely alien places, Windows UX metaphors appearing in macOS apps, or GUIs that have been "designed" to within an inch of their life by someone who can draw a great visual concept, but has never considered the UX of what they're building. That's what I've been trying to fight against, and I've been doing so by trying to restrict Toga to level (3) APIs that are implemented using (1) directly. This is an extension of the idea that if you make the right thing the easy thing, then people will only do the right thing.

However, in trying to build a coherent argument for my position on ImageButton, I'm starting to think that I might not be able to avoid (2) - and that the APIs I've exposed for some widgets don't actually meet my self-imposed API discipline, anyway. For example, level (3) probably shouldn't have a SplitContainer widget. If a UX has a split container, it usually indicates you've got a context selection mechanism that enables you to select something, and shows details in the other panel. On desktop platforms, it makes sense that this is a split panel with a tree/list on the left, and a detail panel on the right (switching those directions if you're in an RTL language like Hebrew or Arabic?); but on mobile, you need to do something completely different. Split panels don't work at all on phones, and don't work well in portrait mode on tablets. But there are plenty of apps (e.g., email apps) that have "select from list, display detail" as a UX metaphor, usually using some form of nested navigation mechanism.

In the extreme version of (3), even the concept of layouts is unnecessary.
You shouldn't be placing individual text inputs and other widgets on a page - you should be describing the information you want to collect, and the application can produce an appropriate form layout that adheres to the platform's style guide (e.g. should input labels be left or right aligned? Should they appear above the form input or beside it? Or as placeholder text?). However, while we might be able to gather a large and highly useful collection of level (3) widgets, there will always be a use case where the user really does need something highly bespoke and custom - but still cross platform.

There's another useful interpretation here that helps with some of the "mobile doesn't have that kind of widget" problem:

- Level 1: APIs unique to a specific platform (e.g., macOS or Windows).
- Level 2: APIs common between platforms of a common type (e.g., all desktop platforms have the same API, but desktop and mobile won't necessarily be the same).
- Level 3: The same API on all platforms (i.e., both desktop and mobile have the same API), because you're capturing use cases. The concept of "gathering data to fill out a form" is universal, even though it might have a completely different rendition. If you go all the way down this rabbit hole, you could even think of a Siri/Alexa/Google Home rendition of a form.

What does this mean for ImageButton? Well, it means that yes - there probably is a need for an ImageButton at level 2. My question at this point would be whether it's an image or an icon. Most of the examples you've presented would be icons, rather than images (the distinction being that images are arbitrarily sized; icons are square, usually small, usually with a substantially transparent background, and on many platforms required to be bitmaps, not images).
I'd argue that if you want a truly arbitrary "image" as your button, what you probably want is to respond to a click on the image surface, not a button (which would also be a reasonable addition to the ImageView widget). There's also the question of how to handle image/icon buttons that also have text. Most of the examples you provided from the CommBank app have text under the icon; should that be part of the button API? (I'd argue it probably should be, as an optional feature of the widget.)

Does that make any more sense as a statement of architectural intent/API design?

Lastly - regarding the mixin idea: the mixin approach probably makes sense from Kivy's perspective, but I suspect it's going to be less helpful for Toga. Kivy isn't constrained by level 1 at all - they invent all their widgets from whole cloth. This means Kivy doesn't appear as a native API on any platform - which gives them a lot of freedom to reinvent the API concepts they expose. I'm less convinced it would be helpful when you've got level 1 in the way.
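The "describe the information you want to collect" idea above can be made concrete with a small sketch. Everything here is invented for illustration - `Field`, `render_form` and the platform/label conventions are assumptions, not any real Toga API - but it shows the level (3) shape: the app declares *what* it collects, and a per-platform backend decides *how* it is laid out.

```python
# Hypothetical level-3 form declaration; Field/render_form are invented names.
from dataclasses import dataclass

@dataclass
class Field:
    name: str
    kind: str = "text"      # e.g. "text", "email", "date"
    required: bool = True

def render_form(fields, platform):
    # A real backend would emit native widgets; this just emits a textual
    # description following a (made-up) per-platform label convention.
    label_style = {"desktop": "right-aligned", "mobile": "placeholder"}[platform]
    return [f"{field.name} ({field.kind}, {label_style} label)"
            for field in fields]

form = [Field("Name"), Field("Email", kind="email")]
desktop = render_form(form, "desktop")   # labels beside the inputs
mobile = render_form(form, "mobile")     # placeholder text inside the inputs
```

The same declaration yields a different rendition per platform, which is the point: the style-guide decisions live in the backend, not in the app.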
Toga must eventually use platform-specific native widgets (level 1) to realise the goal of native look-and-feel on the target platforms. Level 2 widgets are just a way of abstracting those to a single API (so users only write once and the implementation code takes care of calling the underlying native APIs). Level 3 is more a definition/declaration of what the user wants to see, or the information that they want to get, maybe with some hints on how that should occur (it could even be a list of alternatives that the underlying code can choose from, depending on what the target platform supports?).

Providing only the desired content (forms) could make the app look really boring on all platforms. The bespoke aspect could allow the UI designer to jazz it up (apps need to sell themselves too) - with the potential risk of not adhering to platform guidelines and producing potentially ugly apps. The question for level 3 is: do you take the least common denominator, or create options for any feature on any platform (and render it if it is supported)? I'd argue the least common denominator is not going to create great apps (only simple/mediocre ones).

For the common cases the user should only define what is needed. These are mandatory arguments in the API (or specification language). Extra options/settings/specifications can be applied in various ways (keyword options to constructors, and/or an "options()" method or methods). In my mind, having separate methods/specifications makes it a little clearer that you are specifying something "extra". If something needs to be done at construction time, then optional keyword options also work (e.g.

To be honest, I see most of the pressable areas on the CommBank app images as being either:

To me, (c) is the most generic and scalable. Users can set up a box with an image (and/or text) and have an event handler if a "press" occurs within that box. You can have an API helper that encapsulates common use cases (aka a simple ImageButton, etc).

How you specify all that (in code, description language, etc) is another question.
On a semi-related note, it would be really nice to not have the user provide callbacks for certain defined use cases, e.g. navigation between pages/screens/views on press of a button. Assuming one can define a set of views/screens in some kind of navigation hierarchy, then user presses don't need to be caught and acted upon programmatically by user code. The "inbuilt" handler could just automatically navigate to the appropriate view. There would need to be some way to map events (e.g. key presses, timer expiry) to the navigation functions/events. The user should still be able to catch the event if they want to do something special or extra, then call the default handler to do the normal action (e.g. navigate). Could be implemented by calling a method of a base class perhaps, or alternatively a pre-action callable/hook? This is probably a separate feature request (see #785).
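The built-in-navigation-with-hooks idea above can be sketched in plain Python. `NavigationStack` and `pre_hook` are hypothetical names invented for this sketch (not Toga APIs, and not the design proposed in #785): the default handler navigates automatically, and a user hook can run first and veto the default action.

```python
# Hypothetical sketch of built-in navigation with a pre-action hook.
class NavigationStack:
    def __init__(self, root):
        self.stack = [root]

    def navigate(self, view, pre_hook=None):
        # The user may intercept; returning False cancels the default action.
        if pre_hook is not None and pre_hook(view) is False:
            return
        self.stack.append(view)  # the "inbuilt" default behaviour

    def back(self):
        if len(self.stack) > 1:
            self.stack.pop()

nav = NavigationStack("home")
nav.navigate("settings")                          # default behaviour runs
nav.navigate("admin", pre_hook=lambda v: False)   # hook vetoes navigation
nav.back()                                        # back to "home"
```

The key design point is the ordering: the hook fires before the default, so "do something extra, then let the normal action happen" is just a hook that returns `None`.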
Another couple of screenshots from a golf handicap tracking app (called "Handy Cap"). The first is the main list of users (golfers) whom I am tracking. I can click anywhere on a list item and it will go to the next view (a detailed list of rounds for the selected golfer). There is a three-vertical-dot icon/image/button on the RHS which pops up a dialog selection list to "Edit player" or "Delete player". The second and third screenshots are the details of a golfer's round. One is a list with no items expanded. The other is a list with two items expanded. Clicking anywhere on a list item will expand/collapse the item. I'd like to be able to specify/program a Toga app to perform the same type of functionality.
This discussion seems to have moved on a bit since the last time I looked, but going back to the source issue (a button with an image or icon): for macOS there is this: https://developer.apple.com/design/human-interface-guidelines/macos/buttons/image-buttons/
Any news on this issue? How can I implement a simple clickable button that contains an image? My use case: an app for children who might not (yet) be able to read.
@arne-cl I have no specific news to report. This isn't an especially high priority for me personally; but if someone wants to work on a PR, even if it only supports a limited number of platforms, I'll happily look at it. If someone is motivated to work on this, the design that is likely to be accepted is
How does one instantiate a button with an icon/image? Or a clickable image?
The docs don't provide much info to me. There is talk about late binding and factories, but I'm not sure if that is relevant or how to use that info :-/