Enhancement: Proper touch input support #1538
Just for a quick reference on SDL2 touch control support. SDL2 touch-related events: there are a couple of hints that control SDL2's behavior when it comes to simulating touch and mouse through each other. Some hints are missing from the SDL2 wiki, but a brief description of them may be found in the source code.
I guess SDL_HINT_TOUCH_MOUSE_EVENTS is the one that matters most for mobile devices. If AGS has its own proper touch API in script, then this hint should likely be disabled for games with such support, and enabled for games without it. When receiving mouse events you can distinguish a real mouse from an emulated one using the "event.button.which" parameter, which holds the mouse ID; for an emulated mouse the ID equals SDL_TOUCH_MOUSEID. SDL2's implementation of synthetic mouse events may be found in the following code (in the latest version).
Stranga pinged me again about this today; I mentioned that I believe this is something for a 3.6.1 release.
So in AGS currently the mouse can imply in
Trying to come up with a minimalistic approach:

```
builtin struct Pointer {
  /// Number of pointers, this is a fixed amount
  readonly import static attribute int Count; // $AUTOCOMPLETESTATICONLY$
  /// Takes pointer ID and returns where the pointer is in game screen, (-1,-1) if invalid
  readonly import static attribute Point* Position[]; // $AUTOCOMPLETESTATICONLY$
  /// Takes pointer ID and returns true if the pointer is pressed, the finger is on screen or left mouse button is down
  readonly import static attribute bool IsDown[]; // $AUTOCOMPLETESTATICONLY$
};
```

Here's how it works:
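A minimal usage sketch, assuming the proposed Pointer API above (gDraggable is an assumed GUI used only for illustration):

```
// hypothetical game code: iterate all pointers each frame and let any
// pressed pointer drag a GUI around (the last pressed pointer wins)
function repeatedly_execute()
{
  int i = 0;
  while (i < Pointer.Count) {
    if (Pointer.IsDown[i]) {
      Point* p = Pointer.Position[i];
      gDraggable.X = p.x;  // gDraggable is an assumption, not part of the proposal
      gDraggable.Y = p.y;
    }
    i++;
  }
}
```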
Here is its initial version: https://github.com/ericoporto/ags/tree/experimental-pointer-api
I'm concerned about the bare "Pointer" name; this term has many uses in programming. Are there other alternatives to this? If not, perhaps adding something to it may clarify the purpose. A quick example: "PointerDevice".
I agree it's a terrible name; it could also go with "TouchPoint", "Interaction" or "TouchInput". I would like to somehow have the mouse input be one of the things there, just to make it easier to iterate while testing game script code in the AGS Editor. I also thought about the API being like, say in
My first test of the thing: https://ericoporto.github.io/public_html/382d947/
It also looks like my screen position calculation is completely wrong.
I renamed to touch points (I haven't renamed the files yet, but will eventually):

```
managed struct TouchPoint {
  int ID, X, Y;
  bool IsDown;
};

builtin struct Touch {
  /// Number of pointers, this is a fixed amount
  readonly import static attribute int TouchPointCount; // $AUTOCOMPLETESTATICONLY$
  /// Takes pointer ID and returns where the pointer is in game screen, (-1,-1) if invalid
  readonly import static attribute TouchPoint* TouchPoint[]; // $AUTOCOMPLETESTATICONLY$
};
```

I still can't figure out the screen position calculation. Trying to pick up things from mousew32, because I would want the same position one gets from

Edit: testing on a few devices and it almost works, except it's not being clamped to the game borders.

Edit: ericoporto@7c1febc fixed it!
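For instance, a small sketch of how a game script might use this API (assuming the Touch / TouchPoint structs above; the helper name is made up):

```
// hypothetical helper: returns true if at least two touch points are down at once
bool AreTwoFingersDown()
{
  int down = 0;
  int i = 0;
  while (i < Touch.TouchPointCount) {
    TouchPoint* tp = Touch.TouchPoint[i];
    if (tp != null && tp.IsDown) {
      down++;
    }
    i++;
  }
  return (down >= 2);
}
```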
Hey, I would like to try to add multi-touch support to GUIs... but... I can't even figure out how they are clicked at all. There isn't anything for GUIs in Global Script; does it happen through some internals? I know buttons can be held when clicking, and the event that we normally use happens on release. With multi-touch, I would like to keep that working, and also to add some event that triggers continually while the button is held.

Ah, I think I need to implement some variant of this for multi-touch (Line 351 in 466acb5).
Right, GUIs are not handled in script at all; this is done purely by polling GUI state: see GUIMain::Poll, where it decides which controls are under the mouse, and which events to send (mouse over, pushed, unpushed, etc.).

EDIT: I did not think about this earlier, but I'd assume that the touches may be seen as extra "mouse" devices. So instead of checking a single mouse device, the engine should have a list of devices (or rather a list of per-device state, including coordinates and "button" state), and check all of them in a loop.

EDIT2: There's another thing: a mouse event such as on_mouse_click currently only includes the button as an argument, but that's wrong, it should also include at least the position saved at the time the event was registered. That's important on its own (as the mouse may be moving between updates), but even more important with multiple "devices" that may be pressed during the same update in different positions. In other words, it should be:
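A sketch of what such an extended signature might look like (the parameter names here are assumptions, not a settled API):

```
// hypothetical: the position captured when the event was queued
// is passed along with the button
function on_mouse_click(MouseButton button, int mx, int my)
{
  if (button == eMouseLeft) {
    Room.ProcessClick(mx, my, eModeWalkto);
  }
}
```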
or similar.
Uhm... I made my managed struct TouchPoint like this:

```
managed struct TouchPoint {
  int ID, X, Y;
  bool IsDown;
};
```

Perhaps if the

But yeah, the multiple mouse devices approach, even if only internally to AGS, could work well. Game Maker works in this manner. Because of the nature of AGS being a point-and-click engine, the mouse is integral to its behavior and affects a lot of things. We make a lot of global assumptions around having a single mouse device.
Yeah, I think so: the first touch on the GUI control marks it as being pressed down, and only once the last touch point on top of it is released is the GUI control released.
AH, that is the place. I think when polling it may not be necessary to tell the ID of the finger, but instead to tell whether it's in the same frame? Meaning that presses in the same frame could only affect the control state once. I still need to think a little more on this.

Edit: it looks like the ID of the finger would only be useful to skip processing a finger that hasn't moved. Also, looking more at the code, it looks like the highlighting (
Uhm, there is a behavior that makes sense for mouse but doesn't make sense for touch, which kinda tells me we want different polling for this. It's the click-and-hold that locks the button in the down state. In a touch environment this doesn't happen: the button gets released as soon as the finger is no longer on top of it, but with mice this is not expected. I think this signals that we would have two polling paths, one for mouse and the other for touch.

Going back to the script API:
cirrus-ci.com/build/5713709612400640
ericoporto@ags/experimental-touch-api
Different behavior is better done with either a flag that tells how the device should act, or virtual function(s) overridden in a device class. Having multiple polling loops will complicate code organization (and potentially there may be other differences found in the future).
Had to refresh my memory on this topic, so I re-read everything in this ticket, and also the linked blog post and related docs from a few other engines (Unity and Mozilla). I came out of this with the following questions:
After reading the Unity docs about their Touch struct, I guess I understand this concept of a Touch as a gesture performed by a finger, whose lifetime spans from the moment the finger's touch is first registered to the moment when it is unregistered, which (from what I understood) comes the next frame after being "unpressed". Since it's a continuous object that may exist for multiple game frames, there's an option of actually returning a persistent object in script. I can't tell whether Unity or other engines do that, but it's an alternative to creating a TouchPoint each time the user wants to check the same touch instance over a duration of multiple frames.

Then, the most modern Unity Input API has a distinction between fingers and touches:

In the end, it seems, we have 3 potential concepts:
We don't have to implement all of these, of course.

About mixing a mouse device in: previously there was a suggestion in this thread to replace the "IsDown" property with "MouseButton", but that would prevent having a stage property in TouchPoint. Another option that comes to mind is to use mouse button ids as "touch ids"; that would reserve the first few unique ids for the mouse buttons. But there could also be just a separate MouseButton property in a TouchPoint, which would be None for non-mouse "touches".

To summarize, we need the user to be able to achieve at least two things:
Now, the history may be recorded by a user in script, if we support pointer down/up/move callbacks. So having it actually present in the Touch struct is optional (we may live without it for starters). Which brings us to the remaining question of a touch state. Supposing we merge "pointer" and "gesture" and have a struct that has the meaning of "pointer touching state". In any case, such a "TouchPoint" struct should contain:
In both cases we may decide not to allocate this managed object each time a user requests it, but allocate it when it's first asked for.
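As an illustration of the earlier "separate MouseButton property" option, a sketch of what such a TouchPoint could look like (member names are assumptions; the current MouseButton enum has no "none" value, so one would need to be added):

```
// sketch only: a "touch point" that can also represent a mouse button press
managed struct TouchPoint {
  import readonly attribute int ID;             // sequential pointer/finger id
  import readonly attribute bool IsDown;
  import readonly attribute int X;
  import readonly attribute int Y;
  import readonly attribute MouseButton Button; // a "none" value for real touches
};
```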
The touch ID is different between platforms when looking directly at SDL, because it is transparent to the platforms; there is a layer in AGS that currently abstracts this. (Unique, but either always increments or uses the first available.)
It makes sense to be platform agnostic. But I am also concerned about a proper understanding of what the meaning of these IDs is. I looked into which IDs SDL provides, and there's something that confused me at first. It has "touchID" and "fingerID" (both SDL2 and SDL3). It appears that "touchID" is not really a "single touch id", but a "touch device id" (like a touchscreen, I suppose), while "fingerID" is actually not the index of the Nth finger touching, but an arbitrary id of a "touch action":
https://wiki.libsdl.org/SDL3/SDL_TouchID
https://wiki.libsdl.org/SDL3/SDL_FingerID

So it says that it may or may not match the index of a finger, so it cannot be relied on as something sequential.
Alright, so, revisiting the existing engine code again, where it converts from the SDL finger id to our finger id... Our AGS "finger id" is basically the Nth "finger" pressing; it's 0-based and sequential, and if a "finger" in the middle of a sequence is released, then there are gaps that are filled by newly pressed fingers. The same finger id could be used in the "touch point" struct, be an index in the array of touch points, and be passed as an argument into "pointer" callbacks, binding them all together. I suppose that this lets the TouchPoint work as a "touching finger state", telling what the Nth finger is doing until it's released.

Then, I'd suggest modifying the API in the draft PR to something closer to the examples shown in this issue thread (in past comments above), where the Touch struct has a static indexed property instead of returning a dynamic array. And also have the TouchPoint struct use properties instead of bare fields, as that will make it easier to maintain and expand when needed.

```
struct Touch {
  /// Number of pointers, may increase as more of them register in game, but never decreases
  readonly import static attribute int TouchPointCount; // $AUTOCOMPLETESTATICONLY$
  /// Takes pointer ID and returns the pointer state
  readonly import static attribute TouchPoint* TouchPoint[]; // $AUTOCOMPLETESTATICONLY$
}
```

But since there may be gaps, the good question is what TouchPointCount returns and what happens when the user requests a non-touching TouchPoint in the "gap".

Then the minimal TouchPoint could be like:

```
managed struct TouchPoint {
  import readonly attribute int PointerID; // or just ID, since it's its own id
  import readonly attribute bool IsActive; // ???
  import readonly attribute bool IsDown;
  import readonly attribute int X;
  import readonly attribute int Y;
};
```

I'm a bit conflicted on whether we need the "touch phase" thing as an enum, or whether it will be possible or useful.

EDIT: hmm, in such case I am not certain about the mentioned IsActive property either. Maybe it's better to leave it out for the time being.

The accompanying callbacks should be like:
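A sketch of what those callbacks could look like (the names and parameters here are assumptions, not a settled API):

```
// hypothetical global script callbacks, all keyed by the same finger/pointer id
function on_touch_down(int pointer_id, int x, int y)
{
  // a finger was pressed at screen position (x, y)
}

function on_touch_move(int pointer_id, int x, int y)
{
  // an already pressed finger moved to (x, y)
}

function on_touch_up(int pointer_id, int x, int y)
{
  // the finger was released
}
```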
What if we want to have the mouse processed by the same system?
AGS assumes that the player uses a mouse to play the game. Nowadays there is a myriad of devices that support touch input and are used quite frequently; notably, our mobile ports should be able to run on such devices. As developers mature they will need to be able to cater specifically to the constraints of such devices. The assumption of a mouse that can click makes some control schemes challenging to support on touch-first devices.
I propose we implement either touch specific events or pointer events (mixed touch and mouse) that could be used in the script API in multitouch control schemes.
Script currently has `on_mouse_click` but not `on_mouse_down`, `on_mouse_move` and `on_mouse_release`; it may be better to use the pointer concept and create `on_pointer_start`, `on_pointer_move` and `on_pointer_end` to cover mouse and touch on the same elements.

AGS currently has a limit of 5 events in the queue; a bigger queue may be needed, or some touched points may be missed in a frame.

Additionally, I would like to propose extra event bindings for the Button GUI control, so that it also has a touch down and touch release event (or pointer down, pointer release), and also a property that can be checked (IsTouched / IsPointed).
These are drawn from my observations of other engines, which I wrote about here: https://ericonotes.blogspot.com/2020/11/a-quick-look-at-touch-handling-apis-in.html
Such API additions would allow for multitouch in GUIs, which can be used to better handle mobile device usage and respond more quickly in the interface (hitting the screen with two fingers today will result in one finger being dismissed), and would also allow support for on-screen joysticks.
Forum topic
On Screen Joystick example
Using AGS GUIs, it should be possible to construct something like the example below. Right now it's not possible, since using two or more fingers at the same time is not supported.
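A rough sketch of how such an on-screen joystick could be scripted if the proposed pointer events existed (gJoystick and the callback names are hypothetical):

```
// state for a single on-screen joystick driven by the proposed pointer events
int joystick_pointer = -1;    // id of the pointer currently holding the joystick
int joystick_dx, joystick_dy; // offset of that pointer from the joystick's center

function on_pointer_start(int pointer_id, int x, int y)
{
  // claim the joystick for the first finger that lands on its GUI
  if (joystick_pointer == -1 && GUI.GetAtScreenXY(x, y) == gJoystick) {
    joystick_pointer = pointer_id;
  }
}

function on_pointer_move(int pointer_id, int x, int y)
{
  if (pointer_id == joystick_pointer) {
    joystick_dx = x - (gJoystick.X + gJoystick.Width / 2);
    joystick_dy = y - (gJoystick.Y + gJoystick.Height / 2);
  }
}

function on_pointer_end(int pointer_id, int x, int y)
{
  if (pointer_id == joystick_pointer) {
    // the controlling finger lifted: release the joystick
    joystick_pointer = -1;
    joystick_dx = 0;
    joystick_dy = 0;
  }
}
```

The point of the sketch is that other fingers remain free to press regular GUI buttons at the same time, which is exactly what single-touch emulation cannot do today.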
Note: https://youtu.be/B_IqYy4T_AA?si=xOASxzLCrI0F1lV8&t=916
In this talk, the Broken Sword dev talks about the recent mobile port, and it's interesting how their interface got adapted.