[Suggestion] Using a web client instead of the SDL app. #313
Comments
Sounds great. I hope this project will remain free; I've tested similar projects with paid subscriptions, like Vysor, and was still unsatisfied. I've been looking for something that can fully control a device wirelessly, with or without a third-party app on Android, and that will stay free. |
@dondiscaya0531 Yeah, what you really want is a bit more intricate than just deploying an app; see the example of TeamViewer. They have a main application and then an add-on app for each mobile vendor. Those "add-on" apps are signed with the platform key (the key used to sign the firmware of the ROM), so they get privileged access. This means Sony, Samsung, etc. would have to sign your add-on through some kind of agreement. It's that, or being root, or having the platform keys for a specific ROM flashed on the device. Obviously you could think about using the signed TeamViewer add-on, calling its service and binding to its AIDL interface, but it checks the public key of the caller app to prevent tampering. Imagine going through that ask/review/sign/publish process with that many firmware vendors; that's why there's no good remote control outside adb connectivity on Android. |
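For illustration, a hedged sketch of what such a caller check might look like on the privileged side: a bound service comparing the signature of the calling app against an expected one (the class name and the way `expected` is obtained are assumptions; the `PackageManager`/`Binder` calls are standard, if dated, Android API):

```java
import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.content.pm.Signature;
import android.os.Binder;

// Hypothetical guard, called from inside the privileged service's Binder methods.
final class CallerCheck {
    static boolean callerIsTrusted(Context ctx, Signature expected) {
        PackageManager pm = ctx.getPackageManager();
        // Binder.getCallingUid() identifies the app bound to the AIDL interface.
        String[] pkgs = pm.getPackagesForUid(Binder.getCallingUid());
        if (pkgs == null) return false;
        for (String pkg : pkgs) {
            try {
                PackageInfo info =
                        pm.getPackageInfo(pkg, PackageManager.GET_SIGNATURES);
                for (Signature sig : info.signatures) {
                    if (sig.equals(expected)) return true; // caller signed as expected
                }
            } catch (PackageManager.NameNotFoundException ignored) {
                // Package vanished between lookup and query; treat as untrusted.
            }
        }
        return false;
    }
}
```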
Thank you for sharing these details, it seems cool 👍 IIUC, you use a web client, but still need to be connected via adb. One major issue I see is security: this exposes to anyone, via a socket, features that require adb-level privileges. |
@rom1v Well, as I said, the implementation is POC grade; I just wanted to test web rendering with Broadway.js, so I have no problem listening on my LAN for control commands. In my specific case I don't need adb, because I'll use my app signed with the platform key of our custom ROM, so it will run with system privileges anyway. Apart from that, I'm wondering if I can spend some of my nonexistent free time doing a fork with Electron as the client. Maybe it deserves the try, as I'll have to do it anyway for my project. |
Please forgive the off-topic, but in the course of my tests I've come to a non-root remote control that actually works remotely. Android AccessibilityServices can actually inject input, not by inserting input events, but by accessing all the nodes of the focused window and propagating an event (click, edit_text...) on a specific view. I find the view for given x and y coordinates by locating the element whose rect is behind that position. Apart from that, the service connects to online websockets that I use as C&C. It also has video via MediaProjection (which asks for permission on each boot); I didn't put the video in the screencap because it's confusing. Here's a cap with it. So, right there is a non-root remote control solution, if you don't mind high latency. Probably good enough for software support and remote configuration. Btw, as my tests go on, it's still usable with the screen off (wtf). It seems accessibility services give a wide range of options to get views, like uiautomator, and interact with them; they also provide some handy events for window focus changes, and injection of global events like back, home, menu, screenshot, show notification bar, reboot dialog... |
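A minimal sketch of the coordinate-to-node lookup described above, assuming the standard `AccessibilityService`/`AccessibilityNodeInfo` API (class and method names other than the framework ones are made up):

```java
import android.accessibilityservice.AccessibilityService;
import android.graphics.Rect;
import android.view.accessibility.AccessibilityEvent;
import android.view.accessibility.AccessibilityNodeInfo;

// Hypothetical service: find the deepest clickable node whose screen bounds
// contain (x, y) and send it a click action.
public class RemoteControlService extends AccessibilityService {

    boolean clickAt(int x, int y) {
        AccessibilityNodeInfo root = getRootInActiveWindow();
        if (root == null) return false;
        AccessibilityNodeInfo target = findNodeAt(root, x, y);
        return target != null
                && target.performAction(AccessibilityNodeInfo.ACTION_CLICK);
    }

    private AccessibilityNodeInfo findNodeAt(AccessibilityNodeInfo node,
                                             int x, int y) {
        Rect bounds = new Rect();
        node.getBoundsInScreen(bounds);
        if (!bounds.contains(x, y)) return null;
        // Prefer the deepest matching child, as described above.
        for (int i = 0; i < node.getChildCount(); i++) {
            AccessibilityNodeInfo child = node.getChild(i);
            if (child != null) {
                AccessibilityNodeInfo match = findNodeAt(child, x, y);
                if (match != null) return match;
            }
        }
        return node.isClickable() ? node : null;
    }

    @Override public void onAccessibilityEvent(AccessibilityEvent event) { }
    @Override public void onInterrupt() { }
}
```

Global events like back/home would go through `performGlobalAction(GLOBAL_ACTION_BACK)` and friends.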
Do the edit text events have the same limitations as in the InputManager? (see #37)
What is the main cause of the high latency?
You mean the screen is off on the device, but is "on" on your computer? |
This isn't raw input: you have to look for an editable view at the desired position, iterating from the root view and excluding views by bounds and by not being editable. Once you find a TextBox or similar, you can perform an action on it, such as setting its text.
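(The original message is cut off here; the natural candidate for that last step is the `ACTION_SET_TEXT` accessibility action, API 21+, sketched below. Because it replaces the whole field at once rather than injecting per-character key events, it sidesteps some of the `InputManager` limitations discussed in #37, though it only works on editable nodes.)

```java
import android.os.Bundle;
import android.view.accessibility.AccessibilityNodeInfo;

final class NodeText {
    // Hypothetical helper: set the text of an already-located editable node.
    static boolean setText(AccessibilityNodeInfo editable, String text) {
        Bundle args = new Bundle();
        args.putCharSequence(
                AccessibilityNodeInfo.ACTION_ARGUMENT_SET_TEXT_CHARSEQUENCE, text);
        return editable.performAction(AccessibilityNodeInfo.ACTION_SET_TEXT, args);
    }
}
```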
The same as the last issue: views are constantly changing, moving and disappearing. Each interaction requires getting the root view of the focused window and iterating to find the affected view, so it takes some time. Apart from that, the feel of the control is very different from raw input, since you are restricted to interactions on the current root view. Of course I think it can be properly coded; in my case I needed to consume the minimum bandwidth possible for remote support, but other users, like game players, will find this option totally useless. As my tests progress, my app is working through NAT: the service connects to a websocket server (always connected while internet is accessible) and waits for control commands. I've tuned MediaProjection to just take static caps that are resized, and those caps are only requested on redraw events. Another thing I didn't know about MediaProjection: if you check "don't ask again" on the cast permission popup, the choice persists across reboots, which is nice. The only setup needed for this app is: grant the camera permission, enable the accessibility service, and allow MediaProjection capture with "don't ask again" checked.
This still needs tons of enhancements, but it's the only possible non-root, truly remote control over the internet that should work on Android 5+. |
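A hedged sketch of the "static caps" idea mentioned above: instead of a continuous encoder, grab a single frame with `MediaProjection` plus `ImageReader` (the resize and transport encoding are omitted; `projection`, `width`, `height` and `dpi` are assumed to come from the permission flow and display metrics):

```java
import android.graphics.PixelFormat;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.Image;
import android.media.ImageReader;
import android.media.projection.MediaProjection;

final class StaticCap {
    // Grab one RGBA frame of the mirrored display.
    static Image captureOnce(MediaProjection projection,
                             int width, int height, int dpi) {
        ImageReader reader = ImageReader.newInstance(
                width, height, PixelFormat.RGBA_8888, 2);
        VirtualDisplay display = projection.createVirtualDisplay("staticcap",
                width, height, dpi,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                reader.getSurface(), null, null);
        // A real implementation would wait for ImageReader's
        // OnImageAvailableListener; reading immediately may return null.
        Image image = reader.acquireLatestImage();
        display.release();
        return image;
    }
}
```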
@rom1v I've been thinking about latency since you asked; I polished the view finder a bit, messed with the MediaCodec encoder hell, and yeah, the latency is gone.
I just broadcast the raw h264 NALU packets, feed them to the client browser through a websocket channel, and render them with Broadway.js; I also draw boxes on the client for accessible views, and broadcast events between both clients. This encoder is quite similar to the one used in scrcpy, but I used the async mode of MediaCodec, with getOutputBuffer/releaseOutputBuffer in the onOutputBufferAvailable callback, instead of continuously dequeuing the output buffer. Anyway, the produced buffer is the same, so the rendering should be totally compatible either way, whether the source is MediaProjection or a display mirrored via SurfaceControl; only the first one doesn't need rooting. I'll probably wrap up a repo of it, or even offer it as a service... But for sure I will make a fork of scrcpy with Electron; it deserves a better client (don't get me wrong, the current client is awesome, but hard to edit) where less experienced users can add more tools, like uiautomator, recordings, macros... Combining Electron with adbkit (https://github.com/openstf/adbkit) seems the perfect choice to recreate the client. |
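For reference, a minimal sketch of the async MediaCodec pattern described here, using the standard `MediaCodec.Callback` API (the `PacketSink` interface is a made-up stand-in for the websocket channel):

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import java.nio.ByteBuffer;

// Hypothetical sink for encoded NAL units (e.g. a websocket channel).
interface PacketSink { void send(byte[] data); }

public final class AsyncEncoder {
    public static void start(MediaCodec codec, MediaFormat format,
                             final PacketSink sink) {
        // Async mode: the callback must be set before configure().
        codec.setCallback(new MediaCodec.Callback() {
            @Override
            public void onInputBufferAvailable(MediaCodec mc, int index) {
                // Unused: frames come from an input Surface (MediaProjection
                // or a mirrored display), not from input buffers.
            }

            @Override
            public void onOutputBufferAvailable(MediaCodec mc, int index,
                                                MediaCodec.BufferInfo info) {
                ByteBuffer buffer = mc.getOutputBuffer(index);
                if (buffer != null) {
                    // Copy out of the codec-owned buffer before releasing it;
                    // getOutputBuffer() already positions the buffer at the data.
                    byte[] packet = new byte[info.size];
                    buffer.get(packet);
                    sink.send(packet); // raw h264 NALUs, renderable by Broadway.js
                }
                mc.releaseOutputBuffer(index, false);
            }

            @Override
            public void onError(MediaCodec mc, MediaCodec.CodecException e) {
                // Report/restart in a real implementation.
            }

            @Override
            public void onOutputFormatChanged(MediaCodec mc, MediaFormat f) {
                // SPS/PPS arrive here (csd buffers) or as a CODEC_CONFIG packet.
            }
        });
        codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        // Then: Surface s = codec.createInputSurface(); codec.start();
    }
}
```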
OK, thank you for the details.
What are the benefits? I'm interested because I hesitated between synchronous and asynchronous here.
Will it still push a server to the device via adb?
Just for info, there will be recording #292 😉 (it already "works", but it needs more work to be properly integrated). |
Hey @rom1v, as for the sync/async encoder integration, I found async a simpler approach, but either way the data is obtained the same; it's just that I'm a total noob at Android/Java and I don't know how to properly manage threads and thread-safe data exchange. Btw, I didn't make a copy of the buffer data like the IO.writeFully implementation in scrcpy does; do you think I still need to? Apart from that, I've found async very convenient and easy to manage, since the service where this stuff lives is an AccessibilityService and I don't control its execution, as it is part of the Android core API. I'd be glad to hear your thoughts about using sync or async.
As the scrcpy server codebase needs at least the shell user to work, it has to be launched via adb, so I'll keep the server as is and just work on the client: reimplement the adb stuff with the adbkit lib, and reimplement drawing and event broadcasting. Btw, the scrcpy server could be simplified to the extreme if you piped the video through stdout, called it through adb exec-out, and sent input through the monkey server (protocol here: https://github.com/aosp-mirror/platform_development/blob/master/cmds/monkey/README.NETWORK.txt). Afaik, reflection is being used in this project due to the latency of input commands, but this server is mostly immediate.
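A small sketch of driving that monkey network protocol from the host (a sketch under assumptions: the device runs `monkey --port 1080` and the port has been forwarded with `adb forward tcp:1080 tcp:1080`; the commands themselves are from README.NETWORK.txt):

```java
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;

public final class MonkeyClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("127.0.0.1", 1080);
             Writer out = new OutputStreamWriter(
                     socket.getOutputStream(), "UTF-8")) {
            out.write("tap 540 960\n");        // single tap at (540, 960)
            out.write("type hello\n");         // text input
            out.write("press KEYCODE_BACK\n"); // key press by keycode name
            out.write("quit\n");               // close the session
            out.flush();
        }
    }
}
```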
Nice job. Didn't you think about saving the framebuffer so you can later pick some part of it? This kind of work is why I'd prefer an Electron client: such features are way easier to integrate, the user could access them easily through the UI, and adding controls and windows wouldn't be a pain... I'd like to have a package explorer, file explorer, command shell... you know. If for some reason you prefer to keep the C+SDL client, it could use a scripting engine like Lua which, provided with enough API, would allow adding plugins. As for my project, I'm testing it on real devices without root, and I'm quite satisfied with the result: https://puu.sh/BWO7i/5fd1e19141.mp4 |
How did you write the data to the socket?
So why do you need MediaProjection or AccessibilityServices?
My first PoC did this, but it has several drawbacks compared to a socket.
That's true, SDL has no widgets, and currently scrcpy outputs many things to the console which would be better on the screen (but we could print text using SDL). However, I would be very surprised if Electron/JavaScript had the same performance (framerate, latency, start time, CPU, memory…). Alternatively, it could also be written in Qt. |
Just read a byte[] from the ByteBuffer and send it to the websocket channel. I read in the comments that some devices would need a shallow copy of the data.
Because they don't need adb. I can use it over wireless/4G without any special requirement apart from enabling the accessibility service and permanently allowing screen recording.
It's good you mention it, because it helps me understand why I was getting bad video with this method in my first PoCs too.
I can assure you the rendering performance is top notch, even though I'm using the worst implementation of Broadway.js (no worker thread, no WebGL, just decoding bitmaps to a canvas on the main thread). If you want to test my app, send me a message; you could access it from the internet. |
Is it open source? Could you publish it on github or somewhere? |
Well, I started it on Saturday... so it's quite immature to publish yet; I was talking about sending you a private link to test one of my devices. At the moment I'm working on the protocol to save all the bandwidth I can. I think I'll have to do multiple tests using… |
Hey @rom1v, I have been reading this issue and noticed your question about the deployment strategy.
I tried a different approach: spawning a server for input only, listening on a local socket used by the main process, which is connected to websockets. With this implementation the user has to spawn the process via adb on each boot, but once that is done, the process keeps running forever, accepting input from the network service. I added this as a second input layer to be used alongside accessibility, since I can then handle custom keyboard layouts without hassle, and I can still control the device with limited features if the user doesn't start the daemon. It goes something like the sketch below. This is a test on a Sony Xperia Z5 without adb or root, connected to an internet server. Given that this process runs as uid 2000, it can perform lots of tasks beyond sending input; for example, I added a minimal terminal, which was quite fast to implement. I'm still devising a strategy to evaluate network quality in order to serve different stream formats; the one shown in the examples is for 3G: heavily downscaled (/4), 1M bitrate, 10 fps. The problem is that the stream only carries data when there are visible changes or I-frames, so measuring traffic over time intervals won't do the trick; that method assumes a constant upload/download throughput, which isn't our case. I'll try to find a better approach... |
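A hedged sketch of what such an input daemon could look like, assuming it is started once via adb (e.g. with `app_process`, so it keeps the shell uid 2000); the socket name and the line-based command format are made up for illustration:

```java
import android.net.LocalServerSocket;
import android.net.LocalSocket;
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Hypothetical daemon: listens on an abstract local socket; the unprivileged
// main app connects and forwards commands received from the websocket.
public final class InputDaemon {
    public static void main(String[] args) throws Exception {
        LocalServerSocket server = new LocalServerSocket("input_daemon");
        while (true) {
            LocalSocket client = server.accept();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                // e.g. "tap 100 200" -> runs "input tap 100 200" as shell (uid 2000)
                String[] cmd = ("input " + line).split(" ");
                new ProcessBuilder(cmd).start().waitFor();
            }
            client.close();
        }
    }
}
```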
This looks amazing. Have there been any updates on this? |
Hello. While I was working on it, scrcpy was moving forward, and now there are conflicts. |
@drauggres I like such experiments, they are interesting, but I don't see how they could be integrated upstream. They provide alternatives to the way scrcpy works. How would you see it? |
@drauggres I have tried the code you linked and it runs successfully from the command line, but no page opens by default after it starts. |
The HTTP server is listening on port 8000, open http://127.0.0.1:8000 if you are running it on the same machine. |
@pabloko |
(The window with the black titlebar is a BrowserWindow.) I'm developing an ADB Electron app for regular users (they may know little about programming), so I decided to render it in the browser. I used part of @drauggres's code. |
@pabloko Thanks for sharing; it sounds good. Would it be possible to publish it on GitHub or somewhere? |
I was thinking about implementing a web client for the original scrcpy-server with WebUSB instead of a WebSocket server. https://github.com/yume-chan/ya-webadb/tree/master/packages/demo (kudos to @yume-chan) |
Hello, I first want to say thanks to the dev of this project, from which I learned interesting stuff.
As I'm developing a similar project, I focused on the communication of data/video and on rendering. Since my current platform is web-driven, I wanted to be able to control the device from a webapp, so I set up a simple server (far simpler than scrcpy) that just uses MicroHTTPD to serve some HTTP endpoints:
/stream -> launches, on a thread, the process "screenrecord --bit-rate 1M --size 1280x720 --output-format=h264 -"; the output stream of this process backs the HTTP response, serving a chunked stream of h264 (just like scrcpy does, but with less hassle). See the sketch after this list.
/control?tapx=x&tapy=y -> launches "input tap x y", simple...
/control?text=txt -> launches "input text txt", simple...
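A minimal sketch of the /stream endpoint described above. The comment says "MicroHTTPD"; this sketch assumes the NanoHTTPD library's API, so names may differ from the actual code, and it assumes the process has the privileges to run `screenrecord` (the linked pastebin has the real implementation):

```java
import fi.iki.elonen.NanoHTTPD;
import java.io.IOException;

public class StreamServer extends NanoHTTPD {
    public StreamServer() { super(8088); }
    // Start with: new StreamServer().start(NanoHTTPD.SOCKET_READ_TIMEOUT, false);

    @Override
    public Response serve(IHTTPSession session) {
        if ("/stream".equals(session.getUri())) {
            try {
                Process p = Runtime.getRuntime().exec(new String[] {
                        "screenrecord", "--bit-rate", "1M", "--size", "1280x720",
                        "--output-format=h264", "-" });
                // The process's stdout backs the chunked HTTP response.
                return newChunkedResponse(Response.Status.OK,
                        "video/h264", p.getInputStream());
            } catch (IOException e) {
                return newFixedLengthResponse(Response.Status.INTERNAL_ERROR,
                        MIME_PLAINTEXT, e.getMessage());
            }
        }
        return newFixedLengthResponse(Response.Status.NOT_FOUND,
                MIME_PLAINTEXT, "not found");
    }
}
```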
One can just use ffmpeg/ffplay to access this stream ("ffplay -f h264 -i http://127.0.0.1:8088/stream"), but I wanted to draw on the web, remember, so I'm using Broadway.js, which does h264 decoding in the client browser from any given raw data. A small aside here: for this specific case, trying to consume the HTTP feed with XHR would require writing a custom input source for XHR, but we have the new Fetch API lying around, which we can use to continuously pump new data from the open HTTP request, hand it to the Broadway player, and render it to a canvas.
https://imgur.com/a/0mYstz5
The input consists of an always-focused input element that processes all key events and makes XHR calls to the control endpoint; clicks are also sent, transformed from virtual-screen to real coordinates.
https://i.imgur.com/IPOXI2X.gifv
The code:
java https://pastebin.com/RQQUvaE9
html https://pastebin.com/mscEYvA8
This sample demonstrates that web rendering is possible without even needing a raw socket/websocket, and Broadway does the job while outperforming ffmpeg in all my tests.
One could, for example, render directly in an Electron app, while still having an easy API to work with other commands/shells/tools in a friendlier graphical mode.
Disclaimer: obviously this code is POC grade and not usable beyond testing; I just wanted to share a different point of view on the rendering process and enable a better UI.
Again many thanks!