Multiplatform OSM API client #5686
Conversation
# Conflicts:
#	app/src/main/java/de/westnordost/streetcomplete/data/osm/edits/upload/changesets/OpenChangesetsManager.kt
#	app/src/main/java/de/westnordost/streetcomplete/data/osmnotes/NotesApi.kt
#	app/src/main/java/de/westnordost/streetcomplete/data/osmnotes/NotesApiImpl.kt
#	app/src/main/java/de/westnordost/streetcomplete/data/osmnotes/edits/NoteEditsUploader.kt
#	app/src/main/java/de/westnordost/streetcomplete/data/osmtracks/TracksApi.kt
#	app/src/main/java/de/westnordost/streetcomplete/data/osmtracks/TracksApiImpl.kt
#	app/src/main/java/de/westnordost/streetcomplete/data/user/UserDataController.kt
#	app/src/main/java/de/westnordost/streetcomplete/data/user/UserUpdater.kt
#	app/src/test/java/de/westnordost/streetcomplete/data/osmnotes/edits/NoteEditsUploaderTest.kt
I was informed that Ktor is working as intended and that streaming is possible via another interface documented here. For that to work properly (multiplatform), I'll first have to wait until both xmlutil and Ktor are based on the same IO library (kotlinx-io).
# Conflicts:
#	app/src/main/java/de/westnordost/streetcomplete/data/osm/edits/upload/ElementEditUploader.kt
#	app/src/main/java/de/westnordost/streetcomplete/data/user/UserDataController.kt
#	app/src/main/java/de/westnordost/streetcomplete/data/user/UserLoginController.kt
I just did a comparison with my old S4 Mini, instead of the A3 I had with me the past few days.
Thanks for testing! In debug mode?
Yes, debug mode.
Beta of ktor is now based on kotlinx-io. However, it does not expose any
I had a look at the parsing speed with the optimized library (11,813,680 bytes read). When parsing the document from a string, it parses in 351 ms. When reading from an InputStream, it parses in 2000 ms (2829 ms on the first parse). The manual parsing took about 200 ms from a string. Obviously the manual parsing does less (it doesn't check for unexpected items). By the way, it is better to loop through the attribute indices rather than look attributes up by name: the latter scans the attribute list again for every lookup.
Oh, interesting! How cool that you checked out this project to test it! I am assuming that you did the test to compare with #5686 (comment) - from 1.6s to 0.35s + 0.2s = 0.55s (if I read this right) is a pretty huge improvement.
You mean e.g. like this? (Pseudo-code)

```diff
- "member" -> members.add(RelationMember(
-     type = ElementType.valueOf(attribute("type").uppercase()),
-     ref = attribute("ref").toLong(),
-     role = attribute("role")
- ))
+ "member" -> {
+     var type: ElementType? = null
+     var ref: Long? = null
+     var role: String? = null
+     for (attribute in attributes) {
+         when (attribute.name) {
+             "type" -> type = ElementType.valueOf(attribute.value.uppercase())
+             "ref" -> ref = attribute.value.toLong()
+             "role" -> role = attribute.value
+         }
+     }
+     members.add(RelationMember(type!!, ref!!, role!!))
+ }
```
I checked out your project, but wrote my own little test class. Reading from an InputStream appears to be network-constrained. And to your question: yes, that pseudocode should be faster (although it is somewhat lax: it checks neither namespaces nor duplicate attributes).
...because there are fewer iterations. I rewrote the parser so that I iterate over the attributes myself and measured the performance. There is almost no difference, though; it is actually about 5% slower. My guess why it is marginally slower: for every element parsed, I have to initialize a few new local variables, which are allocated on the heap. That is not free, and in sum it is slightly more costly than the extra iterations (see the pseudo-code above).
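For illustration, the two attribute-lookup strategies being compared can be sketched like this. This is a toy model, not StreetComplete code: `Attr`, `byName`, and `parseMember` are made-up names, and a real pull parser exposes attributes through its own API (attribute count plus access by index).

```kotlin
// Toy model of an XML element's attribute list.
data class Attr(val name: String, val value: String)

// Strategy 1: look attributes up by name. Each call scans the list from
// the start, so k lookups over n attributes cost O(k * n) comparisons.
fun byName(attrs: List<Attr>, name: String): String? =
    attrs.firstOrNull { it.name == name }?.value

// Strategy 2: iterate the attribute list once and dispatch on the name,
// so every attribute is visited exactly once.
fun parseMember(attrs: List<Attr>): Triple<String, Long, String> {
    var type: String? = null
    var ref: Long? = null
    var role: String? = null
    for (a in attrs) {
        when (a.name) {
            "type" -> type = a.value
            "ref" -> ref = a.value.toLong()
            "role" -> role = a.value
        }
    }
    // The trade-off mentioned above: a few extra local variables per
    // element, in exchange for a single pass over the attributes.
    return Triple(type!!, ref!!, role!!)
}
```

As the measurement above shows, the asymptotic win can be eaten up in practice by the per-element bookkeeping when the attribute count is tiny (three attributes per member).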
I intend to merge this now into v59.0.

I have been waiting on this because I am assuming that download+parsing speed could be improved by letting the XML parsing library (xmlutil) parse the stream of bytes as it arrives over the network, i.e. do the parsing in parallel with the IO rather than after it. Currently, in this branch, first all bytes are streamed into a string, and then that string is parsed. This is potentially slow for large data.

There is no Kotlin Multiplatform replacement for Java's

For this to work, both xmlutil needs to be able to consume

However, long story short, Ktor uses kotlinx-io only under the hood, i.e. it does not expose a

The caveat of using this code now is, as I previously mentioned, that there is a performance loss for downloads of large areas. Since the download process also includes persisting to the database and creating the quests, it comes down to a performance loss of up to 25%.
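For what it's worth, the difference between the two pipelines described here can be sketched as follows. This is an illustration only: the network is simulated by a sequence of byte chunks, none of the function names correspond to actual Ktor or xmlutil APIs, and the "parsing" is just counting tag openers.

```kotlin
// Pipeline in this branch: wait for all IO, buffer everything, then parse.
// The data is materialized twice before parsing even starts.
fun parseAfterFullRead(chunks: Sequence<ByteArray>): Int {
    val whole = chunks.toList()                     // wait for the full download
        .fold(ByteArray(0)) { acc, c -> acc + c }   // copy 1: one big byte array
    val text = whole.decodeToString()               // copy 2: decode to a string
    return text.count { it == '<' }                 // stand-in for XML parsing
}

// Desired pipeline: hand each chunk to the parser as it arrives, so parsing
// overlaps with IO and no full-document buffer is ever built.
// (A real implementation must handle multi-byte characters and tags that
// span chunk boundaries; counting the ASCII byte '<' sidesteps that here.)
fun parseWhileStreaming(chunks: Sequence<ByteArray>): Int {
    var tags = 0
    for (chunk in chunks) {
        tags += chunk.count { it == '<'.code.toByte() }
    }
    return tags
}
```

Both return the same result; the difference is when the work happens (after all IO vs. interleaved with it) and how many full copies of the body exist at once.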
# Conflicts:
#	app/build.gradle.kts
Resolves #5410
Also took the opportunity to make the implementations and naming consistent for all clients that use the multiplatform Ktor HTTP client (which is now all of them).
I did a bit of performance testing. Unfortunately, the results are very unsatisfying.
Downloading the city center of Hamburg (~35,000 elements)
(done on a Samsung S10e, release APK)
Edit: The download+parse time for ktor+xmlutil has since been reduced to about 5.2 s for this particular test.
The previously used Java library osmapi...
With this PR, we use
It is clear that the second approach must be slower, especially for large data, because we don't stream anything (not really supported by Ktor and xmlutil just yet) but always copy the whole data. On the other hand, getting the response body as a string (rather than as a stream) and copying all the data at the end does not seem to be the cause of why it is so slow.
I am (even more) surprised how slow Ktor-client (and/or CIO) is. I see on the log that the garbage collector is quite busy. What is it doing?
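Regarding the busy garbage collector: one plausible (hypothetical, not profiled) contributor is how the body gets buffered. If a buffer is grown by repeated concatenation, the total number of bytes copied is quadratic in the number of chunks; a pre-sized buffer copies each byte once. A rough cost model, not a benchmark of Ktor:

```kotlin
// Bytes copied when a body of `chunks` chunks of `chunkSize` bytes is
// accumulated by concatenation: each step reallocates and rewrites the
// whole prefix (as `acc + chunk` on arrays or strings does).
fun bytesCopiedByConcat(chunks: Int, chunkSize: Int): Long {
    var copied = 0L
    var len = 0L
    repeat(chunks) {
        len += chunkSize      // buffer grows by one chunk
        copied += len         // the whole new buffer is written out again
    }
    return copied             // chunkSize * (1 + 2 + ... + chunks), i.e. O(n^2)
}

// Bytes copied when writing into a buffer sized up front.
fun bytesCopiedPreSized(chunks: Int, chunkSize: Int): Long =
    chunks.toLong() * chunkSize   // each byte is written exactly once
```

Under this (naive-accumulation) assumption, a ~12 MB body arriving in 8 KiB chunks would cause on the order of gigabytes of copying, and every intermediate buffer becomes garbage, which would keep the collector busy. Whether Ktor/CIO actually buffers this way would need profiling.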