Multi cast upstream response for Chucker consumption. #267
Changes from 16 commits
New file (@@ -0,0 +1,16 @@):

```kotlin
package com.chuckerteam.chucker.internal.support

import android.content.Context
import java.io.File
import java.util.concurrent.atomic.AtomicLong

internal class AndroidCacheFileFactory(
    context: Context
) : FileFactory {
    private val fileDir = context.cacheDir
    private val uniqueIdGenerator = AtomicLong()

    override fun create(): File {
        return File(fileDir, "chucker-${uniqueIdGenerator.getAndIncrement()}")
    }
}
```
New file (@@ -0,0 +1,7 @@):

```kotlin
package com.chuckerteam.chucker.internal.support

import java.io.File

internal interface FileFactory {
    fun create(): File
}
```
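To illustrate the factory pattern above without the Android SDK, here is a minimal sketch. `TempDirFileFactory` is a hypothetical name, and rooting the files in a JVM temp directory (instead of `Context.cacheDir`) is an assumption made purely so the snippet runs on a plain JVM; the `AtomicLong`-based naming mirrors the PR's code.

```kotlin
import java.io.File
import java.nio.file.Files
import java.util.concurrent.atomic.AtomicLong

// Illustrative sketch only (not Chucker's code): same pattern as
// AndroidCacheFileFactory, but rooted in a JVM temp directory rather
// than Android's Context.cacheDir. AtomicLong.getAndIncrement() keeps
// file names unique even when create() is called from several threads.
class TempDirFileFactory(private val dir: File) {
    private val uniqueIdGenerator = AtomicLong()

    fun create(): File = File(dir, "chucker-${uniqueIdGenerator.getAndIncrement()}")
}

fun main() {
    val factory = TempDirFileFactory(Files.createTempDirectory("demo").toFile())
    println(factory.create().name) // chucker-0
    println(factory.create().name) // chucker-1
}
```

The atomic counter means two concurrent `create()` calls can never hand back the same `File`, which matters because each in-flight response gets its own side-channel file.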
New file (@@ -0,0 +1,79 @@):

```kotlin
package com.chuckerteam.chucker.internal.support

import java.io.File
import java.io.IOException
import okio.Buffer
import okio.Okio
import okio.Source
import okio.Timeout

internal class TeeSource(
    private val upstream: Source,
    private val sideChannel: File,
    private val callback: Callback,
    private val readBytesLimit: Long = Long.MAX_VALUE
) : Source {
    private val sideStream = Okio.buffer(Okio.sink(sideChannel))
    private var totalBytesRead = 0L
    private var reachedLimit = false
    private var upstreamFailed = false

    override fun read(sink: Buffer, byteCount: Long): Long {
        val bytesRead = try {
            upstream.read(sink, byteCount)
        } catch (e: IOException) {
            callSideChannelFailure(e)
            throw e
        }

        if (bytesRead == -1L) {
            sideStream.close()
            return -1L
        }

        totalBytesRead += bytesRead
        if (!reachedLimit && (totalBytesRead <= readBytesLimit)) {
            val offset = sink.size() - bytesRead
            sink.copyTo(sideStream.buffer(), offset, bytesRead)
            sideStream.emitCompleteSegments()
            return bytesRead
        }
        if (!reachedLimit) {
            reachedLimit = true
            callSideChannelFailure(IOException("Capacity of $readBytesLimit bytes exceeded"))
        }

        return bytesRead
    }

    override fun close() {
        sideStream.close()
        upstream.close()
        if (!upstreamFailed) {
            callback.onSuccess(sideChannel)
        }
    }

    override fun timeout(): Timeout = upstream.timeout()

    private fun callSideChannelFailure(exception: IOException) {
        if (!upstreamFailed) {
            upstreamFailed = true
            callback.onFailure(exception, sideChannel)
        }
    }

    interface Callback {
        /**
         * Called when the upstream was successfully copied to the [file].
         */
        fun onSuccess(file: File)

        /**
         * Called when there was an issue while copying bytes to the [file].
         *
         * It might occur due to an exception thrown while reading bytes or due to exceeding the capacity limit.
         */
        fun onFailure(exception: IOException, file: File)
    }
}
```

Review comment (on the `TeeSource` class declaration): Please leave some comments around, as otherwise this code will be harder to maintain.

Review comment (on lines +77 to +89): I decided to go with a callback instead of magic headers. It gives type safety and is more extensible. In the future it is possible to add …
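The core "tee" idea above — every byte handed to the downstream consumer is also copied to a side file — can be sketched without the Okio dependency. `TeeInputStream` below is a hypothetical, simplified analogue using `java.io` streams (no capacity limit, no failure callback), included only to show the multicast mechanism in runnable form:

```kotlin
import java.io.File
import java.io.FilterInputStream
import java.io.InputStream

// Illustrative sketch only (not Chucker's TeeSource): every byte the
// caller reads from the upstream is also appended to a side file, so
// both the downstream consumer and the file observe the full body.
class TeeInputStream(
    upstream: InputStream,
    sideChannel: File
) : FilterInputStream(upstream) {
    private val sideStream = sideChannel.outputStream().buffered()

    override fun read(): Int {
        val byte = super.read()
        if (byte != -1) sideStream.write(byte) else sideStream.flush()
        return byte
    }

    override fun read(b: ByteArray, off: Int, len: Int): Int {
        val read = super.read(b, off, len)
        if (read != -1) sideStream.write(b, off, read) else sideStream.flush()
        return read
    }

    override fun close() {
        sideStream.close() // flushes any buffered side-channel bytes
        super.close()
    }
}

fun main() {
    val side = File.createTempFile("chucker-demo", null)
    val seenByConsumer = TeeInputStream("hello body".byteInputStream(), side)
        .use { it.readBytes() }
    println(seenByConsumer.toString(Charsets.UTF_8)) // hello body
    println(side.readText())                         // hello body
}
```

As in the real `TeeSource`, the consumer drives all reads; the side copy is purely a passive observer, which is what lets the response body stream through once while Chucker still captures it.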
Review comment: I insist on returning the original `Response` object further down the chain without any builders, cloning, etc. We have had enough side effects and issues caused by handing out processed or otherwise changed objects. And while I see that here we should get the same object, I still don't like that it is recreated. In some issues we already mentioned that Chucker should be as transparent as possible, and that is only possible if the copy is made solely for Chucker's own processing, without anything like that.
Reply: I don't see how it would be possible to stream the original bytes to two sinks at the same time without replacing the response. This is the mechanism OkHttp uses under the hood for caching (just a simpler version), and it is also the approach recommended by Square (square/okio#186). Do you have any other approach in mind that would solve the problems addressed by this?
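One subtlety of the `readBytesLimit` design discussed above is that exceeding the cap only abandons the *side* copy: the downstream consumer still receives every byte. Here is a hedged, dependency-free sketch of that rule; `teeWithLimit` and its fixed 4-byte chunking are hypothetical simplifications, not Chucker's API:

```kotlin
import java.io.ByteArrayOutputStream
import java.io.IOException

// Illustrative sketch only: the capacity rule from TeeSource.read,
// reduced to in-memory byte arrays. The consumer always gets the whole
// body; the side copy fails (once) as soon as the limit is exceeded.
fun teeWithLimit(body: ByteArray, limit: Long): Pair<ByteArray, Result<ByteArray>> {
    val side = ByteArrayOutputStream()
    var total = 0L
    var failure: IOException? = null
    for (chunk in body.asList().chunked(4)) { // simulate successive reads
        total += chunk.size
        if (failure == null) {
            if (total <= limit) {
                side.write(chunk.toByteArray())
            } else {
                failure = IOException("Capacity of $limit bytes exceeded")
            }
        }
    }
    val sideResult = failure?.let { Result.failure<ByteArray>(it) }
        ?: Result.success(side.toByteArray())
    return body to sideResult
}

fun main() {
    val (consumer, side) = teeWithLimit("0123456789".toByteArray(), limit = 6)
    println(consumer.size)  // 10 — the downstream reader still sees everything
    println(side.isFailure) // true — the side copy hit the 6-byte cap
}
```

This asymmetry is the point of the design: Chucker's capture can give up without ever disturbing the application's own consumption of the response.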