aaudio crash, assert in releaseBuffer #535

Closed
taemincho opened this issue Jun 4, 2019 · 36 comments
Labels: bug, P1 high priority

Comments
@taemincho

Hi, we've published a beta version of our app that uses Oboe for the first time, and we've received reports of many crashes that look similar to the one below:

#00 pc 000000000001a48c /system/lib/libc.so (abort+63)
#1 pc 00000000000065c3 /system/lib/liblog.so (__android_log_assert+154)
#2 pc 00000000000340f9 /system/lib/libaudioclient.so (android::ClientProxy::releaseBuffer(android::Proxy::Buffer*)+120)
#3 pc 0000000000024c3b /system/lib/libaudioclient.so (android::AudioRecord::releaseBuffer(android::AudioRecord::Buffer const*)+66)
#4 pc 0000000000024d3d /system/lib/libaudioclient.so (android::AudioRecord::read(void*, unsigned int, bool)+212)
#5 pc 000000000001371d /system/lib/libaaudio.so (aaudio::AudioStreamRecord::read(void*, int, long long)+92)
oboe::AudioStreamAAudio::read(void*, int, long long) at /android/oboe/src/aaudio/AudioStreamAAudio.cpp:372:26

Any idea what could cause these crashes?

atneya added the P1 high priority label Jun 4, 2019
dturner assigned dturner and atneya and unassigned dturner Jun 4, 2019
@atneya (Collaborator) commented Jun 4, 2019

Are there any other details when/how the crash is caused?

Are you using callback methods or blocking reads/writes?

Any specific device versions/API levels?

A preliminary guess is that the AudioStream is being closed while a blocking read is pending, causing a crash when the read attempts to access the closed stream. However, any additional details would be extremely helpful.

@taemincho (Author)

Thank you. I found that this could happen when our app goes into the background.

@dturner (Collaborator) commented Jun 7, 2019

Hey @taemincho, any chance you can share a few more details of the problem so that we can advise other developers on how to avoid the issue? Closing streams is a particularly tricky topic.

@philburk (Collaborator)

You are using a blocking read.
oboe::AudioStreamAAudio::read(void*, int, long long)

Perhaps the stream is being closed by another thread while the read is still blocked.
It sounds similar to this:
#359

Do you have this fix?
#364

atneya added the bug label Jun 11, 2019
@taemincho (Author)

Sorry for the late reply, I was on vacation.
We're using the 1.2-stable version, and I'm sure it includes the #364 fix.

We created an input stream and an output stream, and used a blocking read from the input stream inside the output stream's callback.
We closed the input and output streams in various situations, such as when the output stream's onErrorAfterClose is called (AAudio only), when headphones are plugged in or out, when the app loses audio focus, etc.
So yes, it was possible for the input stream to be closed while oboe::AudioStreamAAudio::read(void*, int, long long) was still blocked.
Now I've added a lock to prevent the input stream from being closed while the read is blocked.
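For illustration, here is a minimal sketch of that kind of locking approach. The class and member names (Engine, mInputStream, mStreamLock) are placeholders and not code from this project; it simply serializes the blocking read against close().

#include <memory>
#include <mutex>
#include <oboe/Oboe.h>

class Engine {
public:
    // Blocking read, called from inside the output stream's data callback.
    int32_t readInput(float *buffer, int32_t numFrames) {
        std::lock_guard<std::mutex> lock(mStreamLock);
        if (!mInputStream) return 0;                           // already closed
        constexpr int64_t kTimeoutNanos = 100 * 1000 * 1000;   // 100 ms
        auto result = mInputStream->read(buffer, numFrames, kTimeoutNanos);
        return result ? result.value() : 0;
    }

    // Called on headset plug/unplug, audio focus loss, onErrorAfterClose, etc.
    void closeInput() {
        std::lock_guard<std::mutex> lock(mStreamLock);         // waits for a pending read
        if (mInputStream) {
            mInputStream->close();
            mInputStream.reset();
        }
    }

private:
    std::mutex mStreamLock;
    std::shared_ptr<oboe::AudioStream> mInputStream;
};

Note that taking a lock inside the audio callback risks priority inversion, so this narrows the race window rather than eliminating every problem.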

However, we still get many crashes that look like the one below:

#00 pc 0000000000049c64 /system/lib/libc.so (tgkill+12)
#1 pc 0000000000047403 /system/lib/libc.so (pthread_kill+34)
#2 pc 000000000001d555 /system/lib/libc.so (raise+10)
#3 pc 00000000000190a1 /system/lib/libc.so (__libc_android_abort+34)
#4 pc 0000000000017104 /system/lib/libc.so (abort+4)
#5 pc 000000000000c3d9 /system/lib/libcutils.so (__android_log_assert+112)
#6 pc 000000000006f8f9 /system/lib/libmedia.so (_ZN7android11ClientProxy13releaseBufferEPNS_5Proxy6BufferE+88)
#7 pc 000000000006ce4b /system/lib/libmedia.so (_ZN7android10AudioTrack13releaseBufferEPKNS0_6BufferE+82)
#8 pc 000000000006d6e7 /system/lib/libmedia.so (_ZN7android10AudioTrack18processAudioBufferEv+1654)
#9 pc 000000000006ee9b /system/lib/libmedia.so (_ZN7android10AudioTrack16AudioTrackThread10threadLoopEv+130)
#10 pc 000000000000e4b1 /system/lib/libutils.so (_ZN7android6Thread11_threadLoopEPv+264)
#11 pc 0000000000046ed3 /system/lib/libc.so (_ZL15__pthread_startPv+22)
#12 pc 0000000000019aed /system/lib/libc.so (__start_thread+6)

Any ideas?

FYI: I found the same error reported on Stack Overflow:
https://stackoverflow.com/questions/46403487/android-weird-audiotrack-crash

@milos-pesic-sc

I can also report that we are experiencing a relatively frequent crash that matches the Stack Overflow post @taemincho linked in the comment above:

backtrace:
  #00  pc 000000000001ceae  /system/lib/libc.so (abort+58)
  #01  pc 0000000000006d0d  /system/lib/liblog.so (__android_log_assert+156)
  #02  pc 000000000003bd4d  /system/lib/libaudioclient.so (android::ClientProxy::releaseBuffer(android::Proxy::Buffer*)+108)
  #03  pc 0000000000036da3  /system/lib/libaudioclient.so (android::AudioTrack::releaseBuffer(android::AudioTrack::Buffer const*)+78)
  #04  pc 0000000000037765  /system/lib/libaudioclient.so (android::AudioTrack::processAudioBuffer()+1908)
  #05  pc 0000000000039501  /system/lib/libaudioclient.so (android::AudioTrack::AudioTrackThread::threadLoop()+128)
  #06  pc 000000000000c107  /system/lib/libutils.so (android::Thread::_threadLoop(void*)+290)
  #07  pc 0000000000064829  /system/lib/libc.so (__pthread_start(void*)+140)
  #08  pc 000000000001e375  /system/lib/libc.so (__start_thread+24)

We use callbacks and not blocking reads, though.
Based on some Crashlytics logs for this issue, as far as I can see the crash appears in two situations: 1) at the beginning of a track, when we start the stream; 2) after an ErrorDisconnected error (where we recreate and restart the stream). The last log lines I can see in this case are:

D/CrashlyticsCore [2019-06-24 13:51:56.944349 (tid:3445725552)] - onErrorBeforeClose - Error on AudioStream : ErrorDisconnected
D/CrashlyticsCore [2019-06-24 13:51:56.946371 (tid:3445725552)] - onErrorBeforeClose - AudioStream device id: 317
D/CrashlyticsCore [2019-06-24 13:51:56.947362 (tid:3445725552)] - onErrorAfterClose - Error on AudioStream ErrorDisconnected
D/CrashlyticsCore [2019-06-24 13:51:56.947500 (tid:3445725552)] - restartStream - Restarting stream
D/CrashlyticsCore [2019-06-24 13:51:56.955914 (tid:3445725552)] - internalCreateStream - AudioStream opened. Device id: 1
D/CrashlyticsCore [2019-06-24 13:51:56.956079 (tid:3445725552)] - onErrorAfterClose - Error after close callback done
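For context, the restart-on-disconnect handling referenced in those log lines usually looks something like the sketch below; restartStream() stands in for the app's own open/start logic and is not code from this thread.

#include <oboe/Oboe.h>

class MyCallback : public oboe::AudioStreamCallback {
public:
    oboe::DataCallbackResult onAudioReady(oboe::AudioStream *stream,
                                          void *audioData, int32_t numFrames) override {
        // ... render numFrames of audio into audioData ...
        return oboe::DataCallbackResult::Continue;
    }

    void onErrorAfterClose(oboe::AudioStream *stream, oboe::Result error) override {
        // The old stream has already been closed at this point; on a
        // disconnect, build, open and start a brand new stream.
        if (error == oboe::Result::ErrorDisconnected) {
            restartStream();   // hypothetical helper supplied by the app
        }
    }

private:
    void restartStream();      // defined elsewhere by the app
};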

@atneya (Collaborator) commented Jun 24, 2019

This is an assert failure in libaudioclient. Do either of you have a full logcat of the crash (including libaudioclient error messages)? That would make debugging this particular issue much easier, since it seems difficult to reproduce.

@milos-pesic-sc

@atneya Unfortunately, we don't. We don't collect full logcat from the field, and we couldn't reproduce the issue in-house.

@milos-pesic-sc commented Jun 25, 2019

But maybe the device list from the developer console for this crash could help:

Device         Count   Percent
HWVOG          271     9.6%
HWINE          158     5.6%
starqltesq     134     4.8%
HWMAR          115     4.1%
HWJSN-H        110     3.9%
HWEML          109     3.9%
crownqltesq    109     3.9%
HWANE           99     3.5%
star2qltesq     96     3.4%
HWELE           94     3.3%
dreamqltesq     83     2.9%
HWJKM-H         74     2.6%
beyond2q        65     2.3%
dream2qltesq    60     2.1%
HWPOT-H         56     2.0%
greatqlte       52     1.8%
HWLYA           52     1.8%
j4corelte       48     1.7%
HWVTR           46     1.6%
HWHRY-H         42     1.5%
Others         944    33.5%

@atneya (Collaborator) commented Jun 27, 2019

@milos-pesic-sc
The device list is actually quite helpful -- all of the devices listed are Samsung or Huawei, which makes the issue easier to pinpoint and perhaps explains the difficulty in reproducing it.

Are you able to get a full list of devices (expand Others) to confirm?

It is unlikely that this is an Oboe-specific issue (especially considering the linked Stack Overflow post refers to using OpenSL ES); however, any additional information makes debugging the underlying issue easier. Perhaps Oboe is exacerbating or exposing an underlying issue, and we can implement a workaround.

Did you encounter this issue when using OpenSLES without Oboe in the past?

@taemincho
The original stack trace you referred to was on the input side, using blocking reads. The second stack trace uses callbacks on the output side.

Were the original crashes on the input side resolved by using a lock? Were the output crashes present prior as well?

Do you have any additional device details/logs/information regarding when the crash occurs?

philburk changed the title from "aaudio crash" to "aaudio crash, assert in releaseBuffer" Jun 27, 2019
@milos-pesic-sc

@atneya
Unfortunately, I can't see a further breakdown of the devices under Others.
I just double-checked, and as far as I can see this issue doesn't exist with our OpenSL ES implementation.
I will try to find some additional information that could be useful for debugging, though there is nothing obvious.

@atneya (Collaborator) commented Jun 28, 2019

Internal reference: b/136268149

@milos-pesic-sc

@atneya Do we have more info on this one? Any updates?

@atneya (Collaborator) commented Aug 6, 2019

No updates on the internal bug as of yet; assigning to @philburk to track it internally.

atneya removed their assignment Aug 6, 2019
@philburk (Collaborator)

This is being tracked internally at b/136268149 

@philburk (Collaborator) commented Oct 3, 2019

This bug seems to be related to AudioRecord read() and AudioTrack write() not being thread safe.
An app should never read or write a stream from more than one thread.

And if a stream is using a callback then the app should not also call a blocking read/write for that stream from any thread.

@taemincho - Please check your app to see if it might be doing something that is not thread safe.
If your app is OK then please assign this back to me.

@gkasten (Contributor) commented Oct 10, 2019

Do you have the full log message, for example a line starting with "Abort message: " ?

@gkasten (Contributor) commented Oct 10, 2019

I'm also looking for a reproducible test case, in other words a sequence of steps that are likely to show the bug. Thanks!

@Nordhal commented Nov 28, 2019

Hi,
We have the same issue in our apps using both Oboe/OpenSL and Oboe/AAudio.
The assert message is as below:

A/DEBUG: Abort message: 'releaseBuffer: mUnreleased out of range, !(stepCount:570 <= mUnreleased:0 <= mFrameCount:1950), BufferSizeInFrames:1950'

I am able to reproduce it with the MegaDrone sample, only in OpenSL ES mode, but I think it is a good start.
This issue occurs when a device stream is changing and therefore no longer has the same buffer size at all.
In my case, I use a Samsung S9 + a bluetooth speaker.

Easy reproduction steps in MegaDrone:

  • For testing purposes, I removed the toggling when TOUCH_UP occurs
  • Set the Audio API to OpenSLES
  • Start the sound
  • Switch to bluetooth speaker
  • When connected, switch off the speaker

After doing this a few times you will get a crash.
What's pretty weird is that I never get an onErrorAfterClose callback, whereas I do in AAudio mode.

This crash is the number one crash in our app Remixlive since we updated our engine to use JUCE + Oboe.
It has made the app very unstable, unfortunately pushing us above the 1.4% crash rate allowed by the Play Store.
I hope this information helps in understanding what's going on. I've been pretty impressed by Oboe 1.3 so far, and I really want to make the switch for all our apps.

@philburk (Collaborator)

Thank you so much for describing that clear repro case. That's the most important step in fixing a bug. It is Thanksgiving weekend here in the US but I will try to verify the repro steps and then we will tackle it next week.

By the way, have you tried putting a 200 msec sleep in the onErrorBeforeClose callback?
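For anyone trying that experiment, here is a rough sketch of what the suggested delay could look like; the 200 msec figure comes from the question above, everything else is an assumption.

#include <chrono>
#include <thread>
#include <oboe/Oboe.h>

class MyCallback : public oboe::AudioStreamCallback {
    void onErrorBeforeClose(oboe::AudioStream *stream, oboe::Result error) override {
        // Give any buffer still in flight on the old route time to drain
        // before the stream is closed, as suggested above.
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
    // onAudioReady() and onErrorAfterClose() omitted for brevity.
};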

@Nordhal commented Nov 28, 2019

Hi,
First of all, Happy Thanksgiving!

I just reviewed the AudioStream implementation for OpenSL ES, and onErrorBeforeClose / onErrorAfterClose are never called.
They are only used as part of the AAudio implementation, unfortunately. I will investigate further over the next few days and keep you informed if I find anything useful.

Have a great weekend

@philburk (Collaborator) commented Nov 28, 2019

Yay! I made the changes you suggested to MegaDrone.
Then I ran it on a Pixel 3a running Q and connected to GiD6B bluetooth headphones.
I started the sound and then turned the headphones off and on every ten seconds.
Within 3 cycles I got the releaseBuffer assert!

@philburk (Collaborator)

I just reviewed the AudioStream implementation for OpenSL ES, and onErrorBeforeClose / onErrorAfterClose are never called.

Oh right. OpenSL ES does not disconnect the stream when you change devices. It switches automatically. But this can cause problems with buffers getting bigger or smaller and causing glitches, or crashes like this. That is why AAudio tries to disconnect the stream when the device changes.

@Nordhal commented Dec 10, 2019

Hi,
Did you find any clues concerning this crash? Is there a way to stop the AudioTrack thread while a buffer size change is occurring?

philburk assigned gkasten and unassigned philburk Dec 30, 2019
@philburk (Collaborator) commented Jan 28, 2020

The assert in releaseBuffer() seems to be caused by a race condition.
The sequence of events is:

  1. audio callback starts by obtaining a buffer from the device
  2. app is called to render audio
  3. device routing change occurs during the callback
  4. callback ends by releasing the buffer back to the new device

But the new device sees that the buffer came from the old device and rejects it. That causes the assert and the crash.

If the callback is very quick, then the window is small and the crash may not occur.
If the callback takes a long time because the computation is complex, then the window is wide and the probability of hitting this bug is very high.

@philburk (Collaborator)

We are looking for a workaround.

@taemincho - you said the probability went down when you switched to your own OpenSL ES implementation. What is different between your code and ours?

  1. Are you using SLAndroidBufferQueueItf or SLAndroidSimpleBufferQueueItf? We use SLAndroidSimpleBufferQueueItf in AudioStreamOpenSLES::registerBufferQueueCallback().

  2. How many buffers do you queue? oboe kBufferQueueLength = 2

  3. What is the size of each buffer in frames?

@taemincho (Author)

@philburk

  1. We also use SLAndroidSimpleBufferQueueItf
  2. We use 2 buffers
  3. We use the native buffer size reported by getProperty(String) for PROPERTY_OUTPUT_FRAMES_PER_BUFFER.
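For reference, when going through Oboe's own OpenSL ES path, the usual way to hand that PROPERTY_OUTPUT_FRAMES_PER_BUFFER value (queried from AudioManager on the Java side) to the library is via oboe::DefaultStreamValues. A minimal sketch; the JNI class and method names are placeholders:

#include <jni.h>
#include <oboe/Oboe.h>

// Called once from Java/Kotlin with values obtained from
// AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE) and
// AudioManager.getProperty(PROPERTY_OUTPUT_FRAMES_PER_BUFFER).
extern "C" JNIEXPORT void JNICALL
Java_com_example_audio_NativeAudio_setDefaultStreamValues(
        JNIEnv * /*env*/, jclass /*clazz*/, jint sampleRate, jint framesPerBurst) {
    oboe::DefaultStreamValues::SampleRate = static_cast<int32_t>(sampleRate);
    oboe::DefaultStreamValues::FramesPerBurst = static_cast<int32_t>(framesPerBurst);
}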

@Piasy commented Mar 2, 2020

I also encountered this issue. In our crash report system, all of the devices for this report are on Android 10; I checked other reports, and it happens on Android 8 (1 time), Android 9 (5 times), and Android 10 (17 times).

Below are Android 10 devices:

HUAWEI  LIO AN00	Android 10,level 29
HUAWEI  ELE AL00	Android 10,level 29
HUAWEI  EVR AL00	Android 10,level 29
HUAWEI  VOG AL00	Android 10,level 29
HUAWEI  VOG AL10	Android 10,level 29
HUAWEI  LYA AL00	Android 10,level 29
HUAWEI  TAS AN00	Android 10,level 29
HUAWEI  LYA AL10	Android 10,level 29
HUAWEI  EML AL00	Android 10,level 29
HUAWEI  ALP AL00	Android 10,level 29
HUAWEI HONOR  PCT AL10	Android 10,level 29
XIAOMI  MI 9	Android 10,level 29

And I checked the logs of a HUAWEI LIO AN00 (Android 10, level 29); the crash did happen when the user connected Bluetooth. Here is the abort log:

03-01 18:00:59.217 26443 27990 W AudioTrack: createTrack_l(1966): AUDIO_OUTPUT_FLAG_FAST denied by server; frameCount 3848 -> 3848
03-01 18:00:59.236 26443 27990 F AudioTrackShared: releaseBuffer: mUnreleased out of range, !(stepCount:240 <= mUnreleased:0 <= mFrameCount:3848), BufferSizeInFrames:3848

BTW, I'm using the OpenSLES mode.

@peterdk commented Apr 5, 2020

I'm regularly seeing crashes that I think are related to this. I'm using AAudio directly, not through Oboe, but of course I'm also interested in a workaround or solution. Could using mutexes in the callback and the shutdown logic help, for example?

backtrace:
  #00  pc 000000000002216c  /system/lib64/libc.so (abort+116)
  #01  pc 0000000000008644  /system/lib64/liblog.so (__android_log_assert+296)
  #02  pc 0000000000079814  /system/lib64/libaudioclient.so (android::AudioTrack::releaseBuffer(android::AudioTrack::Buffer const*)+644)
  #03  pc 00000000000788ec  /system/lib64/libaudioclient.so (android::AudioTrack::processAudioBuffer()+4112)
  #04  pc 00000000000775cc  /system/lib64/libaudioclient.so (android::AudioTrack::AudioTrackThread::threadLoop()+252)
  #05  pc 0000000000010074  /system/lib64/libutils.so (android::Thread::_threadLoop(void*)+180)
  #06  pc 0000000000090328  /system/lib64/libc.so (__pthread_start(void*)+36) 
  #07  pc 0000000000023a28  /system/lib64/libc.so (__start_thread+68)

@chenposun

Have there been any updates on this issue?

We are also seeing this call stack as one of our most common crashes on the Play Store.

@philburk (Collaborator) commented May 19, 2020

I have added a tech note dedicated to this issue. Check there for the latest status:
https://github.com/google/oboe/wiki/TechNote_ReleaseBuffer

@peterdk It is rare for this crash to occur when using AAudio because the routing is handled differently. I would love to get more information:

  1. What device and OS versions are you seeing this on?
  2. What performanceMode are you using?
  3. Are you using a callback or the blocking write()?
  4. Can you reproduce it in your office?

[UPDATE: Calling getFramesRead() from the callback can trigger this for AAudio!]

philburk added a commit that referenced this issue May 20, 2020
Move call to getPosition() outside the callback.
This was triggering a restoreTrack_l() inside AudioFlinger following a headset insertion.
That in turn caused an assert in releaseBuffer() in AudioTrack or AudioRecord.

Now it is called when needed by getFramesRead() or getFramesWritten().

Fixes #535
@philburk (Collaborator)

I think we found a workaround for this crash in releaseBuffer() for OpenSL ES. The Oboe callback was indirectly calling getPosition(), which triggered a rerouting of the track in AudioFlinger. If that happens during the callback then releaseBuffer() might assert. The likelihood of the assert increased as the CPU workload in the callback increased.

Oboe version 1.4.1 has the fix for OpenSL ES in #863.

@philburk (Collaborator)

I did an experiment. Calling getFramesRead() from the callback of a legacy Oboe OUTPUT stream can trigger this bug. I haven't tried it, but calling getTimestamp() should also trigger the bug.
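On the app side, the corresponding workaround is to keep position queries out of the data callback: count frames locally in onAudioReady() and only call getFramesRead()/getTimestamp() from a non-audio thread. A minimal sketch with assumed names:

#include <atomic>
#include <cstdint>
#include <oboe/Oboe.h>

class MyCallback : public oboe::AudioStreamCallback {
public:
    oboe::DataCallbackResult onAudioReady(oboe::AudioStream *stream,
                                          void *audioData, int32_t numFrames) override {
        // Do NOT call stream->getFramesRead() or stream->getTimestamp() here:
        // on a legacy (OpenSL ES / AudioTrack) stream that can force a track
        // restore mid-callback and trip the releaseBuffer() assert.
        mFramesRendered.fetch_add(numFrames, std::memory_order_relaxed);
        // ... render numFrames of audio into audioData ...
        return oboe::DataCallbackResult::Continue;
    }

    // Safe to call from a UI or control thread instead.
    int64_t framesRendered() const {
        return mFramesRendered.load(std::memory_order_relaxed);
    }

private:
    std::atomic<int64_t> mFramesRendered{0};
};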

@xuchuanyin

I also encountered this problem and got the following log:

05-27 11:30:29.397 18129 18129 I crash_dump64: performing dump of process 15676 (target tid = 15708)
05-27 11:30:29.400 15676 15682 I awei.vassistan: jit_compiled:[OK] int android.content.res.XmlBlock$Parser.next() @ /system/framework/framework.jar
05-27 11:30:29.404   699  1163 I asd_out : L = 953539 29.277771db, R = 953539 29.277771db
05-27 11:30:29.407  1374  1911 I AwareLog: RMS.AwareAppAssociate: [addWindow]:15676 [mode]:0 [code]:65358121 width:-1 height:-2 alpha:1.0
05-27 11:30:29.407  1374  1911 I AwareLog: RMS.AwareAppAssociate: [addWindow]:15676 [mode]:0 [code]:65358121 isEvil:false
05-27 11:30:29.408  1374 13332 V WindowManager: addWindow: Window{3e54929 u0 com.huawei.vassistant}
05-27 11:30:29.409 15676 15676 I ViewRootImpl[]: first performTraversals
05-27 11:30:29.409    53    53 W migration/7: type=1400 audit(0.0:174321): avc: granted { setsched } for pid=53 scontext=u:r:kernel:s0 tcontext=u:r:kernel:s0 tclass=process
05-27 11:30:29.413 15676 15676 I ViewRootImpl[]: relayoutWindow
05-27 11:30:29.414 15676 15682 I awei.vassistan: jit_compiled:[OK] boolean android.graphics.RenderNode.setOutline(android.graphics.Outline) @ /system/framework/framework.jar
05-27 11:30:29.414  1374 13332 I WindowManager_visibility: start Relayout Window{3e54929 u0 com.huawei.vassistant} oldVis=0 newVis=0 width=1080 height=1074
05-27 11:30:29.414  1374  1911 I AwareLog: RMS.AwareAppAssociate: [updateWindow]:15676 [mode]:3 [code]:65358121 width:1080 height:1074 alpha:1.0 hasSurface:false
05-27 11:30:29.414  1374  1911 I AwareLog: RMS.AwareAppAssociate: [updateWindow]:15676 [mode]:3 [code]:65358121 isEvil:true
05-27 11:30:29.416  1374 13332 I WindowManager_visibility: DrawState change: Window{3e54929 u0 com.huawei.vassistant} from:NO_SURFACE to:DRAW_PENDING reason:createSurface
05-27 11:30:29.419 18129 18129 F DEBUG   : *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
05-27 11:30:29.419 18129 18129 F DEBUG   : Build fingerprint: 'HUAWEI/ELE-AL00/HWELE:10/HUAWEIELE-AL00/10.0.0.68C00:user/release-keys'
05-27 11:30:29.419 18129 18129 F DEBUG   : Revision: '0'
05-27 11:30:29.419 18129 18129 F DEBUG   : ABI: 'arm64'
05-27 11:30:29.419 18129 18129 E libc    : Access denied finding property "persist.mygote.disable"
05-27 11:30:29.419 18129 18129 E libc    : Access denied finding property "persist.mygote.escape.enable"
05-27 11:30:29.420 18129 18129 F DEBUG   : SYSVMTYPE: Maple
05-27 11:30:29.420 18129 18129 F DEBUG   : APPVMTYPE: Art
05-27 11:30:29.420 18129 18129 F DEBUG   : Timestamp: 2020-05-27 11:30:29+0800
05-27 11:30:29.420 18129 18129 F DEBUG   : pid: 15676, tid: 15708, name: Caption-AudioAc  >>> com.huawei.vassistant <<<
05-27 11:30:29.420 18129 18129 F DEBUG   : uid: 10101
05-27 11:30:29.420 18129 18129 F DEBUG   : signal 6 (SIGABRT), code -1 (SI_QUEUE), fault addr --------
05-27 11:30:29.420 18129 18129 F DEBUG   : Abort message: 'releaseBuffer: mUnreleased out of range, !(stepCount:26 <= mUnreleased:0 <= mFrameCount:3200), BufferSizeInFrames:3200'
05-27 11:30:29.420 18129 18129 F DEBUG   :     x0  0000000000000000  x1  0000000000003d5c  x2  0000000000000006  x3  0000007c252982f0
05-27 11:30:29.420 18129 18129 F DEBUG   :     x4  0000000000000000  x5  0000000000000000  x6  0000000000000000  x7  7f7f7f7f7f7f7f7f
05-27 11:30:29.420 18129 18129 F DEBUG   :     x8  00000000000000f0  x9  de1a9ecde05b3b67  x10 0000000000000001  x11 0000000000000000
05-27 11:30:29.420 18129 18129 F DEBUG   :     x12 fffffff0fffffbdf  x13 0000000000000001  x14 0000000000000002  x15 073733060f0a1a02
05-27 11:30:29.420 18129 18129 F DEBUG   :     x16 0000007d1a1f2908  x17 0000007d1a1d21b0  x18 0000007c24f5c000  x19 0000000000003d3c
05-27 11:30:29.420 18129 18129 F DEBUG   :     x20 0000000000003d5c  x21 00000000ffffffff  x22 0000007c2ce9d2aa  x23 0000007c2cb0e318
05-27 11:30:29.420 18129 18129 F DEBUG   :     x24 0000007d19e658e8  x25 0000000000000034  x26 0000007c86530840  x27 0000007c2529a020
05-27 11:30:29.420 18129 18129 F DEBUG   :     x28 0000007d19eef360  x29 0000007c25298390
05-27 11:30:29.420 18129 18129 F DEBUG   :     sp  0000007c252982d0  lr  0000007d1a187040  pc  0000007d1a18706c
05-27 11:30:29.431  1374  1911 I AwareLog: RMS.AwareAppAssociate: [updateWindow]:15676 [mode]:3 [code]:65358121 width:-1 height:-1 alpha:1.0 hasSurface:true
05-27 11:30:29.431  1374  1911 I AwareLog: RMS.AwareAppAssociate: [updateWindow]:15676 [mode]:3 [code]:65358121 isEvil:fals

@philburk (Collaborator) commented May 27, 2020

@xuchuanyin - Please use Oboe v1.4.1 or later. It has a fix.

See https://github.com/google/oboe/wiki/TechNote_ReleaseBuffer for more information.

@triplef commented Oct 15, 2021

We’re using Oboe v1.6.1 and still seeing this crash quite frequently. Is there anything else we should be doing to avoid it?

We don’t call stream->getFramesRead(), but only stream->getTimestamp(), but not from any callback.
