
ringbuffer of dlt-daemon only records the old log messages #533

Closed
cumtsmart opened this issue Sep 4, 2023 · 10 comments


cumtsmart commented Sep 4, 2023

In my opinion, the ring buffer of dlt-daemon should record the latest dlt logs when no dlt-client is connected: when the buffer is full, a newly arriving log should squeeze out the oldest one; that is what a ring buffer means. But I found that the ring buffer just drops newly arriving dlt logs when the buffer is full.
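
To make clear what I expected (this is just a standalone sketch of the overwrite-oldest semantics, not DLT code), a ring buffer in that sense would look roughly like this:

#include <stdio.h>

#define RB_CAPACITY 4

struct ringbuf {
    int data[RB_CAPACITY];
    unsigned head;   /* next write slot */
    unsigned count;  /* elements currently stored */
};

/* Always accepts the new element; when full, the oldest one is overwritten. */
static void rb_push_overwrite(struct ringbuf *rb, int value)
{
    rb->data[rb->head] = value;
    rb->head = (rb->head + 1) % RB_CAPACITY;
    if (rb->count < RB_CAPACITY)
        rb->count++;
}

int main(void)
{
    struct ringbuf rb = { {0}, 0, 0 };
    for (int i = 1; i <= 6; i++)
        rb_push_overwrite(&rb, i);
    /* With capacity 4 and 6 pushes, the buffer keeps the newest values: 3 4 5 6 */
    unsigned start = (rb.head + RB_CAPACITY - rb.count) % RB_CAPACITY;
    for (unsigned i = 0; i < rb.count; i++)
        printf("%d ", rb.data[(start + i) % RB_CAPACITY]);
    printf("\n");
    return 0;
}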


cumtsmart commented Sep 4, 2023

The logic is in dlt_daemon_client.c, in dlt_daemon_client_send():

/* Message was not sent to client, so store it in client ringbuffer */
if ((sock != DLT_DAEMON_SEND_FORCE) &&
    ((daemon->state == DLT_DAEMON_STATE_BUFFER) || (daemon->state == DLT_DAEMON_STATE_SEND_BUFFER) ||
     (daemon->state == DLT_DAEMON_STATE_BUFFER_FULL))) {
    if (daemon->state != DLT_DAEMON_STATE_BUFFER_FULL) {
        /* Store message in history buffer */
        ret = dlt_buffer_push3(&(daemon->client_ringbuffer), data1, size1, data2, size2, 0, 0);
        if (ret < DLT_RETURN_OK) {
            dlt_daemon_change_state(daemon, DLT_DAEMON_STATE_BUFFER_FULL);
        }
    }
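    /* Once the daemon is in BUFFER_FULL state, the push above is skipped:
     * new messages are only counted below and dropped; nothing already
     * stored in the ring buffer is evicted. */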
    if (daemon->state == DLT_DAEMON_STATE_BUFFER_FULL) {
        daemon->overflow_counter += 1;
        if (daemon->overflow_counter == 1)
            dlt_vlog(LOG_INFO, "%s: Buffer is full! Messages will be discarded.\n", __func__);

        return DLT_DAEMON_ERROR_BUFFER_FULL;
    }
} else {
    if ((daemon->overflow_counter > 0) &&
        (daemon_local->client_connections > 0)) {
        sent_message_overflow_cnt++;
        if (sent_message_overflow_cnt >= 2) {
            sent_message_overflow_cnt--;
        }
        else {
            if (dlt_daemon_send_message_overflow(daemon, daemon_local,
                                      verbose) == DLT_DAEMON_ERROR_OK) {
                dlt_vlog(LOG_WARNING,
                         "%s: %u messages discarded! Now able to send messages to the client.\n",
                         __func__,
                         daemon->overflow_counter);
                daemon->overflow_counter = 0;
                sent_message_overflow_cnt--;
            }
        }
    }
}


minminlittleshrimp commented Sep 6, 2023

Hello @cumtsmart,
thanks for your interest in the DLT ring buffer.

I would like to briefly explain the DLT daemon ring buffer and the logic in dlt_daemon_client_send():
The DLT daemon switches to the buffer-full state after the ring buffer reaches its maximum size. The log info about discarded messages is actually about the old messages.

[20350.638744]~DLT~29161~DEBUG    ~BufferLength=424, HeaderSize=42, DataSize=20
[20350.638746]~DLT~29161~DEBUG    ~BufferLength=370, HeaderSize=42, DataSize=20
[20350.638749]~DLT~29161~DEBUG    ~BufferLength=316, HeaderSize=42, DataSize=20
[20350.638751]~DLT~29161~DEBUG    ~BufferLength=262, HeaderSize=42, DataSize=20
[20350.638754]~DLT~29161~DEBUG    ~BufferLength=208, HeaderSize=42, DataSize=20
[20350.638756]~DLT~29161~DEBUG    ~BufferLength=154, HeaderSize=42, DataSize=20
[20350.638759]~DLT~29161~DEBUG    ~BufferLength=100, HeaderSize=42, DataSize=20
[20350.638762]~DLT~29161~DEBUG    ~BufferLength=46, HeaderSize=42, DataSize=20
[20350.645873]~DLT~29161~DEBUG    ~BufferLength=46, HeaderSize=42, DataSize=20
[20351.147585]~DLT~29161~DEBUG    ~BufferLength=46, HeaderSize=42, DataSize=20
[20351.651632]~DLT~29161~DEBUG    ~BufferLength=46, HeaderSize=42, DataSize=20
[20352.152892]~DLT~29161~DEBUG    ~BufferLength=46, HeaderSize=42, DataSize=20
[20352.152896]~DLT~29161~INFO     ~Switched to buffer full state.
[20352.152897]~DLT~29161~INFO     ~dlt_daemon_client_send: Buffer is full! Messages will be discarded.
[20352.653984]~DLT~29161~DEBUG    ~BufferLength=46, HeaderSize=42, DataSize=20
[20352.740147]~DLT~29161~INFO     ~dlt_daemon_send_ringbuffer_to_client()
[20352.740152]~DLT~29161~DEBUG    ~Can't send contents of ring buffer to clients
[20353.154545]~DLT~29161~DEBUG    ~BufferLength=46, HeaderSize=42, DataSize=20
[20353.655243]~DLT~29161~DEBUG    ~BufferLength=46, HeaderSize=42, DataSize=20
[20353.740418]~DLT~29161~INFO     ~dlt_daemon_send_ringbuffer_to_client()
[20353.740424]~DLT~29161~DEBUG    ~Can't send contents of ring buffer to clients
[20354.159856]~DLT~29161~DEBUG    ~BufferLength=46, HeaderSize=42, DataSize=20
[20354.660597]~DLT~29161~DEBUG    ~BufferLength=46, HeaderSize=42, DataSize=20
[20354.739157]~DLT~29161~INFO     ~dlt_daemon_send_ringbuffer_to_client()
[20354.739162]~DLT~29161~DEBUG    ~Can't send contents of ring buffer to clients
[20355.164794]~DLT~29161~DEBUG    ~BufferLength=46, HeaderSize=42, DataSize=20
[20355.665764]~DLT~29161~DEBUG    ~BufferLength=46, HeaderSize=42, DataSize=20

I agree that the log message here is not clear; maybe "Discard old messages" would be better.
How to check:

In dlt.conf set:

Verbose = 1
RingbufferMinSize = 500
RingbufferMaxSize = 10000
RingbufferStepSize = 1000

Then run:

$ dlt-daemon
$ dlt-test-stress-user -n 10000 -d 100000

Wait for a couple of minutes and receive the logs; dlt is writing new logs over the old logs in its ring buffer:

$ dlt-receive -a localhost
2023/09/07 13:19:45.605695   36559938 031 ECU1 DIFT INFO log info V 2 [288 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:45.605726   36560939 032 ECU1 DIFT INFO log info V 2 [289 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:45.605753   36561940 033 ECU1 DIFT INFO log info V 2 [290 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:45.605781   36562941 034 ECU1 DIFT INFO log info V 2 [291  00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:45.605807   36757744 000 ECU1 DA1- DC1- control response N 1 [message_buffer_overflow, ok, 01 c2 00 00 00]
2023/09/07 13:19:45.605829   36757799 228 ECU1 DIFT INFO log info V 2 [485 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:45.667279   36758801 229 ECU1 DIFT INFO log info V 2 [486 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:45.767605   36759804 230 ECU1 DIFT INFO log info V 2 [487 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:45.867780   36760806 231 ECU1 DIFT INFO log info V 2 [488 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:45.968059   36761809 232 ECU1 DIFT INFO log info V 2 [489 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:46.068191   36762810 233 ECU1 DIFT INFO log info V 2 [490 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:46.168483   36763813 234 ECU1 DIFT INFO log info V 2 [491 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:46.268708   36764816 235 ECU1 DIFT INFO log info V 2 [492 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:46.368808   36765817 236 ECU1 DIFT INFO log info V 2 [493 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:46.471231   36766841 237 ECU1 DIFT INFO log info V 2 [494 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:46.575499   36767883 238 ECU1 DIFT INFO log info V 2 [495 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:46.679160   36768920 239 ECU1 DIFT INFO log info V 2 [496 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:46.779323   36769922 240 ECU1 DIFT INFO log info V 2 [497 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:46.884148   36770969 241 ECU1 DIFT INFO log info V 2 [498 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:46.984441   36771973 242 ECU1 DIFT INFO log info V 2 [499 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 13:19:47.084620   36772974 243 ECU1 DIFT INFO log info V 2 [500 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]

Regards


cumtsmart commented Sep 7, 2023

@minminlittleshrimp
Thanks for your reply.
But in my test, it is not "Discard old messages": the ring buffer just drops the newly arriving dlt logs when the buffer is full.
I set RingbufferMaxSize = 30M and waited about 30 minutes for the ring buffer to fill up, then connected with dlt-viewer. dlt-viewer receives the ring buffer data first and then the real-time dlt logs, but there is a time gap between the ring buffer logs and the real-time logs.
I think that if the ring buffer always recorded the latest logs, this timestamp gap should not exist.

[attached image: dlt]


minminlittleshrimp commented Sep 7, 2023

Hello @cumtsmart

We need to differentiate between the dlt ring buffer and the libdlt buffer.
If dlt-daemon is not available, new messages are discarded at the libdlt buffer.
Hence, dlt-daemon must be available for message overwriting.
Check:

$ dlt-test-stress-user -n 10000 -r 100
[ 8442.463761]~DLT~25354~INFO     ~FIFO /tmp/dlt cannot be opened. Retrying later...
Tests starting
[ 8469.520990]~DLT~25354~WARNING  ~Buffer full! Messages will be discarded.

This is in fact a WARNING message, not an INFO like the one from dlt-daemon.
After dlt-daemon is started:

$ dlt-test-stress-user -n 10000 -r 100
[ 8442.463761]~DLT~25354~INFO     ~FIFO /tmp/dlt cannot be opened. Retrying later...
Tests starting
[ 8469.520990]~DLT~25354~WARNING  ~Buffer full! Messages will be discarded.
[ 8533.749954]~DLT~25354~NOTICE   ~Logging (re-)enabled!
[ 8533.750772]~DLT~25354~WARNING  ~30636 messages discarded!

The discarded messages here are new messages that arrive while the libdlt buffer is full.
Honestly, this is our mechanism for dlt, so please try again with dlt-daemon available on the host before any application flushes logs.
If you have any idea or proposal for this, please feel free to contribute to dlt.

Thank you


cumtsmart commented Sep 7, 2023

@minminlittleshrimp

I mean the ring buffer on the dlt-daemon side, not the libdlt buffer. And my dlt-daemon is also started up.

My attention is focused on dlt_daemon_client.c::dlt_daemon_client_send. This function invokes:

    /* Store message in history buffer */
    ret = dlt_buffer_push3(&(daemon->client_ringbuffer), data1, (unsigned int) size1, data2, (unsigned int) size2, 0, 0);

The dlt_buffer_push3 function stores the dlt message in dlt-daemon's process ring buffer.

When a dlt-client connects, the dlt messages in dlt-daemon's process ring buffer are synced to the dlt-client.

Is my understanding correct?


minminlittleshrimp commented Sep 7, 2023

Hello @cumtsmart
I just checked the source code again.
Basically, I can see that both libdlt and dlt-daemon use the same API for buffering.
I also tested following your report, and in fact the new logs are discarded:

Ringbuffer configuration: 500/10000/500

$ dlt-test-stress-user -n 10000 -d 10 -r 1
Tests starting
Tests finished

$ dlt-receive -a localhost
2023/09/07 17:18:05.980989  179710580 060 ECU1 DIFT INFO log info V 2 [61 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 17:18:05.980997  179710581 061 ECU1 DIFT INFO log info V 2 [62 00'01'02'03'04'05'06'07'08'09'0a'0b'0c'0d'0e'0f'10'11'12'13'14'15'16'17'18'19'1a'1b'1c'1d'1e'1f'20'21'22'23'24'25'26'27'28'29'2a'2b'2c'2d'2e'2f'30'31'32'33'34'35'36'37'38'39'3a'3b'3c'3d'3e'3f'40'41'42'43'44'45'46'47'48'49'4a'4b'4c'4d'4e'4f'50'51'52'53'54'55'56'57'58'59'5a'5b'5c'5d'5e'5f'60'61'62'63]
2023/09/07 17:18:05.981006  179761457 000 ECU1 DA1- DC1- control response N 1 [message_buffer_overflow, ok, 01 d6 26 00 00]

I could say that whenever this message_buffer_overflow control response appears to indicate the overflow, no new logs are buffered. Let me get a deeper understanding of this behavior.

Regards


minminlittleshrimp commented Sep 7, 2023

Hello @cumtsmart

Case: No dlt client available
I looked at the source code and did some tests with the buffer continuously growing.
What I have found is that this ring buffer is designed to work in producer/consumer mode (writes are faster than reads, and in this case read = 0 until the buffer is full):

new_head->read = 0;

which means the loss of recent logs. There is no overwriting; there is only a check for a full buffer during the write loop:
if ((buf->size + sizeof(DltBufferHead) + buf->step_size) > buf->max_size)
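
To make this concrete, here is a minimal standalone sketch (not the real dlt_buffer implementation) of what such a full-buffer check implies: once the limit is reached, the new message is rejected and the old content stays untouched:

#include <string.h>

struct sketch_buffer {
    unsigned char *mem;
    unsigned size;      /* bytes currently used */
    unsigned max_size;  /* hard limit, cf. RingbufferMaxSize */
};

/* Returns 0 on success, -1 when the new message is dropped because the
 * buffer is full (drop-new, the behavior described above). */
static int sketch_push(struct sketch_buffer *buf,
                       const unsigned char *data, unsigned len)
{
    if (buf->size + len > buf->max_size)
        return -1;                  /* new message discarded, old ones stay */
    memcpy(buf->mem + buf->size, data, len);
    buf->size += len;
    return 0;                       /* a connected client later drains the buffer */
}

int main(void)
{
    unsigned char storage[8];
    struct sketch_buffer buf = { storage, 0, sizeof(storage) };
    const unsigned char msg[4] = "abc";
    sketch_push(&buf, msg, sizeof(msg));                        /* stored */
    sketch_push(&buf, msg, sizeof(msg));                        /* stored, buffer now full */
    return sketch_push(&buf, msg, sizeof(msg)) == -1 ? 0 : 1;   /* dropped */
}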

Hence, the mechanism of the dlt ring buffers:

  • Both dlt-daemon and libdlt use their own ring buffers.
  • The ring buffers in dlt-daemon and libdlt all work in producer/consumer mode -> new logs are discarded when the buffer is full.

In a nutshell, we need to correct the document instead 😀

Hello @michael-methner, please kindly check my points.
Thank you in advance.


minminlittleshrimp commented Sep 7, 2023

In fact, we do not need to correct the doc:
https://github.com/COVESA/dlt-daemon/blob/master/doc/dlt_design_specification.md#overflow-handling

"If the named pipe/socket of the DLT daemon is full, an overflow flag is set and the message stored in a ring buffer. The next time, a message could be sent to the DLT daemon, an overflow message is sent first, then the contents of the ring buffer. If sending of this message was possible, the overflow flag is reset."

No overwriting behavior is mentioned; it only stores the old messages and turns on the full flag.

@cumtsmart

@minminlittleshrimp
Ok, thanks for your time.😀

@michael-methner

not a bug

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
