Thanos Store Panic #6934

Open
nicolastakashi opened this issue Nov 27, 2023 · 21 comments

@nicolastakashi
Contributor

nicolastakashi commented Nov 27, 2023

Thanos, Prometheus and Golang version used:

Thanos: 0.32.5
Prometheus: 2.48

Object Storage Provider: AWS S3

What happened:

Using Thanos Store with a time filter, it crashes from time to time with the following message.

panic: runtime error: slice bounds out of range [:16000] with capacity 1282

What you expected to happen:

It should not crash.

How to reproduce it (as minimally and precisely as possible):

Honestly, I have no idea; it's only happening in one shard.

Full logs to relevant components:

panic: runtime error: slice bounds out of range [:16000] with capacity 1290
goroutine 507203 [running]:
github.com/thanos-io/thanos/pkg/store.(*bucketChunkReader).loadChunks(0x400261e720, {0x26b43f8, 0x407d9c7540}, {0x406284b000, 0x10c, 0x400195e870?}, {0x4102db7748, 0x1, 0x2}, 0x3968440?, ...)
	/app/pkg/store/bucket.go:3342 +0xe1c
github.com/thanos-io/thanos/pkg/store.(*bucketChunkReader).load.func4()
	/app/pkg/store/bucket.go:3270 +0x74
golang.org/x/sync/errgroup.(*Group).Go.func1()
	/go/pkg/mod/golang.org/x/sync@v0.3.0/errgroup/errgroup.go:75 +0x58
created by golang.org/x/sync/errgroup.(*Group).Go in goroutine 505525
	/go/pkg/mod/golang.org/x/sync@v0.3.0/errgroup/errgroup.go:72 +0x98

Anything else we need to know:

@nicolastakashi changed the title from "Thanos Store Panicing" to "Thanos Store Panic" on Nov 27, 2023
@yeya24
Contributor

yeya24 commented Nov 27, 2023

@nicolastakashi Can you share the actual panic stacktrace? I think I saw something similar last week

@MichaHoffmann
Contributor

MichaHoffmann commented Nov 27, 2023

We have:

	bufPooled, err := r.block.chunkPool.Get(r.block.estimatedMaxChunkSize)
	if err == nil {
		buf = *bufPooled
	} else {
		buf = make([]byte, r.block.estimatedMaxChunkSize)
	}

and the crash happens here, a few lines later:

		chunkLen = r.block.estimatedMaxChunkSize
		if i+1 < len(pIdxs) {
			if diff = pIdxs[i+1].offset - pIdx.offset; int(diff) < chunkLen {
				chunkLen = int(diff)
			}
		}
->	        cb := buf[:chunkLen]
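
As a standalone illustration (not Thanos code), reslicing past a slice's capacity is exactly what produces this panic message; if estimatedMaxChunkSize is larger than the pooled buffer's capacity, buf[:chunkLen] must fail:

package main

// Minimal sketch: reslicing beyond capacity panics with the same message as
// the reports above ("slice bounds out of range [:16000] with capacity 1282").
func main() {
	buf := make([]byte, 0, 1282) // pooled buffer with capacity 1282
	chunkLen := 16000            // value taken from estimatedMaxChunkSize
	_ = buf[:chunkLen]           // panic: runtime error: slice bounds out of range [:16000] with capacity 1282
}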

@MichaHoffmann
Contributor

Is the pool returning slices that are too small?

@nicolastakashi
Contributor Author

I rolled back to 0.31 and it doesn't seem to happen on that version.

@jon-rei

jon-rei commented Nov 28, 2023

This is also happening to us:
Thanos: v0.32.5 (deployed through bitnami helm chart)
Prometheus: v2.45.1

The problem only occurs on queries that go a bit further back in time and rely solely on the S3 data served by the store.

panic: runtime error: slice bounds out of range [:16000] with capacity 2422
goroutine 2641055 [running]:
github.com/thanos-io/thanos/pkg/store.(*bucketChunkReader).loadChunks(0xc24d3caae0, {0x2d0fc20, 0xc338675680}, {0xc362b24000, 0x9, 0x0?}, {0xc26a0f8f80, 0x1, 0x2}, 0x1, ...)
/bitnami/blacksmith-sandox/thanos-0.32.5/src/github.com/thanos-io/thanos/pkg/store/bucket.go:3342 +0x11e5
github.com/thanos-io/thanos/pkg/store.(*bucketChunkReader).load.func4()
/bitnami/blacksmith-sandox/thanos-0.32.5/src/github.com/thanos-io/thanos/pkg/store/bucket.go:3270 +0xff
golang.org/x/sync/errgroup.(*Group).Go.func1()
/bitnami/blacksmith-sandox/thanos-0.32.5/pkg/mod/golang.org/x/sync@v0.3.0/errgroup/errgroup.go:75 +0x64
created by golang.org/x/sync/errgroup.(*Group).Go
/bitnami/blacksmith-sandox/thanos-0.32.5/pkg/mod/golang.org/x/sync@v0.3.0/errgroup/errgroup.go:72 +0xa5

I can also provide traces if that would be helpful.

@MichaHoffmann
Contributor

How is the store gateway configured?

@nicolastakashi
Contributor Author

@MichaHoffmann This is mine.

- store
- '--log.level=warn'
- '--log.format=json'
- '--grpc-address=0.0.0.0:10901'
- '--http-address=0.0.0.0:10902'
- '--data-dir=/data'
- '--objstore.config-file=/conf/objstore.yml'
- '--index-cache.config-file=/conf/index-cache.yml'
- '--max-time=-48h'
- '--min-time=-720h'
- '--grpc-grace-period=5s'
- '--store.enable-index-header-lazy-reader'

@jon-rei

jon-rei commented Nov 28, 2023

This is our config:

- store
- '--log.level=info'
- '--log.format=logfmt'
- '--grpc-address=0.0.0.0:10901'
- '--http-address=0.0.0.0:10902'
- '--data-dir=/data'
- '--objstore.config-file=/conf/objstore.yml'
- |
  --tracing.config=type: OTLP
  config:
    client_type: grpc
    service_name: "thanos-storegateway"
    endpoint: 127.0.0.1:4317
    insecure: true
    compression: gzip

@pauloconnor

pauloconnor commented Mar 7, 2024

We're also seeing this issue on one of our six store gateway sets.

Prometheus: v2.49.1
Thanos sidecar: v0.28.1
Thanos: v0.34.0

panic: runtime error: slice bounds out of range [:1027] with capacity 1024

goroutine 303830 [running]:
github.com/thanos-io/thanos/pkg/store.(*bucketChunkReader).loadChunks(0xc06bb0d980, {0x35ee848, 0xc1421085a0}, {0xc1d4e00000, 0x1849, 0x100000001?}, {0xc07006e2f8, 0x1, 0x2}, 0x2, ...)
	/bitnami/blacksmith-sandox/thanos-0.34.0/src/github.com/thanos-io/thanos/pkg/store/bucket.go:3532 +0x1118
github.com/thanos-io/thanos/pkg/store.(*bucketChunkReader).load.func4()
	/bitnami/blacksmith-sandox/thanos-0.34.0/src/github.com/thanos-io/thanos/pkg/store/bucket.go:3466 +0x11b
golang.org/x/sync/errgroup.(*Group).Go.func1()
	/bitnami/blacksmith-sandox/thanos-0.34.0/pkg/mod/golang.org/x/sync@v0.5.0/errgroup/errgroup.go:75 +0x56
created by golang.org/x/sync/errgroup.(*Group).Go in goroutine 295848
	/bitnami/blacksmith-sandox/thanos-0.34.0/pkg/mod/golang.org/x/sync@v0.5.0/errgroup/errgroup.go:72 +0x96

- name: storegateway
  image: docker.io/bitnami/thanos:0.34.0-debian-11-r0
  args:
    - store
    - '--log.level=info'
    - '--log.format=logfmt'
    - '--grpc-address=0.0.0.0:10901'
    - '--http-address=0.0.0.0:10902'
    - '--data-dir=/data'
    - '--objstore.config-file=/conf/objstore.yml'

@nicolastakashi
Contributor Author

Still seeing this issue on 0.33.0
panic: runtime error: slice bounds out of range [:16000] with capacity 1249
goroutine 15326510 [running]:
github.com/thanos-io/thanos/pkg/store.(*bucketChunkReader).loadChunks(0x40032faae0, {0x26c6c28, 0x40248fcf00}, {0x4023fe8000, 0x94, 0x2bf1bc?}, {0x41ced43ba8, 0x1, 0x2}, 0x3997b40?, ...)
	/app/pkg/store/bucket.go:3504 +0xd18
github.com/thanos-io/thanos/pkg/store.(*bucketChunkReader).load.func4()
	/app/pkg/store/bucket.go:3438 +0x84
golang.org/x/sync/errgroup.(*Group).Go.func1()
	/go/pkg/mod/golang.org/x/sync@v0.3.0/errgroup/errgroup.go:75 +0x58
created by golang.org/x/sync/errgroup.(*Group).Go in goroutine 15326102
	/go/pkg/mod/golang.org/x/sync@v0.3.0/errgroup/errgroup.go:72 +0x98

@martinfrycs1

martinfrycs1 commented Mar 11, 2024

So far we have fixed it ourselves in pkg/store/bucket.go, line 3532:

		chunkLen = r.block.estimatedMaxChunkSize
		if i+1 < len(pIdxs) {
			if diff = pIdxs[i+1].offset - pIdx.offset; int(diff) < chunkLen {
				chunkLen = int(diff)
			}
		}

		// Fix: If we are about to read a chunk that is bigger than the buffer capacity,
		// we need to make sure we have enough space in the buffer.
		if cap(buf) < chunkLen {
			// Put the current buffer back to the pool.
			r.block.chunkPool.Put(&buf)

			// Get a new, bigger, buffer from the pool.
			bufPooled, err = r.block.chunkPool.Get(chunkLen)
			if err == nil {
				buf = *bufPooled
			} else {
				buf = make([]byte, chunkLen)
			}
		}
		// Fix: end

		cb := buf[:chunkLen]
		n, err = io.ReadFull(bufReader, cb)

From what I was able to debug, the source of the problem is the chunk size defined in the block meta JSON. For some reason it states a smaller size than the reality, so the requested buffer is too small and the store panics.
I guess the source of this is the compactor or the receiver. We also have "historic stores" serving data created by Thanos v0.30 and older, and there are no such panics there, so I assume it's a combination of the store optimization and something in the receiver or compactor code not filling these values correctly in the latest versions.

This will not fix the source of the issue, but it at least fixes the store code, which currently doesn't validate its inputs.

@dominicqi
Contributor

We're also seeing this issue on Thanos v0.35.1.

@yeya24
Contributor

yeya24 commented Aug 18, 2024

@martinfrycs1 Thanks, I think your fix makes sense to me. Would you like to create a PR?
The estimatedMaxChunkSize set in block metadata could indeed be smaller than the actual chunk length. The estimated chunk size is computed under quite a few constraints and skips some cases. https://github.com/thanos-io/thanos/blob/main/pkg/block/index.go#L322
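
For illustration only, a hypothetical sketch (the function name and shape are made up; this is not the code behind the link above) of how an estimate based on offset differences between consecutive chunks can undershoot the real maximum chunk size, e.g. because the final chunk is never measured:

// Hypothetical sketch, not Thanos code: estimating the max chunk size from
// offset differences never sees the size of the last chunk, so a large last
// chunk makes the estimate smaller than a chunk the store later has to read.
func estimateMaxChunkSize(chunkOffsets []uint64) int {
	maxSize := 0
	for i := 0; i+1 < len(chunkOffsets); i++ {
		if d := int(chunkOffsets[i+1] - chunkOffsets[i]); d > maxSize {
			maxSize = d
		}
	}
	return maxSize
}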

@martinfrycs1

OK, it makes sense to implement it this way then. I didn't have much time to explore the source of these data more closely. I will prepare a PR with this fix.

@dominicqi
Contributor

I think this should be fixed in

func (p *BucketedBytes) Get(sz int) (*[]byte, error) {

Pool.Get should not return a buffer smaller than the requested size. I haven't yet figured out where a buffer of the wrong size is being Put back.

// Get returns a new byte slice that fits the given size.
func (p *BucketedBytes) Get(sz int) (*[]byte, error) {
	p.mtx.Lock()
	defer p.mtx.Unlock()

	if p.maxTotal > 0 && p.usedTotal+uint64(sz) > p.maxTotal {
		return nil, ErrPoolExhausted
	}

	for i, bktSize := range p.sizes {
		if sz > bktSize {
			continue
		}
		b, ok := p.buckets[i].Get().(*[]byte)
		if !ok || /*check buffer cap from pool*/ cap(*b) < sz {
			b = p.new(bktSize)
		}

		p.usedTotal += uint64(cap(*b))
		return b, nil
	}

	// The requested size exceeds that of our highest bucket, allocate it directly.
	p.usedTotal += uint64(sz)
	return p.new(sz), nil
}

@martinfrycs1

It's not a problem of the pool not working correctly, but of the Thanos code asking for a buffer that is too small.

This whole code is inside a for loop. Previously, buffers with a fixed size of 16000 were requested; around version 0.31 (?) the code was changed to ask for r.block.estimatedMaxChunkSize instead, which varies and sometimes contains an incorrect value.

The code I provided above just makes sure the input value isn't blindly trusted: it checks whether a big enough buffer was actually requested before, so the "buffer overflow" doesn't happen. If not, the buffer is returned to the pool and a bigger one is requested.

We are running patched versions of Thanos with this change and have not hit this panic again.

@dominicqi
Contributor

thanos/pkg/pool/pool.go

But if a buffer whose capacity does not equal the bucket size has been Put into the pool, the returned buffer's capacity could still be less than the requested size.
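
To make that concrete, here is a simplified sketch of a bucketed pool (types, sizes, and names are made up; this is not the real thanos/pkg/pool code) showing how a buffer Put with a capacity smaller than its bucket size can later be handed back to a caller that asked for more:

package main

import (
	"fmt"
	"sync"
)

// Simplified bucketed pool: sync.Pool buckets keyed by size classes.
type bucketedPool struct {
	sizes   []int
	buckets []sync.Pool
}

func (p *bucketedPool) Get(sz int) []byte {
	for i, bktSize := range p.sizes {
		if sz > bktSize {
			continue
		}
		if b, ok := p.buckets[i].Get().(*[]byte); ok {
			return *b // the capacity of the pooled slice is trusted, not checked
		}
		return make([]byte, 0, bktSize)
	}
	return make([]byte, 0, sz)
}

func (p *bucketedPool) Put(b []byte) {
	sz := cap(b)
	for i, bktSize := range p.sizes {
		if sz > bktSize {
			continue
		}
		b = b[:0]
		p.buckets[i].Put(&b) // a cap-1100 slice ends up in the 2048-size bucket
		return
	}
}

func main() {
	p := &bucketedPool{sizes: []int{1024, 2048}, buckets: make([]sync.Pool, 2)}
	p.Put(make([]byte, 0, 1100)) // 1100 > 1024, so it is stored in the 2048 bucket
	buf := p.Get(2000)           // the caller asks for at least 2000 bytes...
	fmt.Println(cap(buf))        // ...but can get the cap-1100 slice back
	if cap(buf) < 2000 {
		fmt.Println("pool returned a buffer smaller than requested")
	}
}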

@martinfrycs1

From what I understand of the pool code, it is able to hold variably sized buffers.

In this panic (panic: runtime error: slice bounds out of range [:1027] with capacity 1024), a buffer with 1024 bytes of capacity is requested before the for loop, but later, inside the for loop, the code tries to copy 1027 bytes of data into it.

The fix I posted above just checks whether the already obtained buffer is big enough and, if not, swaps it for a bigger one.

@dominicqi
Contributor

dominicqi commented Aug 22, 2024

Maybe I'm wrong, but in that fix, after the check you Get from the pool again, and that Get could also return a buffer smaller than the requested size if the chunk pool still holds smaller buffers.

@yeya24
Contributor

yeya24 commented Aug 27, 2024

@dominicqi Thanks, this seems like a legitimate bug and we need to fix it.

But the issue found by @martinfrycs1 is also valid: we need to fix the problem of the actual chunk size being larger than the requested buffer size.

I think they are separate issues and we can fix them separately.

@skhilar

skhilar commented Sep 23, 2024

The requested-size check is there; how it returns a smaller slice is strange to me.

func (p *BucketedBytes) Get(sz int) (*[]byte, error) {
	p.mtx.Lock()
	defer p.mtx.Unlock()

	if p.maxTotal > 0 && p.usedTotal+uint64(sz) > p.maxTotal {
		return nil, ErrPoolExhausted
	}

	for i, bktSize := range p.sizes {
		if sz > bktSize {
			continue // <== this makes sure the selected bucket is at least as large as the requested size
		}
		b, ok := p.buckets[i].Get().(*[]byte)
		if !ok {
			b = p.new(bktSize)
		}

		p.usedTotal += uint64(cap(*b))
		return b, nil
	}

	// The requested size exceeds that of our highest bucket, allocate it directly.
	p.usedTotal += uint64(sz)
	return p.new(sz), nil
}

There must be another goroutine returning a slice with a smaller capacity to the pool. I feel p.buckets[i].Put(b) should be under the lock.

func (p *BucketedBytes) Put(b *[]byte) {
	if b == nil {
		return
	}

	sz := cap(*b)
	for i, bktSize := range p.sizes {
		if sz > bktSize {
			continue
		}
		*b = (*b)[:0]
		p.buckets[i].Put(b) // <== the return to the pool is not under the lock
		break
	}

	p.mtx.Lock()
	defer p.mtx.Unlock()
	// We could assume here that our users will not make the slices larger
	// but lets be on the safe side to avoid an underflow of p.usedTotal.
	if uint64(sz) >= p.usedTotal {
		p.usedTotal = 0
	} else {
		p.usedTotal -= uint64(sz)
	}
}
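
For what it's worth, a minimal sketch of that suggestion (not a committed fix): do the bucket selection and the Put while holding the pool mutex. Note that sync.Pool.Put is itself documented as safe for concurrent use, so this mainly serializes it with the usedTotal bookkeeping:

func (p *BucketedBytes) Put(b *[]byte) {
	if b == nil {
		return
	}

	p.mtx.Lock()
	defer p.mtx.Unlock()

	sz := cap(*b)
	for i, bktSize := range p.sizes {
		if sz > bktSize {
			continue
		}
		*b = (*b)[:0]
		p.buckets[i].Put(b) // now under the same lock as the accounting below
		break
	}

	// We could assume here that our users will not make the slices larger
	// but lets be on the safe side to avoid an underflow of p.usedTotal.
	if uint64(sz) >= p.usedTotal {
		p.usedTotal = 0
	} else {
		p.usedTotal -= uint64(sz)
	}
}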
