
Add automaxprocs #1430

Merged: 1 commit into thanos-io:master on Aug 27, 2019
Conversation

@2nick (Contributor) commented Aug 16, 2019

Changes

Add the uber-go/automaxprocs integration to automatically set the maxprocs value according to the cgroup limit or the $GOMAXPROCS env variable.

Verification

Run any Thanos component with --log.level=debug in a Docker container with a CPU limit, or with a non-empty GOMAXPROCS env variable, and observe the library's debug log message reporting which value was set for Go's maxprocs.
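As a quick standalone illustration of what that debug message reflects (a minimal sketch, not part of this PR's diff; it only assumes the public automaxprocs API):

package main

import (
	"fmt"
	"runtime"

	"go.uber.org/automaxprocs/maxprocs"
)

func main() {
	// Go defaults GOMAXPROCS to the number of CPUs visible to the process,
	// which inside a container is the host's CPU count.
	fmt.Println("before:", runtime.GOMAXPROCS(0))

	// maxprocs.Set adjusts GOMAXPROCS to the cgroup CPU quota (or honors
	// $GOMAXPROCS if set) and logs what it did via the provided printf func.
	undo, err := maxprocs.Set(maxprocs.Logger(func(format string, args ...interface{}) {
		fmt.Printf(format+"\n", args...)
	}))
	defer undo()
	if err != nil {
		fmt.Println("maxprocs.Set failed:", err)
	}

	fmt.Println("after:", runtime.GOMAXPROCS(0))
}

Run it in a container started with, say, a 2-CPU limit and "after" should read 2 rather than the host's core count.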

@bwplotka (Member) left a comment


Nice! I think it makes sense, but I had to research the WHY on my own, so a comment explaining the rationale would be useful.

Let me know if I am wrong here, but the reason is that:

GOMAXPROCS is set properly on a non-containerized host, thanks to:

As of Go 1.5, the default value of GOMAXPROCS is the number of CPUs (whatever your operating system considers to be a CPU) visible to the program at startup.

Note: the number of operating system threads in use by a Go program includes threads servicing cgo calls, threads blocked on operating system calls, and may be larger than the value of GOMAXPROCS.

However, in a cgroup-based container, despite the cgroup limits/quotas, the container still sees all the host CPUs, right?

loggerAdapter := func(template string, args ...interface{}) {
	level.Debug(logger).Log("msg", fmt.Sprintf(template, args...))
}

undo, err := maxprocs.Set(maxprocs.Logger(loggerAdapter))
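For context, a self-contained version of roughly what this hunk wires up (a sketch: the go-kit logger setup and error handling here are illustrative assumptions, not the PR's exact code):

package main

import (
	"fmt"
	"os"

	"github.com/go-kit/kit/log"
	"github.com/go-kit/kit/log/level"
	"go.uber.org/automaxprocs/maxprocs"
)

func main() {
	logger := level.NewFilter(log.NewLogfmtLogger(os.Stderr), level.AllowDebug())

	// Adapt automaxprocs' printf-style logger to go-kit's structured logger.
	loggerAdapter := func(template string, args ...interface{}) {
		level.Debug(logger).Log("msg", fmt.Sprintf(template, args...))
	}

	// Set GOMAXPROCS from the cgroup CPU quota (or $GOMAXPROCS); undo
	// restores the previous value, which matters mostly in tests.
	undo, err := maxprocs.Set(maxprocs.Logger(loggerAdapter))
	defer undo()
	if err != nil {
		level.Warn(logger).Log("msg", "failed to set GOMAXPROCS", "err", err)
	}
}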
Member:

Can we add a comment explaining the rationale? Why are we doing this?

There is almost zero documentation at https://github.com/uber-go/automaxprocs

Contributor:

@2nick let's add more comments here, pls

Contributor:

@bwplotka yep, you're right: an app running in a container will see all the CPUs on the host. maxprocs.Set() calculates GOMAXPROCS more accurately because it also takes the cgroup limits into account.
It's pretty convenient to configure CPU limits in one place when setting them for the container, and not have to think about GOMAXPROCS at all :)
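Concretely, under cgroup v1 the effective CPU count is cpu.cfs_quota_us / cpu.cfs_period_us; e.g. a quota of 200000µs per 100000µs period means 2 CPUs. A simplified sketch of that calculation (the real library also discovers the cgroup mount point, supports custom rounding, and enforces a minimum of 1):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readInt reads a single integer from a cgroup file (helper for this sketch).
func readInt(path string) (int64, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.ParseInt(strings.TrimSpace(string(b)), 10, 64)
}

func main() {
	// A quota of -1 (or a read error) means no CPU limit is set.
	quota, err1 := readInt("/sys/fs/cgroup/cpu/cpu.cfs_quota_us")
	period, err2 := readInt("/sys/fs/cgroup/cpu/cpu.cfs_period_us")
	if err1 != nil || err2 != nil || quota <= 0 || period <= 0 {
		fmt.Println("no CPU quota; GOMAXPROCS stays at the host CPU count")
		return
	}
	fmt.Println("effective CPUs:", quota/period) // floor(quota/period), min 1 in the library
}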

Contributor (Author):

Added the comment and force-pushed the amended commit. Will GitHub switch to the new commit automatically, or do I need to take some additional action? First time making a merge request, so sorry for the newbie questions. :)

Member:

It's ok, GitHub will take care of this automatically 👍

Signed-off-by: Alexander Tunik <2braven@gmail.com>
@GiedriusS (Member) left a comment

LGTM, and the comments explain everything well now 👍 I will let @bwplotka double-check this before merging, though, as he had comments.

@bwplotka (Member):

Changelog item is missing, but we can add it later (before tomorrow's release)

@bwplotka merged commit 5e66d9d into thanos-io:master on Aug 27, 2019.