
[receiver/hostmetrics] Deprecate processesscraper #30894

Closed
27 changes: 27 additions & 0 deletions .chloggen/system_processes_metrics_to_process_scraper.yaml
@@ -0,0 +1,27 @@
# Use this changelog template to create an entry for release notes.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: enhancement

# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: hostmetricsreceiver

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: Adds the system.processes.* metrics to the process scraper. Also adds a deprecation warning to the processesscraper.

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
issues: [30895]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:

# If your change doesn't affect end users or the exported elements of any package,
# you should instead start your pull request title with [chore] or use the "Skip Changelog" label.
# Optional: The change log or logs in which this entry should be included.
# e.g. '[user]' or '[user, api]'
# Include 'user' if the change is relevant to end users.
# Include 'api' if there is a change to a library API.
# Default: '[user]'
change_logs: []
2 changes: 2 additions & 0 deletions cmd/mdatagen/validate_test.go
@@ -111,6 +111,8 @@ func TestValidateMetricDuplicates(t *testing.T) {
		"container.cpu.utilization": {"docker_stats", "kubeletstats"},
		"container.memory.rss": {"docker_stats", "kubeletstats"},
		"container.uptime": {"docker_stats", "kubeletstats"},
		"system.processes.created": {"hostmetricsreceiver/process", "hostmetricsreceiver/processes"},
		"system.processes.count": {"hostmetricsreceiver/process", "hostmetricsreceiver/processes"},
	}
	allMetrics := map[string][]string{}
	err := filepath.Walk("../../receiver", func(path string, info fs.FileInfo, err error) error {
25 changes: 20 additions & 5 deletions receiver/hostmetricsreceiver/README.md
@@ -47,8 +47,8 @@ The available scrapers are:
| [memory] | All | Memory utilization metrics |
| [network] | All | Network interface I/O metrics & TCP connection metrics |
| [paging] | All | Paging/Swap space utilization and I/O metrics |
| [processes] | Linux, Mac | Process count metrics |
| [process] | Linux, Windows, Mac | Per process CPU, Memory, and Disk I/O metrics |
| [processes] | Linux, Mac | Deprecated: use the `process` scraper instead |
| [process] | Linux, Windows, Mac | Per process CPU, memory, and disk I/O metrics; system-wide process count metrics |

[cpu]: ./internal/scraper/cpuscraper/documentation.md
[disk]: ./internal/scraper/diskscraper/documentation.md
@@ -160,14 +160,14 @@ service:

Host metrics are collected from the Linux system directories on the filesystem.
You likely want to collect metrics about the host system and not the container.
This is achievable by following these steps:

#### 1. Bind mount the host filesystem

The simplest configuration is to mount the entire host filesystem when running
the container. e.g. `docker run -v /:/hostfs ...`.

You can also choose which parts of the host filesystem to mount, if you know
exactly what you'll need. e.g. `docker run -v /proc:/hostfs/proc`.
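
Putting the bind mount together with the `root_path` setting described in the next step, a minimal configuration sketch (assuming the host filesystem is mounted at `/hostfs`, and using the `cpu` and `memory` scrapers purely as examples):

```yaml
receivers:
  hostmetrics:
    # Point the scrapers at the bind-mounted host filesystem
    # rather than the container's own /proc and /sys.
    root_path: /hostfs
    scrapers:
      cpu:
      memory:
```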

#### 2. Configure `root_path`
@@ -191,3 +191,18 @@ Currently, the hostmetrics receiver does not set any Resource attributes on the
export OTEL_RESOURCE_ATTRIBUTES="service.name=<the name of your service>,service.namespace=<the namespace of your service>,service.instance.id=<uuid of the instance>"
```

## Processes scraper deprecation

The `processes` scraper has been deprecated in favor of the `process` scraper. The `processes` scraper will be removed in a future release. To enable the same functionality, remove the `processes` scraper and enable the `system.processes.*` metrics in the `process` scraper:

```yaml
receivers:
  hostmetrics:
    scrapers:
      process:
        metrics:
          system.processes.created:
            enabled: true
          system.processes.count:
            enabled: true
```
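
For reference, the configuration being migrated away from is just the deprecated scraper entry; once the metrics above are enabled on the `process` scraper, this entry can simply be deleted (a sketch of the old form):

```yaml
receivers:
  hostmetrics:
    scrapers:
      # Deprecated: replaced by the system.processes.* metrics
      # on the process scraper.
      processes:
```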
119 changes: 118 additions & 1 deletion receiver/hostmetricsreceiver/hostmetrics_receiver_test.go
@@ -210,6 +210,92 @@ func appendMapInto(m1 map[string]struct{}, m2 map[string]struct{}) {
	}
}

func Test_ProcessesScraperDeprecationCompatibility(t *testing.T) {
	if runtime.GOOS != "linux" {
		t.Skip("Skipping test on non-Linux platform")
	}
	processesConfig := &Config{
		ScraperControllerSettings: scraperhelper.ScraperControllerSettings{
			CollectionInterval: 100 * time.Millisecond,
		},
		Scrapers: map[string]internal.Config{
			processscraper.TypeStr: scraperFactories[processscraper.TypeStr].CreateDefaultConfig(),
			processesscraper.TypeStr: scraperFactories[processesscraper.TypeStr].CreateDefaultConfig(),
		},
	}
	processConfig := &Config{
		ScraperControllerSettings: scraperhelper.ScraperControllerSettings{
			CollectionInterval: 100 * time.Millisecond,
		},
		Scrapers: map[string]internal.Config{
			processscraper.TypeStr: (&processscraper.Factory{}).CreateDefaultConfigWithSystemProcessesEnabled(),
		},
	}

	processesSink := runScrapeForConfig(t, processesConfig)
	processSink := runScrapeForConfig(t, processConfig)
	assertProcessMetricShape(t, processesSink)
	assertProcessMetricShape(t, processSink)
}

func runScrapeForConfig(t *testing.T, cfg *Config) *consumertest.MetricsSink {
	t.Helper()

	scraperFactories = factories
	sink := new(consumertest.MetricsSink)

	receiver, err := NewFactory().CreateMetricsReceiver(context.Background(), creationSet, cfg, sink)
	require.NoError(t, err, "Failed to create metrics receiver: %v", err)

	ctx, cancelFn := context.WithCancel(context.Background())
	err = receiver.Start(ctx, componenttest.NewNopHost())
	require.NoError(t, err, "Failed to start metrics receiver: %v", err)
	defer func() { assert.NoError(t, receiver.Shutdown(context.Background())) }()

	// canceling the context provided to Start should not cancel any async processes initiated by the receiver
	cancelFn()

	const tick = 50 * time.Millisecond
	const waitFor = 10 * time.Second
	require.Eventuallyf(t, func() bool {
		return len(sink.AllMetrics()) > 0
	}, waitFor, tick, "No metrics were collected after %v", waitFor)

	return sink
}

func assertProcessMetricShape(t *testing.T, sink *consumertest.MetricsSink) {
	// Whether the metrics came from using the deprecated scraper or just the process scraper
	// with the system.processes.* metrics enabled, the result should be the same.
	// There will be process resources at the beginning, and at the end should be an empty
	// resource with the `system.processes.*` metrics present.

	metrics := sink.AllMetrics()[0]

	// Check that all resources up until the final one are process resources.
	for i := 0; i < metrics.ResourceMetrics().Len()-1; i++ {
		_, ok := metrics.ResourceMetrics().At(i).Resource().Attributes().Get("process.pid")
		assert.True(t, ok)
	}

	// Check that the final resource has the system.processes.* metrics.
	finalResourceMetrics := metrics.ResourceMetrics().At(metrics.ResourceMetrics().Len() - 1).ScopeMetrics().At(0).Metrics()
	found := map[string]bool{
		"system.processes.count": false,
		"system.processes.created": false,
	}
	for i := 0; i < finalResourceMetrics.Len(); i++ {
		metric := finalResourceMetrics.At(i)
		if metric.Name() == "system.processes.count" {
			found["system.processes.count"] = true
		} else if metric.Name() == "system.processes.created" {
			found["system.processes.created"] = true
		}
	}
	assert.True(t, found["system.processes.count"])
	assert.True(t, found["system.processes.created"])
}

const mockTypeStr = "mock"

type mockConfig struct{}
@@ -404,7 +490,7 @@ func Benchmark_ScrapeSystemMetrics(b *testing.B) {
}

func Benchmark_ScrapeSystemAndProcessMetrics(b *testing.B) {
	if runtime.GOOS != "linux" && runtime.GOOS != "windows" {
	if runtime.GOOS != "linux" {
		b.Skip("skipping test on non-Linux")
	}

@@ -424,3 +510,34 @@

	benchmarkScrapeMetrics(b, cfg)
}

func Benchmark_ScrapeProcessMetricsWithSystemProcessMetrics(b *testing.B) {
	if runtime.GOOS != "linux" {
		b.Skip("skipping test on non linux")
	}

	cfg := &Config{
		ScraperControllerSettings: scraperhelper.NewDefaultScraperControllerSettings(""),
		Scrapers: map[string]internal.Config{
			processscraper.TypeStr: (&processscraper.Factory{}).CreateDefaultConfigWithSystemProcessesEnabled(),
		},
	}

	benchmarkScrapeMetrics(b, cfg)
}

func Benchmark_ScrapeProcessMetricsWithDeprecatedProcessesScraper(b *testing.B) {
	if runtime.GOOS != "linux" {
		b.Skip("skipping test on non linux")
	}

	cfg := &Config{
		ScraperControllerSettings: scraperhelper.NewDefaultScraperControllerSettings(""),
		Scrapers: map[string]internal.Config{
			processscraper.TypeStr: (&processscraper.Factory{}).CreateDefaultConfig(),
			processesscraper.TypeStr: (&processesscraper.Factory{}).CreateDefaultConfig(),
		},
	}

	benchmarkScrapeMetrics(b, cfg)
}
@@ -37,12 +37,23 @@ func (f *Factory) CreateMetricsScraper(
	settings receiver.CreateSettings,
	config internal.Config,
) (scraperhelper.Scraper, error) {
	settings.Logger.Warn(`The processes scraper is deprecated and will be removed in a future release; the system.processes.created and system.processes.count metrics have been moved to the process scraper.
To enable them, apply the following config:

  scrapers:
    process:
      metrics:
        system.processes.created:
          enabled: true
        system.processes.count:
          enabled: true
`)
	cfg := config.(*Config)
	s := newProcessesScraper(ctx, settings, cfg)
	s := NewProcessesScraper(ctx, settings, cfg)

	return scraperhelper.NewScraper(
		TypeStr,
		s.scrape,
		scraperhelper.WithStart(s.start),
		s.Scrape,
		scraperhelper.WithStart(s.Start),
	)
}
@@ -31,20 +31,20 @@ var metricsLength = func() int {
	return n
}()

// scraper for Processes Metrics
type scraper struct {
// Scraper for Processes Metrics
type Scraper struct {
	settings receiver.CreateSettings
	config *Config
	mb *metadata.MetricsBuilder

	// for mocking gopsutil
	getMiscStats func(context.Context) (*load.MiscStat, error)
	getProcesses func() ([]proc, error)
	GetMiscStats func(context.Context) (*load.MiscStat, error)
	GetProcesses func() ([]Proc, error)
	bootTime func(context.Context) (uint64, error)
}

// for mocking out gopsutil process.Process
type proc interface {
type Proc interface {
	Status() ([]string, error)
}

@@ -53,16 +53,16 @@ type processesMetadata struct {
	processesCreated *int64 // ignored if enableProcessesCreated is false
}

// newProcessesScraper creates a set of Processes related metrics
func newProcessesScraper(_ context.Context, settings receiver.CreateSettings, cfg *Config) *scraper {
	return &scraper{
// NewProcessesScraper creates a set of Processes related metrics
func NewProcessesScraper(_ context.Context, settings receiver.CreateSettings, cfg *Config) *Scraper {
	return &Scraper{
		settings: settings,
		config: cfg,
		getMiscStats: load.MiscWithContext,
		getProcesses: func() ([]proc, error) {
		GetMiscStats: load.MiscWithContext,
		GetProcesses: func() ([]Proc, error) {
			ctx := context.WithValue(context.Background(), common.EnvKey, cfg.EnvMap)
			ps, err := process.ProcessesWithContext(ctx)
			ret := make([]proc, len(ps))
			ret := make([]Proc, len(ps))
			for i := range ps {
				ret[i] = ps[i]
			}
@@ -72,7 +72,7 @@ func newProcessesScraper(_ context.Context, settings receiver.CreateSettings, cf
	}
}

func (s *scraper) start(ctx context.Context, _ component.Host) error {
func (s *Scraper) Start(ctx context.Context, _ component.Host) error {
	ctx = context.WithValue(ctx, common.EnvKey, s.config.EnvMap)
	bootTime, err := s.bootTime(ctx)
	if err != nil {
@@ -83,7 +83,7 @@ func (s *scraper) start(ctx context.Context, _ component.Host) error {
	return nil
}

func (s *scraper) scrape(_ context.Context) (pmetric.Metrics, error) {
func (s *Scraper) Scrape(_ context.Context) (pmetric.Metrics, error) {
	now := pcommon.NewTimestampFromTime(time.Now())

	md := pmetric.NewMetrics()
@@ -8,6 +8,6 @@ package processesscraper // import "github.com/open-telemetry/opentelemetry-coll
const enableProcessesCount = false
const enableProcessesCreated = false

func (s *scraper) getProcessesMetadata() (processesMetadata, error) {
func (s *Scraper) getProcessesMetadata() (processesMetadata, error) {
	return processesMetadata{}, nil
}
@@ -33,7 +33,7 @@ func TestScrape(t *testing.T) {
	type testCase struct {
		name string
		getMiscStats func(context.Context) (*load.MiscStat, error)
		getProcesses func() ([]proc, error)
		getProcesses func() ([]Proc, error)
		expectedErr string
		validate func(*testing.T, pmetric.MetricSlice)
	}
@@ -44,41 +44,41 @@
	}, {
		name: "FakeData",
		getMiscStats: func(ctx context.Context) (*load.MiscStat, error) { return &fakeData, nil },
		getProcesses: func() ([]proc, error) { return fakeProcessesData, nil },
		getProcesses: func() ([]Proc, error) { return fakeProcessesData, nil },
		validate: validateFakeData,
	}, {
		name: "ErrorFromMiscStat",
		getMiscStats: func(context.Context) (*load.MiscStat, error) { return &load.MiscStat{}, errors.New("err1") },
		expectedErr: "err1",
	}, {
		name: "ErrorFromProcesses",
		getProcesses: func() ([]proc, error) { return nil, errors.New("err2") },
		getProcesses: func() ([]Proc, error) { return nil, errors.New("err2") },
		expectedErr: "err2",
	}, {
		name: "ErrorFromProcessShouldBeIgnored",
		getProcesses: func() ([]proc, error) { return []proc{errProcess{}}, nil },
		getProcesses: func() ([]Proc, error) { return []Proc{errProcess{}}, nil },
	}, {
		name: "Validate Start Time",
		validate: validateStartTime,
	}}

	for _, test := range testCases {
		t.Run(test.name, func(t *testing.T) {
			scraper := newProcessesScraper(context.Background(), receivertest.NewNopCreateSettings(), &Config{
			scraper := NewProcessesScraper(context.Background(), receivertest.NewNopCreateSettings(), &Config{
				MetricsBuilderConfig: metadata.DefaultMetricsBuilderConfig(),
			})
			err := scraper.start(context.Background(), componenttest.NewNopHost())
			err := scraper.Start(context.Background(), componenttest.NewNopHost())
			assert.NoError(t, err, "Failed to initialize processes scraper: %v", err)

			// Override scraper methods if we are mocking out for this test case
			if test.getMiscStats != nil {
				scraper.getMiscStats = test.getMiscStats
				scraper.GetMiscStats = test.getMiscStats
			}
			if test.getProcesses != nil {
				scraper.getProcesses = test.getProcesses
				scraper.GetProcesses = test.getProcesses
			}

			md, err := scraper.scrape(context.Background())
			md, err := scraper.Scrape(context.Background())

			expectedMetricCount := 0
			if expectProcessesCountMetric {
@@ -166,7 +166,7 @@ var fakeData = load.MiscStat{
	ProcsTotal: 30,
}

var fakeProcessesData = []proc{
var fakeProcessesData = []Proc{
	fakeProcess(process.Wait),
	fakeProcess(process.Blocked), fakeProcess(process.Blocked),
	fakeProcess(process.Running), fakeProcess(process.Running), fakeProcess(process.Running),