Multi-database support #238

Merged 2 commits on May 21, 2025
138 changes: 134 additions & 4 deletions README.md
@@ -601,12 +601,15 @@ You may provide the connection details using these variables:

The following example puts the logfile in the current location with the filename `alert.log` and loads the default metrics file (`default-metrics.toml`) from the current location.

If you prefer to provide configuration via a [config file](./example-config.yaml), you may do so with the `--config.file` argument. Using a config file is preferred over command-line arguments. If a config file is not provided, the default database connection is managed by command-line arguments.
If you prefer to provide configuration via a [config file](./example-config.yaml), you may do so with the `--config.file` argument. Using a config file is preferred over command-line arguments. If a config file is not provided, the "default" database connection is managed by command-line arguments.

```yaml
# Example Oracle Database Metrics Exporter Configuration file.
# Environment variables of the form ${VAR_NAME} will be expanded.

databases:
  ## Path on which metrics will be served
  # metricsPath: /metrics
@@ -618,8 +621,7 @@ databases:
    password: ${DB_PASSWORD}
    ## Database connection url
    url: localhost:1521/freepdb1
    ## Metrics scrape interval for this database
    scrapeInterval: 15s

    ## Metrics query timeout for this database, in seconds
    queryTimeout: 5

@@ -650,8 +652,11 @@ databases:
    # poolMinConnections: 15

metrics:
  ## How often to scrape metrics. If not provided, metrics will be scraped on request.
  # scrapeInterval: 15s
  ## Path to default metrics file.
  default: default-metrics.toml
  #
  ## Paths to any custom metrics files
  custom:
    - custom-metrics-example/custom-metrics.toml

@@ -664,10 +669,135 @@ log:
  # disable: 0
```
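
For reference, a minimal invocation using a config file might look like the following. This assumes the configuration above is saved as `example-config.yaml` in the working directory; the file name and path are illustrative.

```shell
# Start the exporter, reading database, metrics, and log settings
# from the config file rather than from individual CLI arguments.
./oracledb_exporter --config.file=./example-config.yaml
```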

### Scraping multiple databases

You may scrape as many databases as needed by defining named database configurations in the config file. The following configuration defines two databases, "db1" and "db2", for the metrics exporter.

```yaml
# Example Oracle Database Metrics Exporter Configuration file.
# Environment variables of the form ${VAR_NAME} will be expanded.

databases:
  ## Path on which metrics will be served
  # metricsPath: /metrics

  ## As many named database configurations may be defined as needed.
  ## It is recommended to define your database config in the config file, rather than using CLI arguments.

  ## Database connection information for the "db1" database.
  db1:
    ## Database username
    username: ${DB1_USERNAME}
    ## Database password
    password: ${DB1_PASSWORD}
    ## Database connection url
    url: localhost:1521/freepdb1

    ## Metrics query timeout for this database, in seconds
    queryTimeout: 5

    ## Rely on Oracle Database External Authentication by network or OS
    # externalAuth: false
    ## Database role
    # role: SYSDBA
    ## Path to Oracle Database wallet, if using wallet
    # tnsAdmin: /path/to/database/wallet

    ### Connection settings:
    ### Either the go-sql or Oracle Database connection pool may be used.
    ### To use the Oracle Database connection pool over the go-sql connection pool,
    ### set maxIdleConns to zero and configure the pool* settings.

    ### Connection pooling settings for the go-sql connection pool
    ## Max open connections for this database using go-sql connection pool
    maxOpenConns: 10
    ## Max idle connections for this database using go-sql connection pool
    maxIdleConns: 10

    ### Connection pooling settings for the Oracle Database connection pool
    ## Oracle Database connection pool increment.
    # poolIncrement: 1
    ## Oracle Database Connection pool maximum size
    # poolMaxConnections: 15
    ## Oracle Database Connection pool minimum size
    # poolMinConnections: 15

  db2:
    ## Database username
    username: ${DB2_USERNAME}
    ## Database password
    password: ${DB2_PASSWORD}
    ## Database connection url
    url: localhost:1522/freepdb1

    ## Metrics query timeout for this database, in seconds
    queryTimeout: 5

    ## Rely on Oracle Database External Authentication by network or OS
    # externalAuth: false
    ## Database role
    # role: SYSDBA
    ## Path to Oracle Database wallet, if using wallet
    # tnsAdmin: /path/to/database/wallet

    ### Connection settings:
    ### Either the go-sql or Oracle Database connection pool may be used.
    ### To use the Oracle Database connection pool over the go-sql connection pool,
    ### set maxIdleConns to zero and configure the pool* settings.

    ### Connection pooling settings for the go-sql connection pool
    ## Max open connections for this database using go-sql connection pool
    maxOpenConns: 10
    ## Max idle connections for this database using go-sql connection pool
    maxIdleConns: 10

    ### Connection pooling settings for the Oracle Database connection pool
    ## Oracle Database connection pool increment.
    # poolIncrement: 1
    ## Oracle Database Connection pool maximum size
    # poolMaxConnections: 15
    ## Oracle Database Connection pool minimum size
    # poolMinConnections: 15

metrics:
  ## How often to scrape metrics. If not provided, metrics will be scraped on request.
  # scrapeInterval: 15s
  ## Path to default metrics file.
  default: default-metrics.toml
  ## Paths to any custom metrics files
  custom:
    - custom-metrics-example/custom-metrics.toml

log:
  # Path of log file
  destination: /opt/alert.log
  # Interval of log updates
  interval: 15s
  ## Set disable to 1 to disable logging
  # disable: 0
```
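
The log file destination and the default metrics file can also be set with command line arguments: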


```shell
./oracledb_exporter --log.destination="./alert.log" --default.metrics="./default-metrics.toml"
```
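
Whichever way the exporter is configured, you can sanity-check the scrape output by hand. The snippet below is only a sketch: it assumes the exporter is listening on its usual default of `localhost:9161`, that metrics are served on the default `/metrics` path, and that metric names carry the exporter's `oracledb_` prefix; adjust these if you have changed `metricsPath` or the listen address.

```shell
# Fetch the metrics endpoint and show only the exporter's own metrics.
curl -s http://localhost:9161/metrics | grep '^oracledb_'
```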

#### Scraping metrics from specific databases

By default, metrics are scraped from every connected database. To expose only certain metrics on specific databases, configure the `databases` property of a metric. The following metric definition will only be scraped from databases "db2" and "db3":

```toml
[[metric]]
context = "db_platform"
labels = [ "platform_name" ]
metricsdesc = { value = "Database platform" }
request = '''
SELECT platform_name, 1 as value FROM v$database
'''
databases = [ "db2", "db3" ]
```

If the `databases` array is empty or not provided for a metric, that metric will be scraped from all connected databases.
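
For contrast, here is the same metric with the `databases` key left out entirely. Per the behavior described above, it would be scraped from every connected database; this is simply the previous example minus one line, not a new metric definition.

```toml
[[metric]]
context = "db_platform"
labels = [ "platform_name" ]
metricsdesc = { value = "Database platform" }
request = '''
SELECT platform_name, 1 as value FROM v$database
'''
# No "databases" key, so this metric is collected from all connected databases.
```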

### Using OCI Vault

The exporter will read the password from a secret stored in OCI Vault if you set these two environment variables:
31 changes: 16 additions & 15 deletions alertlog/alertlog.go
@@ -4,10 +4,10 @@
package alertlog

import (
"database/sql"
"encoding/json"
"errors"
"fmt"
"github.com/oracle/oracle-db-appdev-monitoring/collector"
"io"
"log/slog"
"os"
@@ -21,13 +21,14 @@ type LogRecord struct {
Message string `json:"message"`
}

var queryFailures int = 0
var databaseFailures map[string]int = map[string]int{}

func UpdateLog(logDestination string, logger *slog.Logger, db *sql.DB) {
func UpdateLog(logDestination string, logger *slog.Logger, d *collector.Database) {

queryFailures := databaseFailures[d.Name]
if queryFailures == 3 {
logger.Info("Failed to query the alert log three consecutive times, so will not try any more")
queryFailures++
logger.Info("Failed to query the alert log three consecutive times, so will not try any more", "database", d.Name)
databaseFailures[d.Name]++
return
}

@@ -37,10 +38,10 @@ func UpdateLog(logDestination string, logger *slog.Logger, db *sql.DB) {

// check if the log file exists, and if not, create it
if _, err := os.Stat(logDestination); errors.Is(err, os.ErrNotExist) {
logger.Info("Log destination file does not exist, will try to create it: " + logDestination)
logger.Info("Log destination file does not exist, will try to create it: "+logDestination, "database", d.Name)
f, e := os.Create(logDestination)
if e != nil {
logger.Error("Failed to create the log file: " + logDestination)
logger.Error("Failed to create the log file: "+logDestination, "database", d.Name)
return
}
f.Close()
@@ -50,7 +51,7 @@ func UpdateLog(logDestination string, logger *slog.Logger, db *sql.DB) {
file, err := os.Open(logDestination)

if err != nil {
logger.Error("Could not open the alert log destination file: " + logDestination)
logger.Error("Could not open the alert log destination file: "+logDestination, "database", d.Name)
return
}

@@ -108,9 +109,9 @@ func UpdateLog(logDestination string, logger *slog.Logger, db *sql.DB) {
from v$diag_alert_ext
where originating_timestamp > to_utc_timestamp_tz('%s')`, lastLogRecord.Timestamp)

rows, err := db.Query(stmt)
rows, err := d.Session.Query(stmt)
if err != nil {
logger.Error("Error querying the alert logs")
logger.Error("Error querying the alert logs", "database", d.Name)
queryFailures++
return
}
@@ -119,7 +120,7 @@ func UpdateLog(logDestination string, logger *slog.Logger, db *sql.DB) {
// write them to the file
outfile, err := os.OpenFile(logDestination, os.O_APPEND|os.O_WRONLY, 0600)
if err != nil {
logger.Error("Could not open log file for writing: " + logDestination)
logger.Error("Could not open log file for writing: "+logDestination, "database", d.Name)
return
}
defer outfile.Close()
@@ -128,7 +129,7 @@ func UpdateLog(logDestination string, logger *slog.Logger, db *sql.DB) {
for rows.Next() {
var newRecord LogRecord
if err := rows.Scan(&newRecord.Timestamp, &newRecord.ModuleId, &newRecord.ECID, &newRecord.Message); err != nil {
logger.Error("Error reading a row from the alert logs")
logger.Error("Error reading a row from the alert logs", "database", d.Name)
return
}

@@ -137,18 +138,18 @@ func UpdateLog(logDestination string, logger *slog.Logger, db *sql.DB) {

jsonLogRecord, err := json.Marshal(newRecord)
if err != nil {
logger.Error("Error marshalling alert log record")
logger.Error("Error marshalling alert log record", "database", d.Name)
return
}

if _, err = outfile.WriteString(string(jsonLogRecord) + "\n"); err != nil {
logger.Error("Could not write to log file: " + logDestination)
logger.Error("Could not write to log file: "+logDestination, "database", d.Name)
return
}
}

if err = rows.Err(); err != nil {
logger.Error("Error querying the alert logs")
logger.Error("Error querying the alert logs", "database", d.Name)
queryFailures++
}
}