
feat: get official field doc #457

Merged

merged 20 commits on May 31, 2023

Commits (20)
ff191cd
fix(deps): update module github.com/aws/aws-sdk-go to v1.44.267 (#451)
renovate[bot] May 23, 2023
a4f2a20
feat: get official field doc
golgoth31 May 23, 2023
e61ddcf
feat: use schema from server
golgoth31 May 26, 2023
5092067
feat: add configuration api route (#459)
matthisholleville May 25, 2023
a9befff
fix(deps): update module github.com/aws/aws-sdk-go to v1.44.269 (#458)
renovate[bot] May 25, 2023
4d99623
fix: updated list.go to handle k8sgpt cache list crashing issue (#455)
krishnaduttPanchagnula May 25, 2023
a8be564
chore(main): release 0.3.5 (#452)
github-actions[bot] May 25, 2023
79164de
chore(deps): update google-github-actions/release-please-action diges…
renovate[bot] May 26, 2023
940b6eb
fix: name of sa reference in deployment (#468)
jkleinlercher May 26, 2023
b291516
fix(deps): update module github.com/aws/aws-sdk-go to v1.44.270 (#465)
renovate[bot] May 26, 2023
1a6f2c9
fix: typo (#463)
rakshitgondwal May 26, 2023
ca21be6
fix(deps): update module github.com/aws/aws-sdk-go to v1.44.271 (#469)
renovate[bot] May 27, 2023
bc886ee
fix(deps): update module github.com/aws/aws-sdk-go to v1.44.269 (#458)
renovate[bot] May 25, 2023
0470137
fix(deps): update module github.com/aws/aws-sdk-go to v1.44.270 (#465)
renovate[bot] May 26, 2023
de6e55c
fix(deps): update module github.com/aws/aws-sdk-go to v1.44.271 (#469)
renovate[bot] May 27, 2023
5c2c0db
feat: Add with-doc flag to enable/disable kubernetes doc
golgoth31 May 28, 2023
d39770b
Merge branch 'main' into main
golgoth31 May 28, 2023
7813a4e
use fmt.Sprintf in apireference.go
golgoth31 May 28, 2023
9381b9e
add --with-doc to readme
golgoth31 May 28, 2023
7f42e9d
Merge branch 'main' into main
AlexsJones May 31, 2023
20 changes: 11 additions & 9 deletions README.md
@@ -128,6 +128,7 @@ _This mode of operation is ideal for continuous monitoring of your cluster and c
* Run `k8sgpt filters` to manage the active filters used by the analyzer. By default, all filters are executed during analysis.
* Run `k8sgpt analyze` to run a scan.
* And use `k8sgpt analyze --explain` to get a more detailed explanation of the issues.
* You can also run `k8sgpt analyze --with-doc` (with or without the explain flag) to get the official documentation from Kubernetes.

## Analyzers

@@ -163,6 +164,7 @@ _Run a scan with the default analyzers_
k8sgpt generate
k8sgpt auth add
k8sgpt analyze --explain
k8sgpt analyze --explain --with-doc
```

_Filter on resource_
@@ -279,7 +281,7 @@ curl -X GET "http://localhost:8080/analyze?namespace=k8sgpt&explain=false"
<details>
<summary> LocalAI provider </summary>

To run local models, it is possible to use OpenAI compatible APIs, for instance [LocalAI](https://github.com/go-skynet/LocalAI) which uses [llama.cpp](https://github.com/ggerganov/llama.cpp) and [ggml](https://github.com/ggerganov/ggml) to run inference on consumer-grade hardware. Models supported by LocalAI include Vicuna, Alpaca, LLaMA, Cerebras, GPT4ALL, GPT4ALL-J and koala.


To run local inference, you need to download the models first, for instance you can find `ggml` compatible models on [huggingface.co](https://huggingface.co/models?search=ggml) (for example vicuna, alpaca and koala).
@@ -309,16 +311,16 @@ k8sgpt analyze --explain --backend localai

<em>Prerequisites:</em> an Azure OpenAI deployment is needed, please visit MS official [documentation](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource) to create your own.

To authenticate with k8sgpt, you will need the Azure OpenAI endpoint of your tenant `"https://your Azure OpenAI Endpoint"`, the API key to access your deployment, the deployment name of your model and the model name itself.


To run k8sgpt, run `k8sgpt auth` with the `azureopenai` backend:
```
k8sgpt auth add --backend azureopenai --baseurl https://<your Azure OpenAI endpoint> --engine <deployment_name> --model <model_name>
```
Lastly, enter your Azure API key when prompted.

Now you are ready to analyze with the Azure OpenAI backend:
```
k8sgpt analyze --explain --backend azureopenai
```
@@ -395,31 +397,31 @@ The Kubernetes system is trying to scale a StatefulSet named fake-deployment usi

Config file locations:
| OS | Path |
| ------- | ------------------------------------------------ |
| MacOS | ~/Library/Application Support/k8sgpt/k8sgpt.yaml |
| Linux | ~/.config/k8sgpt/k8sgpt.yaml |
| Windows | %LOCALAPPDATA%/k8sgpt/k8sgpt.yaml |
</details>

<details>
There may be scenarios where caching remotely is preferred.
In these scenarios K8sGPT supports AWS S3 Integration.

<summary> Remote caching </summary>

_As a prerequisite `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are required as environment variables._

_Adding a remote cache_
Note: this will create the bucket if it does not exist
```
k8sgpt cache add --region <aws region> --bucket <name>
```

_Listing cache items_
```
k8sgpt cache list
```

_Removing the remote cache_
Note: this will not delete the bucket
```
5 changes: 4 additions & 1 deletion cmd/analyze/analyze.go
@@ -32,6 +32,7 @@ var (
namespace string
anonymize bool
maxConcurrency int
withDoc bool
)

// AnalyzeCmd represents the problems command
@@ -45,7 +46,7 @@ var AnalyzeCmd = &cobra.Command{

// AnalysisResult configuration
config, err := analysis.NewAnalysis(backend,
language, filters, namespace, nocache, explain, maxConcurrency, withDoc)
if err != nil {
color.Red("Error: %v", err)
os.Exit(1)
@@ -91,4 +92,6 @@ func init() {
AnalyzeCmd.Flags().StringVarP(&language, "language", "l", "english", "Languages to use for AI (e.g. 'English', 'Spanish', 'French', 'German', 'Italian', 'Portuguese', 'Dutch', 'Russian', 'Chinese', 'Japanese', 'Korean')")
// add max concurrency
AnalyzeCmd.Flags().IntVarP(&maxConcurrency, "max-concurrency", "m", 10, "Maximum number of concurrent requests to the Kubernetes API server")
// kubernetes doc flag
AnalyzeCmd.Flags().BoolVarP(&withDoc, "with-doc", "d", false, "Give me the official documentation of the involved field")
}
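As a minimal, self-contained sketch (a hypothetical standalone file, not part of this PR), this is how a boolean cobra flag like `--with-doc` gets registered and read; in k8sgpt the value is simply forwarded to `analysis.NewAnalysis`:

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

var withDoc bool

func main() {
	cmd := &cobra.Command{
		Use: "analyze",
		Run: func(cmd *cobra.Command, args []string) {
			// In k8sgpt this value is forwarded to analysis.NewAnalysis,
			// which decides whether to fetch the OpenAPI schema at all.
			fmt.Println("with-doc:", withDoc)
		},
	}
	// Same registration shape as the PR: long flag, short flag, default, help text.
	cmd.Flags().BoolVarP(&withDoc, "with-doc", "d", false, "Give me the official documentation of the involved field")
	if err := cmd.Execute(); err != nil {
		os.Exit(1)
	}
}
```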
2 changes: 1 addition & 1 deletion go.mod
@@ -75,7 +75,7 @@ require (
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/btree v1.1.2 // indirect
github.com/google/gnostic v0.6.9
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/go-containerregistry v0.14.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
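Dropping the `// indirect` marker matches the new direct import of `openapi_v2 "github.com/google/gnostic/openapiv2"` in `pkg/analysis/analysis.go` below.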
25 changes: 20 additions & 5 deletions pkg/analysis/analysis.go
@@ -23,6 +23,7 @@ import (
"sync"

"github.com/fatih/color"
openapi_v2 "github.com/google/gnostic/openapiv2"
"github.com/k8sgpt-ai/k8sgpt/pkg/ai"
"github.com/k8sgpt-ai/k8sgpt/pkg/analyzer"
"github.com/k8sgpt-ai/k8sgpt/pkg/cache"
@@ -45,6 +46,7 @@ type Analysis struct {
Explain bool
MaxConcurrency int
AnalysisAIProvider string // The name of the AI Provider used for this analysis
WithDoc bool
}

type AnalysisStatus string
@@ -63,7 +65,7 @@ type JsonOutput struct {
Results []common.Result `json:"results"`
}

func NewAnalysis(backend string, language string, filters []string, namespace string, noCache bool, explain bool, maxConcurrency int, withDoc bool) (*Analysis, error) {
var configAI ai.AIConfiguration
err := viper.UnmarshalKey("ai", &configAI)
if err != nil {
@@ -128,6 +130,7 @@ func NewAnalysis(backend string, language string, filters []string, namespace st
Explain: explain,
MaxConcurrency: maxConcurrency,
AnalysisAIProvider: backend,
WithDoc: withDoc,
}, nil
}

@@ -136,11 +139,23 @@ func (a *Analysis) RunAnalysis() {

coreAnalyzerMap, analyzerMap := analyzer.GetAnalyzerMap()

// we get the openapi schema from the server only if required by the flag "with-doc"
openapiSchema := &openapi_v2.Document{}
if a.WithDoc {
var openApiErr error

openapiSchema, openApiErr = a.Client.Client.Discovery().OpenAPISchema()
if openApiErr != nil {
a.Errors = append(a.Errors, fmt.Sprintf("[KubernetesDoc] %s", openApiErr))
}
}

analyzerConfig := common.Analyzer{
Client: a.Client,
Context: a.Context,
Namespace: a.Namespace,
AIClient: a.AIClient,
OpenapiSchema: openapiSchema,
}

semaphore := make(chan struct{}, a.MaxConcurrency)
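A standalone sketch of the same guard, assuming a client-go clientset built from the local kubeconfig (names here are illustrative, not k8sgpt's):

```go
package main

import (
	"fmt"
	"log"

	openapi_v2 "github.com/google/gnostic/openapiv2"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// fetchSchema mirrors RunAnalysis: only ask the API server for the
// (potentially large) OpenAPI v2 document when --with-doc was set.
func fetchSchema(withDoc bool, client kubernetes.Interface) (*openapi_v2.Document, error) {
	if !withDoc {
		// Empty document: analyzers still receive a non-nil schema,
		// their lookups just find no field descriptions.
		return &openapi_v2.Document{}, nil
	}
	return client.Discovery().OpenAPISchema()
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	doc, err := fetchSchema(true, clientset)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("fetched OpenAPI document, swagger version:", doc.GetSwagger())
}
```

Fetching lazily keeps `k8sgpt analyze` fast in the default path, since a cluster's OpenAPI document can run to several megabytes.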
3 changes: 3 additions & 0 deletions pkg/analysis/output.go
@@ -78,6 +78,9 @@ func (a *Analysis) textOutput() ([]byte, error) {
color.YellowString(result.Name), color.CyanString(result.ParentObject)))
for _, err := range result.Error {
output.WriteString(fmt.Sprintf("- %s %s\n", color.RedString("Error:"), color.RedString(err.Text)))
if err.KubernetesDoc != "" {
output.WriteString(fmt.Sprintf(" %s %s\n", color.RedString("Kubernetes Doc:"), color.RedString(err.KubernetesDoc)))
}
}
output.WriteString(color.GreenString(result.Details + "\n"))
}
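With the flag enabled, a rendered failure gains a second line; a hypothetical example (object name and doc text are illustrative only):

```
- Error: CronJob my-cronjob is suspended
  Kubernetes Doc: suspend: This flag tells the controller to suspend subsequent executions.
```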
24 changes: 21 additions & 3 deletions pkg/analyzer/cronjob.go
@@ -18,16 +18,26 @@ import (
"time"

"github.com/k8sgpt-ai/k8sgpt/pkg/common"
"github.com/k8sgpt-ai/k8sgpt/pkg/kubernetes"
"github.com/k8sgpt-ai/k8sgpt/pkg/util"
cron "github.com/robfig/cron/v3"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
)

type CronJobAnalyzer struct{}

func (analyzer CronJobAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error) {

kind := "CronJob"
apiDoc := kubernetes.K8sApiReference{
Kind: kind,
ApiVersion: schema.GroupVersion{
Group: "batch",
Version: "v1",
},
OpenapiSchema: a.OpenapiSchema,
}

AnalyzerErrorsMetric.DeletePartialMatch(map[string]string{
"analyzer_name": kind,
@@ -43,8 +53,11 @@ func (analyzer CronJobAnalyzer) Analyze(a common.Analyzer) ([]common.Result, err
for _, cronJob := range cronJobList.Items {
var failures []common.Failure
if cronJob.Spec.Suspend != nil && *cronJob.Spec.Suspend {
doc := apiDoc.GetApiDocV2("spec.suspend")

failures = append(failures, common.Failure{
Text: fmt.Sprintf("CronJob %s is suspended", cronJob.Name),
Text: fmt.Sprintf("CronJob %s is suspended", cronJob.Name),
KubernetesDoc: doc,
Sensitive: []common.Sensitive{
{
Unmasked: cronJob.Namespace,
@@ -59,8 +72,11 @@ func (analyzer CronJobAnalyzer) Analyze(a common.Analyzer) ([]common.Result, err
} else {
// check the schedule format
if _, err := CheckCronScheduleIsValid(cronJob.Spec.Schedule); err != nil {
doc := apiDoc.GetApiDocV2("spec.schedule")

failures = append(failures, common.Failure{
Text: fmt.Sprintf("CronJob %s has an invalid schedule: %s", cronJob.Name, err.Error()),
Text: fmt.Sprintf("CronJob %s has an invalid schedule: %s", cronJob.Name, err.Error()),
KubernetesDoc: doc,
Sensitive: []common.Sensitive{
{
Unmasked: cronJob.Namespace,
@@ -78,9 +94,11 @@ func (analyzer CronJobAnalyzer) Analyze(a common.Analyzer) ([]common.Result, err
if cronJob.Spec.StartingDeadlineSeconds != nil {
deadline := time.Duration(*cronJob.Spec.StartingDeadlineSeconds) * time.Second
if deadline < 0 {
doc := apiDoc.GetApiDocV2("spec.startingDeadlineSeconds")

failures = append(failures, common.Failure{
Text: fmt.Sprintf("CronJob %s has a negative starting deadline", cronJob.Name),
Text: fmt.Sprintf("CronJob %s has a negative starting deadline", cronJob.Name),
KubernetesDoc: doc,
Sensitive: []common.Sensitive{
{
Unmasked: cronJob.Namespace,
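`GetApiDocV2` itself lives in `pkg/kubernetes/apireference.go` and is not part of this diff. As a hedged sketch of what such a lookup might do, here is one way to walk a dotted field path (e.g. `spec.suspend`) through a gnostic OpenAPI v2 document; every name below is an illustrative assumption, not the real implementation:

```go
package main

import (
	"fmt"
	"strings"

	openapi_v2 "github.com/google/gnostic/openapiv2"
)

// findDefinition returns the schema registered under a definition name
// such as "io.k8s.api.batch.v1.CronJob".
func findDefinition(doc *openapi_v2.Document, name string) *openapi_v2.Schema {
	for _, named := range doc.GetDefinitions().GetAdditionalProperties() {
		if named.GetName() == name {
			return named.GetValue()
		}
	}
	return nil
}

// fieldDoc walks a dotted path like "spec.suspend" from a root definition,
// following $ref links between definitions, and returns the description of
// the final field ("" if any step is missing).
func fieldDoc(doc *openapi_v2.Document, root, path string) string {
	schema := findDefinition(doc, root)
	for _, field := range strings.Split(path, ".") {
		if schema == nil {
			return ""
		}
		var next *openapi_v2.Schema
		for _, prop := range schema.GetProperties().GetAdditionalProperties() {
			if prop.GetName() == field {
				next = prop.GetValue()
				break
			}
		}
		// Intermediate fields are usually "$ref"s to other definitions.
		if next != nil && next.GetXRef() != "" {
			next = findDefinition(doc, strings.TrimPrefix(next.GetXRef(), "#/definitions/"))
		}
		schema = next
	}
	if schema == nil {
		return ""
	}
	return schema.GetDescription()
}

func main() {
	var doc *openapi_v2.Document // assume fetched via Discovery().OpenAPISchema()
	fmt.Println(fieldDoc(doc, "io.k8s.api.batch.v1.CronJob", "spec.suspend"))
}
```

Note how each analyzer in this PR pins its own `GroupVersion` (batch/v1 for CronJob, apps/v1 for Deployment, autoscaling/v1 for HorizontalPodAutoscaler), so a lookup like this can resolve against the correct definition for the kind being analyzed.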
15 changes: 14 additions & 1 deletion pkg/analyzer/deployment.go
@@ -18,8 +18,10 @@ import (
"fmt"

v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"

"github.com/k8sgpt-ai/k8sgpt/pkg/common"
"github.com/k8sgpt-ai/k8sgpt/pkg/kubernetes"
"github.com/k8sgpt-ai/k8sgpt/pkg/util"
)

@@ -31,6 +33,14 @@ type DeploymentAnalyzer struct {
func (d DeploymentAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error) {

kind := "Deployment"
apiDoc := kubernetes.K8sApiReference{
Kind: kind,
ApiVersion: schema.GroupVersion{
Group: "apps",
Version: "v1",
},
OpenapiSchema: a.OpenapiSchema,
}

AnalyzerErrorsMetric.DeletePartialMatch(map[string]string{
"analyzer_name": kind,
@@ -45,8 +55,11 @@ func (d DeploymentAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error)
for _, deployment := range deployments.Items {
var failures []common.Failure
if *deployment.Spec.Replicas != deployment.Status.Replicas {
doc := apiDoc.GetApiDocV2("spec.replicas")

failures = append(failures, common.Failure{
Text: fmt.Sprintf("Deployment %s/%s has %d replicas but %d are available", deployment.Namespace, deployment.Name, *deployment.Spec.Replicas, deployment.Status.Replicas),
Text: fmt.Sprintf("Deployment %s/%s has %d replicas but %d are available", deployment.Namespace, deployment.Name, *deployment.Spec.Replicas, deployment.Status.Replicas),
KubernetesDoc: doc,
Sensitive: []common.Sensitive{
{
Unmasked: deployment.Namespace,
20 changes: 18 additions & 2 deletions pkg/analyzer/hpa.go
@@ -17,17 +17,27 @@ import (
"fmt"

"github.com/k8sgpt-ai/k8sgpt/pkg/common"
"github.com/k8sgpt-ai/k8sgpt/pkg/kubernetes"
"github.com/k8sgpt-ai/k8sgpt/pkg/util"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
)

type HpaAnalyzer struct{}

func (HpaAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error) {

kind := "HorizontalPodAutoscaler"
apiDoc := kubernetes.K8sApiReference{
Kind: kind,
ApiVersion: schema.GroupVersion{
Group: "autoscaling",
Version: "v1",
},
OpenapiSchema: a.OpenapiSchema,
}

AnalyzerErrorsMetric.DeletePartialMatch(map[string]string{
"analyzer_name": kind,
@@ -76,8 +86,11 @@ }
}

if podInfo == nil {
doc := apiDoc.GetApiDocV2("spec.scaleTargetRef")

failures = append(failures, common.Failure{
Text: fmt.Sprintf("HorizontalPodAutoscaler uses %s/%s as ScaleTargetRef which does not exist.", scaleTargetRef.Kind, scaleTargetRef.Name),
Text: fmt.Sprintf("HorizontalPodAutoscaler uses %s/%s as ScaleTargetRef which does not exist.", scaleTargetRef.Kind, scaleTargetRef.Name),
KubernetesDoc: doc,
Sensitive: []common.Sensitive{
{
Unmasked: scaleTargetRef.Name,
Expand All @@ -94,8 +107,11 @@ func (HpaAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error) {
}

if containers <= 0 {
doc := apiDoc.GetApiDocV2("spec.scaleTargetRef.kind")

failures = append(failures, common.Failure{
Text: fmt.Sprintf("%s %s/%s does not have resource configured.", scaleTargetRef.Kind, a.Namespace, scaleTargetRef.Name),
Text: fmt.Sprintf("%s %s/%s does not have resource configured.", scaleTargetRef.Kind, a.Namespace, scaleTargetRef.Name),
KubernetesDoc: doc,
Sensitive: []common.Sensitive{
{
Unmasked: scaleTargetRef.Name,