AI-Based Operating System Evaluation

This project automates gathering information from managed nodes, evaluating best practices with large language models (LLMs), and generating comprehensive reports. It supports compliance checks and recommends improvements through AI-driven workflows. The project is structured in two main phases: data gathering and AI-driven evaluation. Ansible handles the automation, keeping operations efficient and scalable across diverse environments. Key features include dynamic configuration, integration with various LLMs, and human-readable final reports.
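
As a rough sketch of how the two phases could fit together in a playbook, the example below first collects configuration from the managed nodes and then hands the gathered data to the evaluation role. The host group, task names, and variables are illustrative assumptions, not the repository's actual layout.

- name: Phase 1 - gather configuration from the managed nodes
  hosts: managed_nodes
  gather_facts: true
  tasks:
    - name: Read the sshd configuration
      ansible.builtin.slurp:
        src: /etc/ssh/sshd_config
      register: sshd_config_raw

    - name: Keep the decoded config for the evaluation phase
      ansible.builtin.set_fact:
        sshd_config_content: "{{ sshd_config_raw.content | b64decode }}"

- name: Phase 2 - evaluate best practices with an LLM and build the report
  hosts: managed_nodes
  gather_facts: false
  roles:
    - role: evaluation-tool   # role name taken from this README; its variables are assumptions here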

The code is in the ./ansible directory.

Supported models

For the online LLMs, we included the two models shown in supported_models.png.

For offline use, we also support Llama 3.1 running on Ollama.
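
As a minimal sketch of what an offline evaluation step could look like, assuming Ollama runs on the control node with the llama3.1 model pulled, a task can call Ollama's local /api/generate endpoint through Ansible's uri module. The prompt, variable, and task names are assumptions for illustration, not the repository's actual implementation.

- name: Evaluate the gathered sshd configuration with a local Llama 3.1 model
  ansible.builtin.uri:
    url: "http://localhost:11434/api/generate"
    method: POST
    body_format: json
    body:
      model: "llama3.1"
      prompt: "Review this sshd_config for security best practices:\n{{ sshd_config_content }}"
      stream: false
    timeout: 300
  register: ollama_response
  delegate_to: localhost

- name: Show the model's evaluation
  ansible.builtin.debug:
    msg: "{{ ollama_response.json.response }}"

Setting stream to false returns the whole answer in a single JSON body, so the evaluation text is available directly as ollama_response.json.response.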

Final evaluation report

The final report looks like this:

# Report for node01
## Category: security
### ssh config: no
  - Issue: Port 22 is not explicitly set, it's best to set this explicitly.
  - Fix: Uncomment and set the Port to 22.

  - Issue: PermitRootLogin is not set, it's recommended to set this to no to prevent root logins.
  - Fix: Set PermitRootLogin to no.

  - Issue: PasswordAuthentication is set to no, it's recommended to set this to yes for a more secure configuration.
  - Fix: Set PasswordAuthentication to yes.
### ssh key permissions: yes
...

Documentation for initial setup and usage:

Documentation for the evaluation-tool role:
