Merge pull request #51 from RedHatQE/few_improvements
BREAKING CHANGE: Making settings more precise
jyejare authored Sep 26, 2022
2 parents f632888 + d766deb commit 022b5d1
Showing 17 changed files with 282 additions and 175 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/new_release.yml
@@ -101,6 +101,6 @@ jobs:
uses: docker/build-push-action@v2
with:
context: .
file: ./Dockerfile.stable
file: ./Dockerfile.dev
push: true
tags: quay.io/redhatqe/cloudwash:latest
tags: quay.io/redhatqe/cloudwash:latest,quay.io/redhatqe/cloudwash:v${{ github.ref_name }}
2 changes: 2 additions & 0 deletions .gitignore
@@ -7,3 +7,5 @@ settings.yaml
**/cleanup*.log
*.pyc
dist/
build/
conf/*.yaml
1 change: 1 addition & 0 deletions Dockerfile.dev
@@ -17,6 +17,7 @@ RUN git clone --depth=1 https://github.com/RedHatQE/cloudwash.git && \
pip install --upgrade pip && \
pip install . && \
cp settings.yaml.template settings.yaml
for conffile in conf/*.yaml.template; do cp -- "$conffile" "${conffile%.yaml.template}.yaml"; done
# Workaround for the issue DistributionNotFound: The 'azure-mgmt-media~=1.0.0rc2' distribution was not found and is required by azure-mgmt
# RUN pip install azure-mgmt-media==1.0.0rc2

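Since the dev image now ships `settings.yaml` and the `conf/*.yaml` defaults copied from their templates, credentials would typically be injected at runtime. A hypothetical run sketch (the environment variable names and the assumption that `swach` can be invoked directly in the image are not taken from this diff):

```
podman run --rm \
  -e CLEANUP_AWS__AUTH__ACCESS_KEY="<access-key>" \
  -e CLEANUP_AWS__AUTH__SECRET_KEY="<secret-key>" \
  quay.io/redhatqe/cloudwash:latest swach -d aws --all
```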
32 changes: 0 additions & 32 deletions Dockerfile.stable

This file was deleted.

23 changes: 10 additions & 13 deletions README.md
@@ -16,9 +16,7 @@ cloudwash supports following cloud providers:
* VMWare vCenter (_support yet to be added_)
* OCP clusters deployed on public clouds (_support yet to be added_)

The list of resources it helps to clean are:

> VMs, Network Interfaces, Public IPs, Disks, Azure Resource group and more.
The list of resource types it helps to clean, along with the cleanup criteria for each cloud provider, can be found in the [settings.yaml.template](https://github.com/RedHatQE/cloudwash/blob/master/settings.yaml.template) file.

## Installation

@@ -51,22 +49,21 @@ $ pip install cloudwash
### Docker Image Installation

#### From Container image registry
The [container image](https://quay.io/repository/redhatqe/cloudwash) for cloudwash is available in quay. This image provides the cloudwash installed from released python package.
The [container image](https://quay.io/repository/redhatqe/cloudwash) for cloudwash is available in Quay. This image provides cloudwash installed from the released Python package and is tagged with release versions. The `latest` tag always points to the most recently released version.

#### Build from local DockerFile
This github repo contains two set of docker files, use any container building service to build from the dockerfile:
This GitHub repo contains a Dockerfile; use any container-building service to build from it:

1. **Stable Dockerfile** - Build container from `Dockerfile.stable` that should build a container from released python package of cloudwash. This would be very similar to quiy.io image above.
2. **Development Dockerfile** - Build container from `Dockerfile.dev` that should build a container from the cloudwash github master branch giving the access to pre-released features.
Build the container from `Dockerfile.dev`, which builds from the cloudwash GitHub master branch, giving access to pre-released features.


### OC BuildConfig Installation
This github repo provides the ready to use BuildConfig on OCP / Kubernetes. The build config should create buildconfigs to build stable and dev container images. Use those image to build cloudwash pod.
This GitHub repo provides a ready-to-use BuildConfig for OCP / Kubernetes. Applying it creates a BuildConfig that builds a container image from the master branch; use that image to build the cloudwash pod.


## Configuration

The `cloudwash` uses the `DynaConf` configuration python module to access the data in `settings.yaml`, it also allows an unique way of declaring secrets via Environment variables instead of putting in plain `settings.yaml`.
`cloudwash` uses the `DynaConf` configuration Python module to access the data in `settings.yaml` or in the `conf` directory settings; it also allows a unique way of declaring secrets via environment variables instead of putting them in plain text in `settings.yaml`.

e.g.: The Azure password field can be set by exporting an environment variable.
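For instance, with the `CLEANUP` prefix configured in `cloudwash/config.py` and DynaConf's double-underscore nesting, the Azure secret could presumably be supplied as shown below; the exact variable name is inferred from the new `azure.auth.secret_id` settings path and is not spelled out in this diff:

```
export CLEANUP_AZURE__AUTH__SECRET_ID="<azure-client-secret>"
```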

@@ -76,7 +73,7 @@ e.g: The Azure password field can be set via environment variable by exporting t

#### Configuration with PyPi package:

Copy `settings.yaml.template` to local `~/cloudwash` directory as `settings.yaml`, update it with the cloud provider credentials and other configuration details for successful resource reporting and cleanup.
Copy/download `settings.yaml.template` into the local `~/cloudwash` directory as `settings.yaml`, then update it with the cloud provider credentials and other configuration details for successful resource reporting and cleanup.


#### Configuration with cloudwash container images:
@@ -96,15 +93,15 @@ _Or_ - Export/Set the environment variables for all or only sensitive credential
Usage: swach [OPTIONS] COMMAND [ARGS]...
A Cleanup Utility to remove the VMs, Discs and Nics from Providers!
A Cleanup Utility to remove cloud resources from cloud Providers!
Options:
-d, --dry Only show what will be removed from Providers!
--help Show this message and exit.
Commands:
azure Cleanup Azure provider
ec2 Cleanup Amazon provider
aws Cleanup Amazon provider
gce Cleanup GCE provider
openstack Cleanup OSP provider
rhev Cleanup RHEV provider
@@ -131,7 +128,7 @@ Options:
```

* Cleanup Dry Run (Monitor only mode):
* Cleanup Dry Run (monitor-only mode using the `-d` option):

```
# swach -d azure --all
6 changes: 3 additions & 3 deletions cloudwash/cli.py
@@ -2,8 +2,8 @@

from cloudwash.config import validate_provider
from cloudwash.logger import logger
from cloudwash.providers.aws import cleanup as awsCleanup
from cloudwash.providers.azure import cleanup as azureCleanup
from cloudwash.providers.ec2 import cleanup as ec2Cleanup
from cloudwash.providers.gce import cleanup as gceCleanup

# Adding the pythonpath for importing modules from cloudwash packages
@@ -82,11 +82,11 @@ def azure(ctx, vms, discs, nics, pips, _all, _all_rg):
@common_options
@click.option("--pips", is_flag=True, help="Remove only Public IPs from the provider")
@click.pass_context
def ec2(ctx, vms, discs, nics, pips, _all):
def aws(ctx, vms, discs, nics, pips, _all):
# Validate Amazon Settings
validate_provider(ctx.command.name)
is_dry_run = ctx.parent.params["dry"]
ec2Cleanup(vms=vms, discs=discs, nics=nics, pips=pips, _all=_all, dry_run=is_dry_run)
awsCleanup(vms=vms, discs=discs, nics=nics, pips=pips, _all=_all, dry_run=is_dry_run)


@cleanup_providers.command(help="Cleanup VMWare provider")
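With the subcommand renamed from `ec2` to `aws`, an Amazon dry-run cleanup would presumably now be invoked as below, mirroring the Azure example in the README:

```
# swach -d aws --all
```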
24 changes: 12 additions & 12 deletions cloudwash/client.py
@@ -13,26 +13,26 @@ def compute_client(compute_resource, **kwargs):
"""
if compute_resource == "azure":
client = wrapanapi.AzureSystem(
username=settings.providers.azure.username,
password=settings.providers.azure.password,
tenant_id=settings.providers.azure.tenant_id,
subscription_id=settings.providers.azure.subscription_id,
username=settings.azure.auth.client_id,
password=settings.azure.auth.secret_id,
tenant_id=settings.azure.auth.tenant_id,
subscription_id=settings.azure.auth.subscription_id,
provisioning={
"resource_group": settings.providers.azure.resource_group,
"resource_group": settings.azure.auth.resource_group,
"template_container": None,
"region_api": settings.providers.azure.region,
"region_api": settings.azure.auth.region,
},
)
elif compute_resource == "gce":
client = wrapanapi.GoogleCloudSystem(
project=settings.providers.gce.project_id,
service_account=json.loads(settings.providers.gce.service_account),
project=settings.gce.auth.project_id,
service_account=json.loads(settings.gce.auth.service_account),
)
elif compute_resource == "ec2":
elif compute_resource == "aws":
client = wrapanapi.EC2System(
username=settings.providers.ec2.username,
password=settings.providers.ec2.password,
region=kwargs['ec2_region'],
username=settings.aws.auth.access_key,
password=settings.aws.auth.secret_key,
region=kwargs['aws_region'],
)
else:
raise ValueError(
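Judging from the settings paths referenced above (`settings.azure.auth.*`, `settings.gce.auth.*`, `settings.aws.auth.*`), the restructured per-provider auth configuration presumably looks roughly like this sketch; the key names come from this diff, the values are placeholders, and the authoritative layout lives in `settings.yaml.template` and the `conf/*.yaml.template` files:

```
AZURE:
  AUTH:
    CLIENT_ID: ""
    SECRET_ID: ""
    TENANT_ID: ""
    SUBSCRIPTION_ID: ""
    RESOURCE_GROUP: ""
    REGION: ""
GCE:
  AUTH:
    PROJECT_ID: ""
    SERVICE_ACCOUNT: '{"type": "service_account"}'
AWS:
  AUTH:
    ACCESS_KEY: ""
    SECRET_KEY: ""
    REGIONS:
      - all
```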
11 changes: 9 additions & 2 deletions cloudwash/config.py
@@ -4,13 +4,20 @@
from cloudwash.logger import logger

# Initialize and Configure Settings
settings = Dynaconf(envvar_prefix="CLEANUP", settings_file="settings.yaml")
settings = Dynaconf(
core_loaders=["YAML"],
envvar_prefix="CLEANUP",
settings_file="settings.yaml",
preload=["conf/*.yaml"],
envless_mode=True,
lowercase_read=True,
)


def validate_provider(provider_name):
provider = provider_name.upper()
provider_settings = [
f"{provider}.{setting_key}" for setting_key in settings.to_dict()["PROVIDERS"].get(provider)
f"{provider}.{setting_key}" for setting_key in settings.to_dict().get(provider)
]
settings.validators.register(Validator(*provider_settings, ne=None))
try:
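A minimal usage sketch of the validation above, as wired up in `cli.py`; the collapsed `try:` block is assumed to call `settings.validators.validate()` and log any failure:

```
from cloudwash.config import validate_provider

# Registers a not-None check for every top-level section found under AWS
# in the merged settings (settings.yaml plus the preloaded conf/*.yaml),
# then validates them so missing provider configuration is caught before
# any cleanup runs.
validate_provider("aws")
```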
59 changes: 34 additions & 25 deletions cloudwash/providers/ec2.py → cloudwash/providers/aws.py
@@ -11,52 +11,61 @@ def cleanup(**kwargs):

is_dry_run = kwargs["dry_run"]
data = ['VMS', 'DISCS', 'PIPS', 'RESOURCES']
regions = settings.providers.ec2.regions
with compute_client("ec2", ec2_region="us-west-2") as client:
regions = settings.aws.auth.regions
with compute_client("aws", aws_region="us-west-2") as client:
if "all" in regions:
regions = client.list_regions()
for region in regions:
dry_data['VMS']['stop'] = []
dry_data['VMS']['skip'] = []
for items in data:
dry_data[items]['delete'] = []
with compute_client("ec2", ec2_region=region) as ec2_client:
with compute_client("aws", aws_region=region) as aws_client:
# Dry Data Collection Defs
def dry_vms():
all_vms = ec2_client.list_vms()
all_vms = aws_client.list_vms()
for vm in all_vms:
if vm.name in settings.providers.ec2.except_vm_list:
if vm.name in settings.aws.exception.vm.vm_list:
dry_data["VMS"]["skip"].append(vm.name)
continue
elif total_running_time(vm).minutes >= settings.sla_minutes:
if vm.name in settings.providers.ec2.except_vm_stop_list:
elif total_running_time(vm).minutes >= settings.aws.criteria.vm.sla_minutes:
if vm.name in settings.aws.exception.vm.stop_list:
dry_data["VMS"]["stop"].append(vm.name)
continue
elif vm.name.startswith(settings.delete_vm):
elif vm.name.startswith(settings.aws.criteria.vm.delete_vm):
dry_data["VMS"]["delete"].append(vm.name)
return dry_data["VMS"]

def dry_nics():
rnics = ec2_client.get_all_unused_network_interfaces()
[dry_data["NICS"]["delete"].append(dnic["NetworkInterfaceId"]) for dnic in rnics]
return dry_data["NICS"]["delete"]
rnics = []
if settings.aws.criteria.nic.unassigned:
rnics = aws_client.get_all_unused_network_interfaces()
[
dry_data["NICS"]["delete"].append(dnic["NetworkInterfaceId"])
for dnic in rnics
]
return rnics

def dry_discs():
rdiscs = ec2_client.get_all_unattached_volumes()
[dry_data["DISCS"]["delete"].append(ddisc["VolumeId"]) for ddisc in rdiscs]
return dry_data["DISCS"]["delete"]
rdiscs = []
if settings.aws.criteria.disc.unassigned:
rdiscs = aws_client.get_all_unattached_volumes()
[dry_data["DISCS"]["delete"].append(ddisc["VolumeId"]) for ddisc in rdiscs]
return rdiscs

def dry_pips():
rpips = ec2_client.get_all_disassociated_addresses()
[dry_data["PIPS"]["delete"].append(dpip["AllocationId"]) for dpip in rpips]
return dry_data["PIPS"]["delete"]
rpips = []
if settings.aws.criteria.public_ip.unassigned:
rpips = aws_client.get_all_disassociated_addresses()
[dry_data["PIPS"]["delete"].append(dpip["AllocationId"]) for dpip in rpips]
return rpips

# Remove / Stop VMs
def remove_vms(avms):
# Remove VMs
[ec2_client.get_vm(vm_name).delete() for vm_name in avms["delete"]]
[aws_client.get_vm(vm_name).delete() for vm_name in avms["delete"]]
# Stop VMs
[ec2_client.get_vm(vm_name).stop() for vm_name in avms["stop"]]
[aws_client.get_vm(vm_name).stop() for vm_name in avms["stop"]]

# Actual Cleaning and dry execution
logger.info(f"\nResources from the region: {region}")
@@ -69,18 +78,18 @@ def remove_vms(avms):
logger.info(f"Skipped VMs: \n{avms['skip']}")
if kwargs["nics"] or kwargs["_all"]:
rnics = dry_nics()
if not is_dry_run:
ec2_client.remove_all_unused_nics()
if not is_dry_run and rnics:
aws_client.remove_all_unused_nics()
logger.info(f"Removed NICs: \n{rnics}")
if kwargs["discs"] or kwargs["_all"]:
rdiscs = dry_discs()
if not is_dry_run:
ec2_client.remove_all_unused_volumes()
if not is_dry_run and rdiscs:
aws_client.remove_all_unused_volumes()
logger.info(f"Removed Discs: \n{rdiscs}")
if kwargs["pips"] or kwargs["_all"]:
rpips = dry_pips()
if not is_dry_run:
ec2_client.remove_all_unused_ips()
if not is_dry_run and rpips:
aws_client.remove_all_unused_ips()
logger.info(f"Removed PIPs: \n{rpips}")
if is_dry_run:
echo_dry(dry_data)
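The criteria and exception settings read above (`settings.aws.criteria.*`, `settings.aws.exception.*`) suggest an AWS section shaped roughly like the sketch below; key names come from the code in this diff, the values are purely illustrative, and the canonical layout remains in the repository templates:

```
AWS:
  CRITERIA:
    VM:
      SLA_MINUTES: 120      # act only on VMs running longer than this
      DELETE_VM: "test-"    # name prefix of VMs eligible for deletion
    NIC:
      UNASSIGNED: True      # remove unused network interfaces
    DISC:
      UNASSIGNED: True      # remove unattached volumes
    PUBLIC_IP:
      UNASSIGNED: True      # release disassociated addresses
  EXCEPTION:
    VM:
      VM_LIST: []           # VMs to skip entirely
      STOP_LIST: []         # VMs to stop (once past SLA) instead of delete
```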