AWS use spot instances #1

Closed
slivne opened this issue Jan 7, 2016 · 2 comments

Comments

@slivne
Contributor

slivne commented Jan 7, 2016

No description provided.

@lmr
Contributor

lmr commented Jan 25, 2016

@slivne, I keep meaning to ask: can you describe this in more detail? I'm not familiar with the term 'spot instances'.

@lmr
Contributor

lmr commented Aug 8, 2016

Now that we can run tests on libvirt, I think there's less need to use spot instances. I'm closing this issue.
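
For readers unfamiliar with the term: spot instances are spare EC2 capacity sold at a discount that AWS can reclaim at short notice. A minimal sketch of requesting one with boto3, where the region, AMI ID, instance type, and maximum price are placeholder assumptions rather than values from this issue:

    import boto3

    # Illustrative sketch: request a single EC2 spot instance.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.request_spot_instances(
        SpotPrice="0.10",                # max hourly price we are willing to pay
        InstanceCount=1,
        LaunchSpecification={
            "ImageId": "ami-12345678",   # placeholder AMI
            "InstanceType": "m5.large",  # placeholder instance type
        },
    )
    print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])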

@lmr lmr closed this as completed Aug 8, 2016
amoskong pushed a commit to amoskong/scylla-cluster-tests that referenced this issue Jun 14, 2018
ShlomiBalalis added a commit to ShlomiBalalis/scylla-cluster-tests that referenced this issue Sep 23, 2019
ShlomiBalalis added a commit to ShlomiBalalis/scylla-cluster-tests that referenced this issue Sep 23, 2019
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 15, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 15, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 15, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 15, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 15, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 16, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 16, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 16, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 21, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 22, 2021
Test asymmetric clusters:
- bootstrapping an asymmetric cluster
- adding asymmetric nodes to a cluster

SMP selection (see the sketch after this message):
- minimum SMP should be calculated as 50% of max SMP
- max SMP is calculated as it is now by default
- new parameter: db_nodes_shards_selection (default | random)

Add 2 new longevities to use it:
1. One based on large-partitions-8h, shortened to 3 hours, to be run
daily (like all the others).
2. Another based on 200gb-48h, shortened to 12h.

Task:
https://trello.com/c/WYdqMLgp/2672-test-asymmetric-clusters-bootstrapping-an-asymmetric-cluster-adding-asymmetric-nodes-to-cluster-customer-issue

feature(asymmetric cluster): address comments scylladb#1

feature(asymmetric cluster): address comments scylladb#2

feature(asymmetric cluster): address comments scylladb#3
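
A minimal sketch of the random-SMP selection described above, assuming a hypothetical helper; the function name and parameter plumbing are illustrative, not the actual SCT implementation:

    import random

    def choose_smp(max_smp: int, shards_selection: str = "default") -> int:
        """Pick the SMP (shard count) for a DB node.

        'default' keeps the current behaviour (use max SMP); 'random' picks a
        value between 50% of max SMP and max SMP, per node.
        """
        if shards_selection == "random":
            min_smp = max(1, max_smp // 2)  # minimum SMP = 50% of max SMP
            return random.randint(min_smp, max_smp)
        return max_smp

    # Example: a 16-core node may get anywhere from 8 to 16 shards.
    print(choose_smp(16, shards_selection="random"))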
bentsi pushed a commit that referenced this issue Mar 22, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 29, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 29, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 29, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 29, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 29, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 29, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 29, 2021
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Mar 30, 2021
Pass the nemesis name into the 'decommission' method that is called by the ShrinkCluster
nemesis. This info will be printed in the email's terminated-nodes list,
in the column 'Terminated by nemesis'.

fix(shrink cluster): address comments scylladb#1
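
A minimal sketch of the idea, with a hypothetical Node class and decommission signature (the actual SCT method differs):

    from typing import Optional

    class Node:
        """Hypothetical stand-in for an SCT DB node object."""
        def __init__(self, name: str):
            self.name = name
            self.terminated_by_nemesis: Optional[str] = None

    def decommission(node: Node, nemesis_name: Optional[str] = None) -> None:
        # Record which nemesis triggered the termination; this would feed the
        # 'Terminated by nemesis' column in the email report.
        node.terminated_by_nemesis = nemesis_name
        # ... actual decommission steps would go here ...

    class ShrinkCluster:
        name = "ShrinkCluster"

        def disrupt(self, node: Node) -> None:
            decommission(node, nemesis_name=self.name)

    node = Node("db-node-1")
    ShrinkCluster().disrupt(node)
    print(node.terminated_by_nemesis)  # -> ShrinkCluster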
juliayakovlev added a commit to juliayakovlev/scylla-cluster-tests that referenced this issue Apr 21, 2021
dkropachev added a commit to dkropachev/scylla-cluster-tests that referenced this issue Jul 13, 2021
We use kubectl wait to wait until all resources get into the proper state.
There are two problems with this:
1. kubectl wait fails when no resource matches the criteria
2. if resources are provisioned gradually, kubectl wait can slip
   through the cracks when half of the resources are provisioned and the rest
   are not even deployed

In some places in SCT we use sleeps to tackle scylladb#1, which leads to failures on slow PCs.
This PR addresses the problem by wrapping kubectl wait so that it
restarts when no resources are there, tracks the number of resources reported,
and waits and re-runs if the resource count has changed.
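
A rough sketch of the wrapping described above, assuming a plain subprocess call to kubectl; the function names, retry counts, and timeouts are illustrative, not the actual SCT implementation:

    import subprocess
    import time

    def count_matching(kind: str, selector: str, namespace: str) -> int:
        """Return how many resources currently match the label selector."""
        out = subprocess.run(
            ["kubectl", "get", kind, "-l", selector, "-n", namespace, "-o", "name"],
            capture_output=True, text=True, check=True,
        ).stdout
        return len(out.split())

    def kubectl_wait(kind: str, selector: str, condition: str = "condition=Ready",
                     namespace: str = "default", attempts: int = 30) -> None:
        """Wrap 'kubectl wait': retry while nothing matches yet, and re-run the
        wait whenever the number of matched resources changes in the meantime."""
        for _ in range(attempts):
            current = count_matching(kind, selector, namespace)
            if current == 0:          # nothing deployed yet -> retry instead of failing
                time.sleep(10)
                continue
            subprocess.run(
                ["kubectl", "wait", kind, "-l", selector, "-n", namespace,
                 "--for", condition, "--timeout=300s"],
                check=True,
            )
            if count_matching(kind, selector, namespace) == current:
                return                # no new resources appeared while waiting
            # resource count changed -> loop and wait again
        raise TimeoutError(f"{kind} matching {selector!r} did not settle")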
dkropachev added a commit to dkropachev/scylla-cluster-tests that referenced this issue Jul 13, 2021
dkropachev added a commit to dkropachev/scylla-cluster-tests that referenced this issue Jul 13, 2021
dkropachev added a commit to dkropachev/scylla-cluster-tests that referenced this issue Jul 14, 2021
dkropachev added a commit to dkropachev/scylla-cluster-tests that referenced this issue Jul 14, 2021
dkropachev added a commit to dkropachev/scylla-cluster-tests that referenced this issue Jul 14, 2021
bentsi pushed a commit that referenced this issue Jul 14, 2021
vponomaryov pushed a commit to vponomaryov/scylla-cluster-tests that referenced this issue Oct 22, 2021
vponomaryov pushed a commit to vponomaryov/scylla-cluster-tests that referenced this issue Oct 22, 2021
fruch pushed a commit that referenced this issue Dec 24, 2024
Fix: change the directory for startup script upload from /tmp to $HOME.
The change is required for DB nodes deployed in Cloud, where the /tmp dir
is mounted with the noexec option, which makes script execution impossible
there.

As per the discussion (#1), for DB nodes deployed in SCT the startup_script
can be executed either from $HOME or /tmp.

refs:
#1: #9608
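
A small illustrative check for the noexec situation described above; the mount-flag parsing is a generic sketch, not SCT code:

    import os

    def tmp_is_noexec(mounts_file: str = "/proc/mounts") -> bool:
        """Return True if /tmp is a separate mount carrying the noexec flag."""
        with open(mounts_file) as mounts:
            for line in mounts:
                fields = line.split()
                if len(fields) >= 4 and fields[1] == "/tmp":
                    return "noexec" in fields[3].split(",")
        return False

    # Prefer $HOME for the startup script when /tmp is not executable.
    startup_script_dir = os.path.expanduser("~") if tmp_is_noexec() else "/tmp"
    print(f"startup script will be uploaded to {startup_script_dir}")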
mikliapko added a commit that referenced this issue Dec 27, 2024
fruch pushed a commit that referenced this issue Jan 2, 2025
Temporary workaround for docker installation on the RHEL9 distro because of
issue (1).

In the provided fix, docker packages are installed manually from the repo,
hardcoding the OS version ($releasever) to a specific value in the repo file
/etc/yum.repos.d/docker-ce.repo.

Once the issue is resolved, we can return to the previous approach with the
installation script.

refs:
#1: moby/moby#49169
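
A minimal sketch of the hardcoding step described above; the pinned version value and file handling are assumptions, not the exact workaround:

    from pathlib import Path

    # Illustrative sketch: replace $releasever in docker-ce.repo with a fixed
    # value so dnf on a RHEL9-like distro resolves the Docker repo correctly.
    REPO_FILE = Path("/etc/yum.repos.d/docker-ce.repo")
    PINNED_RELEASEVER = "9"  # assumption: the OS major version to hardcode

    content = REPO_FILE.read_text()
    REPO_FILE.write_text(content.replace("$releasever", PINNED_RELEASEVER))
    print(f"pinned $releasever to {PINNED_RELEASEVER} in {REPO_FILE}")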
fruch pushed a commit that referenced this issue Jan 2, 2025
Firewall should be disabled for RHEL-like distributions. Otherwise, it
blocks incoming requests to port 3000 on the monitoring node (1).

The same operation has already been implemented for the DB nodes setup and
is only refactored here.

refs:
#1: #9630
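
An illustrative version of the disable step using plain systemctl calls via subprocess; the actual SCT helper is not shown in this thread:

    import subprocess

    def disable_firewall() -> None:
        """Stop and disable firewalld so port 3000 on the monitoring node is reachable."""
        for action in ("stop", "disable"):
            subprocess.run(["sudo", "systemctl", action, "firewalld"], check=False)

    disable_firewall()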
vponomaryov pushed a commit that referenced this issue Jan 7, 2025
The error message Manager returns for the enospc scenario has been changed
to a more generic one (#1), so it no longer makes sense to verify it.

Moreover, there is a plan to fix the free-disk-space check behaviour, and
the whole test will probably need to be reworked (#2).

refs:
#1 - scylladb/scylla-manager#4087
#2 - scylladb/scylla-manager#4184
mikliapko added a commit that referenced this issue Jan 7, 2025
According to the comment (1), setting the keyspace strategy and replication
factor is not needed when restoring the schema within one DC.

It should be brought back once (2) is implemented, which will unblock schema
restore into a different DC. For now, the schema can only be restored within
one DC.

Refs:
#1: scylladb/scylla-manager#4041 (issuecomment-2565489699)
#2: scylladb/scylla-manager#4049
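
For reference, the kind of step being skipped is roughly the statement below; the keyspace name, DC name, and replication factor are placeholders, and the cassandra-driver call is only an illustration:

    # Illustrative sketch: the "set keyspace strategy and rf" step that is
    # skipped when restoring the schema within a single DC.
    from cassandra.cluster import Cluster  # pip install cassandra-driver

    ALTER_KS = (
        "ALTER KEYSPACE demo_ks WITH replication = "
        "{'class': 'NetworkTopologyStrategy', 'target_dc': 3}"
    )

    session = Cluster(["127.0.0.1"]).connect()
    session.execute(ALTER_KS)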
mergify bot pushed a commit that referenced this issue Jan 7, 2025
mergify bot pushed a commit that referenced this issue Jan 7, 2025
mergify bot pushed a commit that referenced this issue Jan 7, 2025
mikliapko added a commit that referenced this issue Jan 7, 2025
mikliapko added a commit that referenced this issue Jan 7, 2025
mikliapko added a commit that referenced this issue Jan 7, 2025
fruch pushed a commit that referenced this issue Jan 8, 2025
fruch pushed a commit that referenced this issue Jan 8, 2025
fruch pushed a commit that referenced this issue Jan 8, 2025