
Change username/password #3603

Closed
snthibaud opened this issue Jan 29, 2018 · 19 comments
Comments

@snthibaud

snthibaud commented Jan 29, 2018

I am trying out Vitess on Kubernetes and have successfully set up a cluster, but now I want to access it from outside. I took a look at the guestbook example and discovered that it connects without a password from within the cluster. The cluster also seems to have a MySQL port exposed (port 3306). I would like to connect over that port, and I assume I should use the 'vt_app' user, but I think I somehow have to change the password (connecting without a password does not work, and if it did, the whole world would be able to access the database).
How should I change this password so that I can access the database from the outside?

@derekperkins
Member

@snthibaud I'm not sure how you installed Vitess. We're currently upgrading the Vitess helm chart, and it has instructions for enabling MySQL authentication.

If you want to set it up manually, here are the appropriate flags and file structure.

vtgate flags

-mysql_server_port=3306
-mysql_auth_server_static_file="/mysqlcreds/creds.json"
-mysql_server_socket_path=$mysql_server_socket_path # optional

mysql_auth_server_static_file format

Note that myusername needs to be repeated as shown, both as the key and as the UserData value.

{
  "myusername": [
    {
      "UserData": "myusername",
      "Password": "somepassword"
    }
  ],
  "vt_appdebug": []
}
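
With those flags in place, a connection from outside the cluster should look roughly like this (the host is a placeholder for however you expose the vtgate service, and the credentials are just the ones from the example file above):

mysql -h <vtgate-host-or-lb-ip> -P 3306 -u myusername -p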

@snthibaud
Author

@derekperkins Thank you! I have been trying the Helm approach, but got stuck on the error Error: apiVersion "autoscaling/v2beta1" in vitess/templates/vitess.yaml is not available. Any idea?

@derekperkins
Member

derekperkins commented Jan 29, 2018

Ok, that API went from autoscaling/v2alpha1 to autoscaling/v2beta1 in Kubernetes 1.8, so I'm guessing you're running 1.7. Try adding a maxReplicas line to your topology and have it match the vtgate.replicas number; then the chart won't add an HPA.

topology:
  cells:
    - name: "zone1"
      etcd:
        replicas: 3
      vtctld:
        replicas: 1
      vtgate:
        replicas: 3
        maxReplicas: 3

@snthibaud
Author

I created a 7-instance cluster, installed etcd-operator, and set maxReplicas to 3 as you suggested. The helm installation seems to finish, but vtgate doesn't come up. The logs show this:


vtgate | 2018-01-30T02:29:46.727208674Z | F0130 02:29:46.722902 1 vtgate.go:158] gateway.WaitForTablets failed: node doesn't exist
vtgate | 2018-01-30T02:29:46.727190759Z | E0130 02:29:46.722794 1 resilient_server.go:198] GetSrvKeyspaceNames(context.Background.WithDeadline(2018-01-30 02:30:16.72025092 +0000 UTC m=+30.022198954 [29.997449941s]), zone1) failed: node doesn't exist (no cached value, caching and returning error)
vtgate | 2018-01-30T02:29:46.722401974Z | E0130 02:29:46.722293 1 topology_watcher.go:157] cannot get tablets for cell: zone1: node doesn't exist
vtgate | 2018-01-30T02:29:46.720234318Z | I0130 02:29:46.720184 1 gateway.go:90] Gateway waiting for serving tablets...
vtgate | 2018-01-30T02:29:46.720136669Z | I0130 02:29:46.720086 1 discoverygateway.go:104] loading tablets for cells: zone1
vtgate | 2018-01-30T02:29:46.719913560Z | I0130 02:29:46.719782 1 buffer.go:144] vtgate buffer not enabled.
vtgate | 2018-01-30T02:29:46.701470038Z | ERROR: logging before flag.Parse: E0130 02:29:46.701246 1 syslogger.go:122] can't connect to syslog
vtgate | 2018-01-30T02:29:46.696788870Z | ++ exec /vt/bin/vtgate -topo_implementation=etcd2 -topo_global_server_address=etcd-global-client.default:2379 -topo_global_root=/vitess/global -logtostderr=true -stderrthreshold=0 -port=15001 -grpc_port=15991 -service_map=grpc-vtgateservice -cells_to_watch=zone1 -tablet_types_to_wait=MASTER,REPLICA -gateway_implementation=discoverygateway -cell=zone1
vtgate | 2018-01-30T02:29:46.695275407Z | + eval exec /vt/bin/vtgate -topo_implementation=etcd2 '-topo_global_server_address="etcd-global-client.default:2379"' -topo_global_root=/vitess/global -logtostderr=true -stderrthreshold=0 -port=15001 -grpc_port=15991 '-service_map="grpc-vtgateservice"' '-cells_to_watch="zone1"' '-tablet_types_to_wait="MASTER,REPLICA"' '-gateway_implementation="discoverygateway"' '-cell="zone1"'

Is there any other configuration I should have done?

@derekperkins
Member

@snthibaud That node doesn't exist error should eventually resolve itself once the vttablets spin up. Did it ever succeed? It usually takes about 5-10 minutes to complete on my cluster. Also, if you want to paste your site-values.yaml here, I can take a look at it.
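
If you want to watch the tablets come up while you wait, something like this should work (assuming the release was installed into the vitess namespace):

# watch pod status until the vttablet pods report Running/Ready
kubectl get pods --namespace vitess --watch

# tail one vttablet's startup logs (add -c vttablet if the pod has more than one container)
kubectl logs --namespace vitess <vttablet-pod-name> --follow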

@snthibaud
Author

I see. As far as I can tell, no tablets are being brought up at all. I used the example file and modified it according to your suggestion:

# This file contains default values for vitess.
#
# You can override these defaults when installing:
#   helm install -f site-values.yaml .
#
# The contents of site-values.yaml will be merged into this default config.
# It's not necessary to copy the defaults into site-values.yaml.
#
# For command-line flag maps like backupFlags or extraFlags,
# use 'flag_name: true|false' to enable or disable a boolean flag.

# The main topology map declares what resources should be created.
# Values for each component (etcd, vtctld, ...) that are not specified here
# will be taken from defaults defined below.
# topology:

# config will be stored as a ConfigMap and mounted where appropriate
config:
  # Backup flags will be applied to components that need them.
  # These are defined globally since all components should agree.
  backup:

    enabled: false

    # choose a backup service - valid values are gcs/s3
    # TODO: add file and ceph support
    # backup_storage_implementation: gcs

    #########
    # gcs settings
    #########

    # Google Cloud Storage bucket to use for backups
    # gcs_backup_storage_bucket: vitess-backups

    # root prefix for all backup-related object names
    # gcs_backup_storage_root: vtbackups

    # secret that contains Google service account json with read/write access to the bucket
    # kubectl create secret generic vitess-backups-creds --from-file=gcp-creds.json
    # can be omitted if running on a GCE/GKE node with default permissions
    # gcsSecret: vitess-gcs-creds

    #########
    # s3 settings
    #########

    # AWS region to use
    # s3_backup_aws_region: "us-east-1"

    # S3 bucket to use for backups
    # s3_backup_storage_bucket: "vitess-backups"

    # root prefix for all backup-related object names
    # s3_backup_storage_root: "vtbackups"

    # server-side encryption algorithm (e.g., AES256, aws:kms)
    # s3_backup_server_side_encryption: "AES256"

    # secret that contains AWS S3 credentials file with read/write access to the bucket
    # kubectl create secret generic s3-credentials --from-file=s3-creds
    # can be omitted if running on a node with default permissions
    # s3Secret: vitess-s3-creds

topology:
  globalCell:
    etcd:
      replicas: 3
  cells:
    - name: "zone1"

      # set failure-domain.beta.kubernetes.io/region
      # region: eastus

      # enable or disable mysql protocol support, with accompanying auth details
      mysqlProtocol:
        enabled: false
        username: bob
        # this is the secret that will be mounted as the user password
        # kubectl create secret generic myuser_password --from-literal=password=abc123
        passwordSecret: alice

      etcd:
        replicas: 3
      vtctld:
        replicas: 1
      vtgate:
        replicas: 3
        # if maxReplicas is higher than replicas, an HPA will be created
        maxReplicas: 3

# Default values for etcd resources defined in 'topology'
etcd:
  version: "3.2.13"
  replicas: 3
  resources:
    limits:
      cpu: 300m
      memory: 200Mi
    requests:
      cpu: 200m
      memory: 100Mi

# Default values for vtctld resources defined in 'topology'
vtctld:
  serviceType: "ClusterIP"
  vitessTag: "latest"
  resources:
    limits:
      cpu: 100m
      memory: 128Mi

# Default values for vtgate resources defined in 'topology'
vtgate:
  serviceType: "ClusterIP"
  vitessTag: "latest"
  resources:
    limits:
      cpu: 500m
      memory: 512Mi

# Default values for vttablet resources defined in 'topology'
vttablet:
  vitessTag: "latest"

  # valid values are mysql, maria, or percona
  # the flavor determines the base my.cnf file for vitess to function
  flavor: "mysql"

  # mysqlImage: "percona:5.7.20"
  mysqlImage: "mysql:5.7.20"
  # mysqlImage: "mariadb:10.3.4"
  resources:
    # common production values 2-4CPU/4-8Gi RAM
    limits:
      cpu: 500m
      memory: 1Gi
  mysqlResources:
    # common production values 4CPU/8-16Gi RAM
    limits:
      cpu: 500m
      memory: 1Gi
  # PVC for mysql
  dataVolumeClaimAnnotations:
  dataVolumeClaimSpec:
    # pd-ssd (Google Cloud)
    # managed-premium (Azure)
    # standard (AWS) - not sure what the default class is for ssd
    # Note: Leave storageClassName unset to use cluster-specific default class.
    #storageClassName: "pd-ssd"
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: "10Gi"

@derekperkins
Member

Ok, I see your problem now. The base values.yaml file is there for default options, but it can't describe everything, since we don't know what your schema looks like. Helm then uses a site-values.yaml that you provide and merges that config with these defaults to describe the final chart. https://docs.helm.sh/helm/#helm-install

You'll want to take something like the following data and create a new site-values.yaml file. Then you'll run:

helm install /path/to/github.com/youtube/vitess/helm/vitess/ -f /path/to/site-values.yaml --name=vitess --namespace vitess

site-values.yaml

topology:
  cells:
    - name: "zone1"
      etcd:
        replicas: 3
      vtctld:
        replicas: 1
      vtgate:
        replicas: 3
        maxReplicas: 3
      keyspaces:
        - name: "unsharded-dbname"
          shards:
            - name: "0"
              tablets:
                - type: "replica"
                  vttablet:
                    replicas: 2
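
To tie this back to the MySQL auth question at the top of the thread, the same site-values.yaml can also enable the chart's mysqlProtocol block shown in the default values above (a sketch only; the secret name, username, and password are placeholders):

# 1) create the secret that passwordSecret refers to
kubectl create secret generic myuser-password --from-literal=password=abc123

# 2) add this under the zone1 cell, alongside etcd/vtctld/vtgate
      mysqlProtocol:
        enabled: true
        username: myuser
        passwordSecret: myuser-password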

@snthibaud
Author

Thank you. I successfully created a cluster with shards. Both shards showed replication_reporter: no slave status, however. Should one of these become the shard master? I tried to choose one with ./kvtctl.sh InitShardMaster -force sharded-db/-80 zone1-1104301100, but it seems to hang on that command. I executed it on both shards and had to kill it with Ctrl-C, because it did not finish. They both show as master now, but with the health error Unknown database 'vt_sharded-db' (errno 1049). I feel like there should probably be only one master.

@derekperkins
Member

You can try it via the UI (there are actually two separate UIs, each with different functionality). After your helm install completes, it will show you URLs similar to these, which you'll be able to reach via kubectl proxy.

http://localhost:8001/api/v1/proxy/namespaces/vitess/services/vtctld:web/app/
http://localhost:8001/api/v1/proxy/namespaces/vitess/services/vtctld:web/app2/
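
Both URLs assume you have a local proxy to the Kubernetes API server running, e.g.:

kubectl proxy --port=8001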

app2 is the one you'll want to use first, and it will look something like this

[screenshot]

Click into the shard and you'll see a table like this (except both will be replicas)

[screenshot]

Choose Initialize Shard Master from the dropdown menu at the top

[screenshot]

Finally pick one of the tablets (doesn't matter which one) and click the Force checkbox

[screenshot]
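
If you'd rather do it from the command line, the shard state can also be inspected with the same kvtctl.sh wrapper you used earlier (GetShard and ListShardTablets are standard vtctl commands; the shard name comes from your previous comment):

# show the shard record, including which tablet the topology thinks is master
./kvtctl.sh GetShard sharded-db/-80

# list every tablet in the shard along with its type
./kvtctl.sh ListShardTablets sharded-db/-80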

@snthibaud
Author

@derekperkins Thanks. If I do that, the UI keeps hanging on the window below:
Uploading Screen Shot 2018-01-31 at 9.42.47.png…
Another question: if I want to distribute the data over both shards without any duplication of the data, I should make both shards (80- and -80) masters, right?

@derekperkins
Member

Your screenshot didn't upload, so I can't see it. I'm guessing it's just the spinner?

Yes, each shard runs as a separate "cluster", so you'll need a master per shard.
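
In command-line terms that means one InitShardMaster per shard, e.g. (the tablet aliases are placeholders; pick one tablet from each shard):

./kvtctl.sh InitShardMaster -force sharded-db/-80 <tablet alias in -80>
./kvtctl.sh InitShardMaster -force sharded-db/80- <tablet alias in 80->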

@snthibaud
Author

Yes, just a spinner indeed

@snthibaud
Author

It says 'Loading response...'

@derekperkins
Member

Have you tried deleting the entire Vitess install, including the PVCs, and reinstalling the helm chart?

helm del --purge vitess
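
Note that helm del on its own leaves the PersistentVolumeClaims behind; they can be listed and removed with plain kubectl (namespace assumed to be vitess):

kubectl get pvc --namespace vitess
kubectl delete pvc <pvc-name> --namespace vitess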

@snthibaud
Author

Yes, I deleted the entire cluster just before trying your suggestion to initialize the shard master in the GUI.
Then I installed etcd-operator, ran 'helm init', and installed the chart.

@derekperkins
Member

What do your logs from vtctld and vttablet say?

@derekperkins
Member

BTW, I just invited you to the Vitess Slack group. It might be easier to debug in a chat context.

@snthibaud
Author

Thank you!

@sougou
Contributor

sougou commented Feb 7, 2018

Looks like this thread continued on Slack.

sougou closed this as completed Feb 7, 2018