error: pq: password authentication failed for user "rpuser" #179

Closed
patsevanton opened this issue Jul 5, 2021 · 1 comment

Comments

patsevanton (Contributor) commented Jul 5, 2021:

Hello! I'm trying to install ReportPortal following the installation instructions:

helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add elastic https://helm.elastic.co
helm repo update

Create a file value-elastic.yaml with the following content:

extraEnvs:
  - name: discovery.type
    value: single-node
  - name: cluster.initial_master_nodes
    value: ""

File value.yaml:

## String to partially override reportportal.fullname template (will maintain the release name)
##
# nameOverride:

## String to fully override reportportal.fullname template
##
# fullnameOverride:

serviceindex:
  name: index
  repository: reportportal/service-index
  tag: 5.0.10
  pullPolicy: IfNotPresent
  resources:
    requests:
      cpu: 150m
      memory: 128Mi
    limits:
      cpu: 200m
      memory: 256Mi
  podAnnotations: {}
  securityContext: {}

uat:
  repository: reportportal/service-authorization
  name: uat
  tag: 5.4.0
  pullPolicy: IfNotPresent
  resources:
    requests:
      cpu: 100m
      memory: 512Mi
    limits:
      cpu: 500m
      memory: 2048Mi
  sessionLiveTime: 86400
  podAnnotations: {}
  jvmArgs: "-Djava.security.egd=file:/dev/./urandom -XX:MinRAMPercentage=60.0 -XX:MaxRAMPercentage=90.0"
  securityContext: {}
  serviceAccountName: ""

serviceui:
  repository: reportportal/service-ui
  tag: v5.4.0
  name: ui
  pullPolicy: IfNotPresent
  resources:
    requests:
      cpu: 100m
      memory: 64Mi
    limits:
      cpu: 200m
      memory: 128Mi
  podAnnotations: {}
  securityContext: {}
  serviceAccountName: ""

serviceapi:
  repository: reportportal/service-api
  tag: 5.4.0
  name: api
  pullPolicy: IfNotPresent
  replicaCount: 1
  readinessProbe:
    initialDelaySeconds: 30
    periodSeconds: 20
    timeoutSeconds: 3
    failureThreshold: 20
  resources:
    requests:
      cpu: 500m
      memory: 1024Mi
    limits:
      cpu: 1000m
      memory: 2048Mi
  jvmArgs: "-Djava.security.egd=file:/dev/./urandom -XX:+UseG1GC -XX:MinRAMPercentage=60.0 -XX:InitiatingHeapOccupancyPercent=70 -XX:MaxRAMPercentage=90.0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"
  queues:
    totalNumber: 
    perPodNumber: 
  podAnnotations: {}
  securityContext: {}
  serviceAccountName: ""

servicejobs:
  repository: reportportal/service-jobs
  tag: 5.4.0
  name: jobs
  pullPolicy: IfNotPresent
  clean:
    chunksize: 1000
    cron:
      attachment: 0 0 */24 * * *
      log: 0 0 */24 * * *
      launch: 0 0 */24 * * *
      storage: 0 0 */24 * * *
      storageproject: 0 */5 * * * *
  resources:
    requests:
      cpu: 100m
      memory: 248Mi
    limits:
      cpu: 100m
      memory: 372Mi
  podAnnotations: {}
  securityContext: {}
  serviceAccountName: ""

migrations:
  repository: reportportal/migrations
  tag: 5.4.0
  pullPolicy: IfNotPresent
  resources:
    requests:
      cpu: 100m
      memory: 64Mi
    limits:
      cpu: 100m
      memory: 128Mi
  podAnnotations: {}
  securityContext: {}
  serviceAccountName: ""
  metadataAnnotations:
    enabled: true
    hooks:
      "helm.sh/hook": "pre-install,pre-upgrade"
      "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"

serviceanalyzer:
  repository: reportportal/service-auto-analyzer
  tag: 5.4.0
  name: analyzer
  pullPolicy: IfNotPresent
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 100m
      memory: 512Mi
  podAnnotations: {}
  securityContext: {}
  serviceAccountName: ""

serviceanalyzertrain:
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 200m
      memory: 512Mi
  podAnnotations: {}
  securityContext: {}
  serviceAccountName: ""

rabbitmq:
  SecretName: "rabbitmq"
  installdep:
    enable: false
  endpoint:
    cloudservice: false
    address: rabbitmq.default.svc.cluster.local
    port: 5672
    user: rabbitmq
    apiport: 15672
    apiuser: rabbitmq
    password: password

postgresql:
  SecretName: "postgresql"
  installdep:
    enable: false
  endpoint:
    cloudservice: false
    address: postgresql.default.svc.cluster.local
    port: 5432
    user: rpuser
    dbName: reportportal
    password: password

elasticsearch:
  installdep:
    enable: false
  endpoint: http://elasticsearch-master:9200

minio:
  secretName: "minio"
  enabled: true
  installdep:
    enable: false
  endpoint: http://minio.default.svc.cluster.local:9000
  endpointshort: minio.default.svc.cluster.local:9000
  region:
  accesskey: 518NJ1Pa4Q
  secretkey: QpgDErQoY3sd06EYVgBZrmegtLjsQyvFiU8tcwSV

# Ingress configuration for the ui
# If you have installed ingress controller and want to expose application - set INGRESS.ENABLE to true.
# If you have some domain name set INGRESS.USEDOMAINNAME variable to true and set this fqdn to INGRESS.HOSTS
# If you don't have any domain names - set INGRESS.USEDOMAINNAME to false
ingress:
  enable: true
  # IF YOU HAVE SOME DOMAIN NAME SET INGRESS.USEDOMAINNAME to true
  usedomainname: false
  hosts:
    - reportportal.k8.com
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/x-forwarded-prefix: /$1
    nginx.ingress.kubernetes.io/proxy-body-size: 128m
    nginx.ingress.kubernetes.io/proxy-buffer-size: 512k
    nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
    nginx.ingress.kubernetes.io/proxy-busy-buffers-size: 512k
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "8000"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "4000"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "4000"

# node selector for all components, if any
nodeSelector:
  enabled: false
  selector:
    reportportal: true


# RBAC is required for service-index in order to collect status/info over all services
rbac:
  create: true
  serviceAccount:
      create: true
      name: reportportal

rp:
  infoEndpoint: "/info"
  healthEndpoint: "/health"


Install the dependencies:

helm install elasticsearch elastic/elasticsearch --set replicas=1 -f value-elastic.yaml

NAME: elasticsearch
LAST DEPLOYED: Mon Jul  5 15:23:42 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=default -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
  $ helm test elasticsearch
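
Optionally, cluster health can also be checked directly (a sketch; reuses the chart's default service name):

kubectl port-forward --namespace=default svc/elasticsearch-master 9200:9200 &
curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'
# a healthy single-node cluster typically reports "status": "green" or "yellow"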

helm install minio bitnami/minio

NAME: minio
LAST DEPLOYED: Mon Jul  5 15:23:45 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

MinIO(R) can be accessed via port 9000 on the following DNS name from within your cluster:

   minio.default.svc.cluster.local

To get your credentials run:

   export ACCESS_KEY=$(kubectl get secret --namespace default minio -o jsonpath="{.data.access-key}" | base64 --decode)
   export SECRET_KEY=$(kubectl get secret --namespace default minio -o jsonpath="{.data.secret-key}" | base64 --decode)

To connect to your MinIO(R) server using a client:

- Run a MinIO(R) Client pod and append the desired command (e.g. 'admin info'):

   kubectl run --namespace default minio-client \
     --rm --tty -i --restart='Never' \
     --env MINIO_SERVER_ACCESS_KEY=$ACCESS_KEY \
     --env MINIO_SERVER_SECRET_KEY=$SECRET_KEY \
     --env MINIO_SERVER_HOST=minio \
     --image docker.io/bitnami/minio-client:2021.6.13-debian-10-r3 -- admin info minio

To access the MinIO(R) web UI:

- Get the MinIO(R) URL:

   echo "MinIO(R) web URL: http://127.0.0.1:9000/minio"
   kubectl port-forward --namespace default svc/minio 9000:9000

helm install postgresql stable/postgresql

WARNING: This chart is deprecated
NAME: postgresql
LAST DEPLOYED: Mon Jul  5 15:23:49 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
This Helm chart is deprecated

Given the `stable` deprecation timeline (https://github.com/helm/charts#deprecation-timeline), the Bitnami maintained Helm chart is now located at bitnami/charts (https://github.com/bitnami/charts/).

The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc that we've been keeping here these years. Installation instructions are very similar, just adding the _bitnami_ repo and using it during the installation (`bitnami/<chart>` instead of `stable/<chart>`)

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/<chart>           # Helm 3
$ helm install --name my-release bitnami/<chart>    # Helm 2


To update an existing _stable_ deployment with a chart hosted in the bitnami repository you can execute

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm upgrade my-release bitnami/<chart>


Issues and PRs related to the chart itself will be redirected to `bitnami/charts` GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue (https://github.com/helm/charts/issues/20969) created as a common place for discussion.

** Please be patient while the chart is being deployed **

PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:

    postgresql.default.svc.cluster.local - Read/Write connection

To get the password for "postgres" run:

    export POSTGRES_PASSWORD=$(kubectl get secret --namespace default postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)

To connect to your database run the following command:

    kubectl run postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.7.0-debian-10-r9 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host postgresql -U postgres -d postgres -p 5432


To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/postgresql 5432:5432 &
    PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432

helm install rabbitmq bitnami/rabbitmq

NAME: rabbitmq
LAST DEPLOYED: Mon Jul  5 15:23:52 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

Credentials:
    echo "Username      : user"
    echo "Password      : $(kubectl get secret --namespace default rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)"
    echo "ErLang Cookie : $(kubectl get secret --namespace default rabbitmq -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode)"

Note that the credentials are saved in persistent volume claims and will not be changed upon upgrade or reinstallation unless the persistent volume claim has been deleted. If this is not the first installation of this chart, the credentials may not be valid.
This is applicable when no passwords are set and therefore the random password is autogenerated. In case of using a fixed password, you should specify it when upgrading.
More information about the credentials may be found at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases.

RabbitMQ can be accessed within the cluster on port 5672 at rabbitmq.default.svc.

To access it from outside the cluster, perform the following steps:

To Access the RabbitMQ AMQP port:

    echo "URL : amqp://127.0.0.1:5672/"
    kubectl port-forward --namespace default svc/rabbitmq 5672:5672

To Access the RabbitMQ Management interface:

    echo "URL : http://127.0.0.1:15672/"
    kubectl port-forward --namespace default svc/rabbitmq 15672:15672
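
Note that value.yaml above expects user rabbitmq with password password, while this default install created user user with an autogenerated password (see Credentials above). One way to make them match is to set the credentials at install time (a sketch; auth.username/auth.password were the value names in Bitnami rabbitmq charts of this period and may differ across chart versions):

helm install rabbitmq bitnami/rabbitmq \
  --set auth.username=rabbitmq \
  --set auth.password=password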

Get the MinIO ACCESS_KEY and SECRET_KEY:

kubectl get secret --namespace default minio -o jsonpath="{.data.access-key}" | base64 --decode
kubectl get secret --namespace default minio -o jsonpath="{.data.secret-key}" | base64 --decode

kubectl get secret

NAME                                  TYPE                                  DATA   AGE
default-token-gzkt2                   kubernetes.io/service-account-token   3      4d5h
minio                                 Opaque                                3      29m
minio-token-j78sv                     kubernetes.io/service-account-token   3      29m
postgresql                            Opaque                                1      29m
rabbitmq                              Opaque                                2      29m
rabbitmq-token-q857f                  kubernetes.io/service-account-token   3      29m
sh.helm.release.v1.elasticsearch.v1   helm.sh/release.v1                    1      29m
sh.helm.release.v1.minio.v1           helm.sh/release.v1                    1      29m
sh.helm.release.v1.postgresql.v1      helm.sh/release.v1                    1      29m
sh.helm.release.v1.rabbitmq.v1        helm.sh/release.v1                    1      29m

kubectl get all

NAME                                READY   STATUS    RESTARTS   AGE
pod/elasticsearch-master-0          1/1     Running   0          30m
pod/minio-6cbdf9bb5c-2tzrw          1/1     Running   0          30m
pod/postgresql-postgresql-0         1/1     Running   0          30m
pod/rabbitmq-0                      1/1     Running   0          29m
pod/reportportal-migrations-4vmxm   0/1     Error     0          14s
pod/reportportal-migrations-5swrp   0/1     Error     0          18s
pod/reportportal-migrations-vj89c   0/1     Error     0          4s

kubectl logs pod/reportportal-migrations-4vmxm

wait-for-it.sh: waiting 15 seconds for postgresql.default.svc.cluster.local:5432
wait-for-it.sh: postgresql.default.svc.cluster.local:5432 is available after 0 seconds
error: pq: password authentication failed for user "rpuser"
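
The failure can be reproduced outside the migration job to confirm it is a credentials mismatch rather than a connectivity problem (a sketch; reuses the port-forward from the PostgreSQL notes above, with the rpuser/password/reportportal values from value.yaml):

kubectl port-forward --namespace default svc/postgresql 5432:5432 &
PGPASSWORD="password" psql --host 127.0.0.1 -U rpuser -d reportportal -p 5432
# expected to fail with the same "password authentication failed" error
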
hlebkanonik (Contributor) commented:

Hello @patsevanton,

The migration can't find the password: by default, it is taken from a Kubernetes Secret whose name is set by the postgresql.SecretName value. If you specify the password directly in the values file instead, set postgresql.endpoint.cloudservice to true:

postgresql:
  endpoint:
    cloudservice: true
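
To see what would otherwise be read from that Secret, its contents can be inspected directly (a sketch; the postgresql-password key comes from the stable/postgresql install notes above):

kubectl get secret --namespace default postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode
# this is the chart's autogenerated postgres password, not the "password" set for rpuser in value.yaml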
