
Kingfisher Process Docker app deploy #418

Merged: 86 commits from kingfisher-process-deploy into main on Jun 14, 2023

Conversation

jpmckinney (Member) commented Mar 25, 2023

closes #200, addresses #402, closes #427

Main:

  • Remove configuration for Kingfisher Process v1
  • @yolile will delete some of the DB users
    • If any others are removed from Pillar data after June 9, remember to run states to delete the SQL users
  • Copy things over for RBC Qlik BI: the ocdskingfishercollect DB (to be renamed kingfisher_collect) and the /home/collect/data directory (to be moved to /home/incremental/data). We would need to comment out the cron jobs until those two have been moved.
  • Once the kingfisher_collect database and /home/incremental/data directory are populated, change cron.absent to cron.present in incremental.sls (see the sketch after this list)
  • @RobHooper to add network configuration, review bullets below, etc.
  • Decide on set -euo pipefail for firewall_reset.sh (see #418 (comment))
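
For reference, a minimal sketch of what the cron.present form could look like in a Salt state (the job name, command, and schedule are placeholders, not the actual entries in incremental.sls):

# Hypothetical job; the real entries are defined in incremental.sls.
# While the migration is pending the state uses cron.absent; once the
# kingfisher_collect database and /home/incremental/data are populated,
# switch it to cron.present:
/home/incremental/run_incremental_update.sh:
  cron.present:
    - user: incremental
    - minute: '15'
    - hour: '4'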

Notes:

  • I haven't checked pgbackrest at all, e.g. postgres.backup in Pillar, salt/postgres/files/pgbackrest, etc. Do we need to change the pgbackrest configuration (kingfisher_common.sls) to avoid conflicts with the old DB backups?
  • I think we'll want to remove ocp04 from Salt, but I'm not sure what we'll do about ocp05 (current replica), since we want to keep it up for up to a year at most, in case we want to access that database.
  • I haven't checked replication at all.

  • Add these steps to create_server.rst under "5. Migrate from the old server" once it is clarified whether they are needed

Once main server is online:

  • Check that the uid of the docker user matches the uid in pillar/kingfisher_process.sls
  • Update tinyproxy IPs in pillar/tinyproxy.sls
  • Re-enable backups (restore configuration commented out in 9fca6f3 and 1332186 and in pillar/private/kingfisher_common.sls)
  • Update firewall IPs in pillar/kingfisher_replica.sls

Once replica server is online:

  • Update replica_ipv4 (and replica_ipv6) in pillar/kingfisher_main.sls
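
A hypothetical illustration (placeholder documentation addresses; the exact nesting inside the pillar file may differ):

# pillar/kingfisher_main.sls (illustrative values only)
replica_ipv4: 192.0.2.10
replica_ipv6: 2001:db8::10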

Once either server is online:

jpmckinney mentioned this pull request May 25, 2023
jpmckinney marked this pull request as ready for review May 27, 2023 04:53

Rename:
- ocdskfs: collect (to match registry, in order to avoid confusion)
- ocdskfp: summarize
- collect: incremental
- ocdskingfishercollect: kingfisher-collect (directory) or kingfisher_collect (database)
- ocdskingfisherprocess: kingfisher_process
- ocdskingfisherscrape: kingfisher-collect
- ocdskingfisherviews: kingfisher-summarize
- OCDS_KINGFISHER_SCRAPE_*: OCDS_KINGFISHER_COLLECT_*

Salt/Pillar:
- Move data support-related commands, Python packages, SQL extensions and reference schema to kingfisher/init.sls
- Create individual users, and remove access to general user #142
- Create .pgpass files for the individual users
- collect: Give deployer user access to the FILES_STORE directory
- process: Set ENABLE_CHECKER
- summarize: Change .env to world-readable (contains no secrets)
- summarize: Set KINGFISHER_SUMMARIZE_LOGGING_JSON in .env
- summarize: summary_view_1_2_research schema was deleted
- Use contents key for .pgpass files (see the sketch below)

Docs:
- Remove Kingfisher Process v1-specific documentation
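
As a sketch of the .pgpass approach above (the username, connection details, and pillar lookup are assumptions, not the actual configuration):

# Hypothetical; the real states create one .pgpass per individual user.
/home/collect/.pgpass:
  file.managed:
    - user: collect
    - group: collect
    - mode: '600'
    - contents: "localhost:5432:kingfisher_collect:collect:{{ salt['pillar.get']('postgres:collect_password') }}"

The .pgpass format is hostname:port:database:username:password, so each file is scoped to the individual user's own database role.
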
jpmckinney (Member, Author) commented:

@RobHooper I've made the changes I wanted to make. There is a corresponding PR for private Pillar.

jpmckinney (Member, Author) commented May 27, 2023

If you upgrade to Salt 3006 for this work, please check whether {% include 'postgres/files/conf/shared.include' %} still works (I opened #432 for a possibly related issue).

ghost commented Jun 6, 2023

"what [to] do about ocp05 (current replica), since we want to keep it up for up to a year at most, in case we want to access that database."

To help reduce the risks of Ubuntu end of life, we can install Ubuntu Pro on ocp05.
This would provide further security updates for most packages, in particular core system software.
The only software this would not cover is Postgres, which depends on its own apt repository for support.

https://ubuntu.com/pro
https://wiki.postgresql.org/wiki/Apt

This would be best if the plan is for ocp05 to be an online "archived" database.

Alternatively, we can turn off the server but keep it available; then we are one server boot away from accessing the data.
This way we won't need a solution like Ubuntu Pro.

pgbackrest
We are best off creating a new stanza in pgbackrest to avoid clashing with the old backups, especially since we are upgrading Postgres versions.

I can create the new stanza and update it everywhere. Perhaps kingfisher-2023 or kingfisherv15 (version number matching Postgres); let me know if you have a preference.
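
For illustration, the new stanza in pgbackrest.conf might look roughly like this (the data path assumes a default PostgreSQL 15 layout; the real path and repository settings come from the existing salt/postgres/files/pgbackrest configuration):

[kingfisher-2023]
pg1-path=/var/lib/postgresql/15/main

The stanza would then be initialised on the server with pgbackrest --stanza=kingfisher-2023 stanza-create.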

jpmckinney (Member, Author) commented:

For Ubuntu Pro, the only server option seems to be "Physical servers with unlimited VMs" (we have no VMs).

Would we need to go with that one, or is "Desktop" the option to take?

I'm happy with either kingfisher-2023 or kingfisherv15. We can go with kingfisher-2023 :)

ghost commented Jun 6, 2023

The "Ubuntu Pro Server with unlimited VMs" option is indeed the best option for us because it is a physical server.
We need Ubuntu Universe repository coverage included because we are using a number of packages installed from there.

It should cost $500 for the year.

jpmckinney (Member, Author) commented:

Okay, I'll get that approved, but let's go ahead with that as the plan.

ghost commented Jun 13, 2023

Happy to be led by you on whether we bring the replica online now or later.

We will lose the failover / disaster recovery that the replica server currently provides.
We have database backups, but recovering from them is a slower process.
The replica server is also significantly quicker to fail over to in the case of hardware failures (i.e. the worst-case scenario).

I am also conscious that the new server is smaller, so load is another factor to consider.

If we are not starting the replica now I will configure pgbackrest before this is merged.

jpmckinney (Member, Author) commented:

Let's start without the replica.

# Daily incremental backup
15 05 * * 0-2,4-6 postgres pgbackrest backup --stanza=kingfisher-2023
# Weekly full backup
15 05 * * 3 postgres pgbackrest backup --stanza=kingfisher-2023 --type=full 2>&1 | grep -v "unable to remove file.*We encountered an internal error\. Please try again\.\|expire command encountered 1 error.s., check the log file for details"
jpmckinney (Member, Author) commented Jun 14, 2023

I added some comments to the file to help check whether this grep still matches text.

I think the 1 needs to be changed to \d+ (in which case grep needs the -E option).

I couldn't find what outputs We encountered an internal error\. Please try again\.

Is the \| meant to match a literal pipe character? (I'm not sure how pgbackrest concatenates messages).

ghost replied:

\| is an OR operator for grep; we are using it to ignore multiple lines that we have seen are false positives.

Checking our records, "expire command encountered" has only ever reported "1" error.
I would like to know when multiple files fail, so \d+ would be too greedy from this point of view.
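
To illustrate with made-up log lines (not real pgbackrest output), the single grep -v drops both known false positives while anything else still passes through:

printf '%s\n' \
  "WARN: unable to remove file '/x': We encountered an internal error. Please try again." \
  "expire command encountered 1 error(s), check the log file for details" \
  "ERROR: something real" |
  grep -v "unable to remove file.*We encountered an internal error\. Please try again\.\|expire command encountered 1 error.s., check the log file for details"
# prints only: ERROR: something real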

jpmckinney (Member, Author) replied:

Aha, sounds good!

jpmckinney (Member, Author) commented:

@RobHooper This PR is good to go. I added one comment, but it can be looked into separately.

jpmckinney merged commit 5d4c501 into main on Jun 14, 2023
jpmckinney deleted the kingfisher-process-deploy branch on June 14, 2023 at 15:48

Successfully merging this pull request may close these issues:

  • Update create server process to not add port knocking if Docker to be added later
  • Set up Pelican server