Verify that the demo database is in sync with the production database. #96
The demo database seems to be out of sync with production. We need to verify that the demo database is in sync and operates the same.
Received. Not able to assign myself... should I be a member of the org?

You are a member of the repo now.
Characterizing the issue:
The existing crawler inserts with geometry in the …

This has implications for the porting of the crawler to Python. I'm hoping to replicate the function I see in the Java port (where …).
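For concreteness, a hypothetical sketch of the kind of geometry insert being discussed; the table, column names, and values below are assumptions, not the actual schema:

```shell
# Hypothetical sketch of the kind of insert the crawler performs; the
# nldi_data.feature table and its columns are assumptions, not the real schema.
psql --dbname=nldi --command="
  INSERT INTO nldi_data.feature (identifier, name, shape)
  VALUES ('demo-001', 'example feature',
          ST_SetSRID(ST_MakePoint(-89.4, 43.1), 4326));"
```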
OK... so I'm jammed up here on how the Docker images are used to create the demo database. It looks like the database is just restored from a snapshot taken at some point with the feature table populated. That's the part to re-do. So -- if this repo will be transitioning to another technology (i.e., Python + SQLAlchemy + Alembic) to automate database creation, we can just keep track of this detail for that work. But if we're sticking with LiquiBase et al., then I will need to figure out how to push this change into the existing framework.
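A sketch of what that restore-from-snapshot flow might amount to, assuming a stock Postgres image; the image tag, container name, credentials, and file names are placeholders:

```shell
# Hypothetical sketch: start a Postgres container and restore a pre-built
# snapshot into it. Names, credentials, and paths are assumptions.
docker run -d --name nldi-demo \
           -e POSTGRES_PASSWORD=changeMe -e POSTGRES_DB=nldi \
           postgres:latest
docker exec -i nldi-demo pg_restore -U postgres -d nldi < demo_db.dump
```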
This is probably a topic to bring up with others working on USGS databases. I think LiquiBase is what we are sticking with. Can you just fix up and re-up the snapshot for now, and we can take up automating the demo database at a later time?
Short answer is yes... I can create a snapshot to use in place of the current one. The Dockerfile specifies the location of …

We can either create a new release (1.0.1, say) or overwrite the existing release. I am not finding any documentation in the repo from Ethan about the workflow for these artifacts. Will need to do some digging -- so while I think the final answer is going to be easy, it likely won't be fast.
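For illustration only, one way an image build could fetch such a release artifact; the URL, version, and paths below are placeholders, not the repo's actual values:

```shell
# Hypothetical: fetch a tagged release artifact at image-build time.
# The URL, release version, and target path are placeholders.
curl -fsSL -o /tmp/demo_db.dump \
  "https://example.com/nldi-db/releases/download/1.0.1/demo_db.dump"
# A custom-format dump would then be restored with pg_restore, e.g.:
# pg_restore -U postgres -d nldi /tmp/demo_db.dump
```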
Creating the dump is easy enough:
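A minimal sketch of such a command, assuming a local database named nldi; the user, database, and output file names are assumptions:

```shell
# Minimal sketch: dump the demo database in custom format. The database
# name, user, and output file are assumptions, not the project's values.
pg_dump --format=custom --username=postgres --dbname=nldi \
        --file=demo_db.dump
```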
... assuming that the demo database is spruced up the way we want it, and that we can then create a release and artifacts that don't break anything else.
@gzt5142 -- I think I am going to take care of this in #100. My plan is to dump the tables that need to be loaded for the demo database and load them using the same mechanism as you would for the production database. As part of that, I am thinking I'll create one dump file that contains several tables, so that in demo we can load a number of demo feature sources and in prod we can load a different set. Not quite sure how that's going to work yet, but it seems like the right concept. Any thoughts?
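A rough sketch of that pattern, with assumed schema and table names (the real ones may differ):

```shell
# Hypothetical multi-table dump; schema/table names are assumptions.
pg_dump --format=custom --dbname=nldi \
        --table='nldi_data.crawler_source' \
        --table='nldi_data.feature*' \
        --file=demo_feature_sources.dump

# The same artifact could then be restored selectively, so demo and prod
# each load only the tables they need:
pg_restore --dbname=nldi --table=crawler_source demo_feature_sources.dump
```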
Agree conceptually... a …

The NHD pieces are needed to ensure that ingested features will be spatially matched to a comid from NHD. With a subset, we risk not getting a match for newly ingested features.
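To illustrate the match at risk, one plausible form of the spatial lookup; the schema, table, and column names are assumptions:

```shell
# Hypothetical spatial match: nearest NHD flowline comid for an ingested
# point. The nhdplus.nhdflowline_np21 table and shape column are assumptions.
psql --dbname=nldi --command="
  SELECT comid
  FROM nhdplus.nhdflowline_np21
  ORDER BY shape <-> ST_SetSRID(ST_MakePoint(-89.4, 43.1), 4326)
  LIMIT 1;"
```

With only a subset of flowlines loaded, a lookup like this can return a far-away comid, or nothing at all, for features outside the subset's coverage.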
Right -- so the dump that contains several tables could accommodate the variation in requirements you have in your dot points. I'll at least get this roughed in as a pattern so we can iterate on the details.
Should be good with #112