Before Starting Your LA Installation
You have to choose which domain and/or subdomain your LA platform will use, like https://my-living-atlas.org, and how your URLs will look. This depends on whether you want to use subdomains:
- https://collections.my-living-atlas.org
- https://records.my-living-atlas.org
- https://species.my-living-atlas.org
- etc
or /paths, like:
- https://my-living-atlas.org/collections
- https://my-living-atlas.org/records
- https://my-living-atlas.org/species
- etc
Take this decision into account when configuring each module.
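If you go with subdomains, each service hostname needs its own DNS record. A minimal sketch of what such a zone could look like (hypothetical IP and record layout; adapt to your DNS provider):

```
; Hypothetical zone fragment for my-living-atlas.org
my-living-atlas.org.              A      203.0.113.10
collections.my-living-atlas.org.  CNAME  my-living-atlas.org.
records.my-living-atlas.org.      CNAME  my-living-atlas.org.
species.my-living-atlas.org.      CNAME  my-living-atlas.org.
```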
We have a test domain (https://l-a.site) for testing, demos, etc. If you need to use it for your demo, just ask. The idea is to use a small subdomain for each national portal demo (https://es.l-a.site, https://tz.l-a.site).
See our FAQ for more information and current issues.
Configure the DNS of your domain and subdomains with the IPs of your new servers, or:
If for some reason you are not yet using your definitive domain, the DNS is not configured correctly, you have internal IP addresses (with the public IP addresses behind a proxy or firewall), or you want to use a real production DNS name on a test machine, you can configure your hosts file instead. For instance, add entries in /etc/hosts on your computer for your servers:
123.123.123.123 l-a.site
123.123.123.124 spatial.l-a.site
123.123.123.125 collectory.l-a.site
123.123.123.126 data.canadensys.net
You should also add these entries to the /etc/hosts of your virtual servers so they can see each other, so that, for example, demo resolves spatial and vice versa.
You can use this host ansible role for this task.
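Either way, you can verify from each machine that the names resolve as expected, for instance with the sample hostnames above:

```bash
# getent consults /etc/hosts as well as DNS
getent hosts spatial.l-a.site
# quick reachability check
ping -c 1 collectory.l-a.site
```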
Some command line skills are required if you want to deploy and administer a LA node successfully.
You can have a look at the first lessons of this free Google course about how to use a terminal console and basic file/directory commands, and also this lesson about ssh and the following ssh lessons of that course.
If you have problems using ssh, scp or sftp, give this graphical ssh/sftp client a try.
You need a user that ansible will use to access your servers, by default ubuntu. It's recommended to use a passwordless ssh strategy (ssh keys without passphrases, or better, using ssh-agent).
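A minimal setup could look like this (the key path and server name are placeholders):

```bash
# Generate a key pair; with ssh-agent you can still give it a passphrase
ssh-keygen -t ed25519 -f ~/.ssh/some_key
# Install the public key for the ubuntu user on each server
ssh-copy-id -i ~/.ssh/some_key.pub ubuntu@yourServer
# Load the key into the agent so the passphrase is asked only once
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/some_key
```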
This ubuntu user should have passwordless sudo permissions on your servers. Something like:
# cat /etc/sudoers.d/90-ubuntu
# User rules for ubuntu
ubuntu ALL=(ALL) NOPASSWD:ALL
You can check your access from your launch server with something like:
ssh ubuntu@yourServer sudo ls /etc
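You can also validate the syntax of the sudoers fragment with the standard visudo check mode:

```bash
# -c checks syntax, -f selects the file to check
sudo visudo -cf /etc/sudoers.d/90-ubuntu
```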
You can configure ~/.ssh/config for easy access to your servers and subdomains. An example for the "l-a.site" domain:
Host l-a.site bie.l-a.site biocache-ws.l-a.site biocache.l-a.site collectory.l-a.site images.l-a.site lists.l-a.site logger.l-a.site regions.l-a.site index.l-a.site
IdentityFile ~/.ssh/some_key
User ubuntu
Host spatial.l-a.site auth.l-a.site
IdentityFile ~/.ssh/some_key
User ubuntu
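With that configuration in place, a plain ssh to any of those hostnames picks up the right key and user:

```bash
ssh collectory.l-a.site   # logs in as ubuntu using ~/.ssh/some_key
```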
Test server connectivity and local name resolution.
If you are using real DNS domains and/or subdomains, check that your VMs have the correct hostnames and are reachable from each other. If not, you will have to configure /etc/hosts on your local machine and on your test VMs with the test hostnames and addresses, like:
123.123.123.123 tanbif.or.tz collectory.tanbif.or.tz records.tanbif.or.tz biocache.tanbif.or.tz images.tanbif.or.tz lists.tanbif.or.tz regions.tanbif.or.tz spatial.tanbif.or.tz auth.tanbif.or.tz index.tanbif.or.tz
Note: this is only a sample; adjust it for your site, services and URLs.
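A small loop can test ssh connectivity and name resolution for all services at once (a sketch using the sample hostnames above; adjust the list and domain for your site):

```bash
for h in collectory records biocache images lists regions spatial auth index; do
  ssh ubuntu@$h.tanbif.or.tz hostname || echo "FAILED: $h"
done
```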
If you want to use SSL on your site (recommended), request or generate your certificates via Let's Encrypt before deploying your services, so they start correctly.
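For example, with certbot installed and your DNS already pointing at the server, a standalone request could look like this (hostnames are placeholders; --standalone needs port 80 free, so run it before the webserver is deployed, or use the nginx plugin later):

```bash
sudo certbot certonly --standalone -d l-a.site -d collectory.l-a.site -d spatial.l-a.site
```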
On your future solr server, set appropriate limits in /etc/security/limits.conf, like:
solr hard nofile 65535
solr soft nofile 65535
solr hard nproc 65535
solr soft nproc 65535
More info in Taking Solr to Production.
Also, if you are deploying a production site, take into account these other limits and production tips for cassandra. The Cassandra deb package configures this via /etc/security/limits.d/cassandra.conf.
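You can check the effective limits of a service user; note that limits.conf is applied at login via PAM, so depending on your configuration a real login session may be needed for the values to show up:

```bash
ulimit -Sn   # soft open-file limit of the current session
ulimit -Hn   # hard open-file limit
# or inspect a running process directly, e.g. the newest solr process:
cat /proc/$(pgrep -u solr -n)/limits
```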
The NGINX webserver configuration is assembled from fragments on the server, which are combined into a single configuration file for each DNS hostname.
The nginx_vhost_fragments_to_clear ansible variable is a list that should contain all of the hostnames whose nginx configurations need to be refreshed each time ansible is run.
For example:
nginx_vhost_fragments_to_clear=["service.l-a.site"]
If you are not able to set it because you are deploying multiple apps to the same nginx vhost, you will need to specify the override variable to use the ansible scripts in ala-install:
nginx_vhost_fragments_to_clear_no_warning=True
If you override the warning, be aware that you will likely need to clear the /etc/nginx/vhost_fragments directory manually if and when you see errors in the nginx configuration caused by old fragments.
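When that happens, a manual cleanup could look like this (re-run your ansible playbook to regenerate the fragments before reloading):

```bash
sudo rm -rf /etc/nginx/vhost_fragments/*
# after ansible has regenerated the fragments:
sudo nginx -t && sudo systemctl reload nginx
```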