
Update to SOLR 7 #240

Closed
timrobertson100 opened this issue Feb 27, 2018 · 10 comments

Comments

@timrobertson100
Contributor

SOLR 7 is out, and should be a relatively easy migration.

@ansell
Contributor

ansell commented Apr 27, 2018

@mskyttner

We needed to do two things when migrating from Solr 6 to Solr 7.3 on Alpine: we upgraded the JTS spatial lib to jts-core-1.15.0.jar and we also changed the Solr configs. After this it (so far) seems to run like before.

This is what we did in more detail:

Upgrade spatial lib

For the spatial lib, we had to use a specific version of the JTS lib, /opt/solr/server/solr-webapp/webapp/WEB-INF/lib/jts-core-1.15.0.jar, and we had to change any geohash fieldType Solr configs to use spatialContextFactory="JTS" instead of spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory".
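As a sketch of that fieldType change (the field name and the tuning attributes here are illustrative, not taken from our actual schema; only the spatialContextFactory value is the point):

```xml
<!-- Before (Solr 6): fully-qualified spatial4j factory class -->
<fieldType name="geohash" class="solr.SpatialRecursivePrefixTreeFieldType"
           spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
           geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers"/>

<!-- After (Solr 7): the short "JTS" alias, resolved against the
     jts-core jar dropped into WEB-INF/lib -->
<fieldType name="geohash" class="solr.SpatialRecursivePrefixTreeFieldType"
           spatialContextFactory="JTS"
           geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers"/>
```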

Config changes in schema.xml and solrconfig.xml

We had to remove any strings from schema.xml that look like this:

<defaultSearchField>text</defaultSearchField>
<solrQueryParser defaultOperator="OR"/>

Instead, put those configs in a requestHandler section in solrconfig.xml:

<requestHandler name="/select" class="solr.SearchHandler">
    <!-- default values for query parameters can be specified, these
         will be overridden by parameters in the request
      -->
    <lst name="defaults">
        <str name="echoParams">explicit</str>
        <int name="rows">10</int>
        <str name="df">text</str>
        <str name="q.op">OR</str>
    </lst>
</requestHandler>

The same applies for AND operators.
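For example (a minimal sketch, assuming the schema previously declared the AND default the same way as the OR example above), a schema.xml containing `<solrQueryParser defaultOperator="AND"/>` would become this default in the requestHandler:

```xml
<lst name="defaults">
    <str name="df">text</str>
    <str name="q.op">AND</str>
</lst>
```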

Details here:

bioatlas/ala-docker@6599b00#diff-71151dac089a42aaa2630c681b677488
bioatlas/ala-docker@6599b00#diff-98c95e7efb9644f5c962d1bc0db22a45

@djtfmartin
Member

Thanks @mskyttner - this is great

@ansell
Contributor

ansell commented Jan 23, 2019

The continued use of Solr-6 has created a security incident for the ALA and possibly other atlases that haven't picked up the Solr-7 patches. This should not have been rated as "Priority-low" and "Enhancement" given the flaw was known and intentionally ignored.

@timrobertson100
Contributor Author

Can you elaborate on the security incident please @ansell (here or in private comms)?

> This should not have been rated as "Priority-low" and "Enhancement" given the flaw was known and intentionally ignored.

When those tags were applied it was reasonable to do so (Solr 6 was still actively released). Solr 6 is currently designated as LTS, so in theory, though perhaps not in practice, one should be OK still running on 6. It might be worth reporting on the user@ solr list given it is declared LTS.

@djtfmartin
Member

thanks @timrobertson100

This is from an old email nearer the time:

> Just a bit of investigation on the SOLR security issue, I think it was fixed for 6.6.x
> http://archive.apache.org/dist/lucene/solr/6.6.3/changes/Changes.html#v6.6.2.bug_fixes
> Assuming this is the issue of concern:
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201710.mbox/%3CCAJEmKoC%2BeQdP-E6BKBVDaR_43fRs1A-hOLO3JYuemmUcr1R%2BTA%40mail.gmail.com%3E
> The last release of 6.6.x was on 17th May (6.6.4), so it's relatively current.

I think at this stage we decided to lower the priority because it was deemed to be fixed in 6.6.4.

I think the only thing holding us back from using SOLR 7 is the actual task of upgrading the cluster. I've been testing locally with SOLR 7 for some time with no issues.

@ansell
Contributor

ansell commented Jan 24, 2019

@timrobertson100 The definition of "LTS" that Solr uses is different from the way it is used in other projects:

http://lucene.472066.n3.nabble.com/Solr-LTS-and-EOL-td4403843.html

> Problems in 6.x must be problems in the current 6.x release, currently
> 6.6.5, and they must be either MAJOR bugs with no workaround, or a
> problem that is extremely trivial to fix -- a patch that is very
> unlikely to introduce NEW bugs. If a new 6.x version is released, it
> will be a new point release on the last minor version -- 6.6.x.
>
> When 8.0 gets released, 6.x is dead and the latest minor release branch
> for 7.x goes to maintenance mode. There is no specific date planned for
> any release. A release is made when one of the committers decides it's
> time and volunteers to be the release manager.

Solr-6 has been in stasis since Solr-7 was released; a bug would need a certain level of impact to be fixed in it, and as soon as 8 is released no bugs will be fixed for it at all. At that point, Solr-7 will become the version that only gets major bug fixes.

The Solr-6 bug in this case allowed a solr configuration file to be written, which has been fixed in Solr-7, but has to be explicitly disabled in Solr-6: AtlasOfLivingAustralia/ala-install#231

@djtfmartin
Member

I'm going to close this issue and create an issue in the ala-infrastructure repo to upgrade our SOLR cluster to 7.
I am not aware of any issues that would mean the current code base won't work with SOLR 7, having tested locally with SOLR 7.3.1. We need to set it up in a test environment and then upgrade production.

@vjrj
Contributor

vjrj commented Mar 28, 2019

@ansell
Contributor

ansell commented Mar 28, 2019

If you have recently run the solr-6/solr-7/solrcloud/tomcat ansible roles you should have picked up the workarounds for this.

The ALA will be on Solr-6 until there are resources available to setup a test cluster, but others should be able to use Solr-7 as it has been tested by a few other organisations already.

In addition to those, the ALA have a private ufw firewall ansible role that we add to servers to add another level of workaround. That role hasn't been added to ala-install because of a policy of not adding server management roles where others may want to do them differently.

It is basically the following if you want to reuse it:

- name: install ufw
  apt: name=ufw state=latest
  when: ansible_os_family == "Debian" and ufw_enabled | bool == True

# UFW will keep rules from previous cases unless we reset it first
- name: Reset UFW
  ufw:
    state: reset
  when: ufw_enabled | bool == True

# Uniform logging level of low
- name: Set logging
  ufw:
    logging: low
  when: ufw_enabled | bool == True

- name: setup ufw SSH (port 22) rule
  ufw:
    rule: allow
    port: 22
    proto: tcp
  when: ufw_enabled | bool == True and ufw_ssh | bool == True
  notify:
    - restart ufw

- name: setup ufw HTTPS (port 443) rule
  ufw:
    rule: allow
    port: 443
    proto: tcp
  when: ufw_enabled | bool == True and ufw_https | bool == True
  notify:
    - restart ufw

- name: setup ufw HTTP (port 80) rule
  ufw:
    rule: allow
    port: 80
    proto: tcp
  when: ufw_enabled | bool == True and ufw_http | bool == True
  notify:
    - restart ufw

- name: Reenable ufw
  ufw:
    state: enabled
  when: ufw_enabled | bool == True
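For completeness, a sketch of how the tasks above could be wired into a playbook. The variable names (ufw_enabled, ufw_ssh, ufw_http, ufw_https) and the "restart ufw" handler name come from the tasks themselves; the file name ufw.yml, host group, and handler implementation are assumptions for illustration:

```yaml
# Hypothetical playbook wiring for the ufw tasks above (saved as ufw.yml)
- hosts: solr_servers
  become: true
  vars:
    ufw_enabled: true
    ufw_ssh: true
    ufw_http: true
    ufw_https: true
  tasks:
    - import_tasks: ufw.yml
  handlers:
    # Assumed handler matching the notify entries in the tasks;
    # restarting the ufw service re-applies the rule set
    - name: restart ufw
      service:
        name: ufw
        state: restarted
```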
