This is the code base for the University of Alberta Libraries' discovery platform, based on Project Blacklight.
- Depends on Ruby 2.5.x
- Depends on Java (for SolrMarc and the ingestion scripts)
- Depends on an instance of Solr with this configuration
- If you wish to use Docker for the datastores, install Docker and docker-compose first.
- Clone this repository:

  ```
  git clone https://github.com/ualbertalib/discovery
  ```

- Run:

  ```
  docker-compose -f docker-compose.lightweight.yml up -d
  ```

- Run:

  ```
  bundle install
  ```

- Run (Java must be installed for the ingest tasks):

  ```
  bundle exec rake db:setup
  bundle exec rake ingest[database_test_set] && bundle exec rake ingest[sfx_test_set] && bundle exec rake ingest[symphony_test_set]
  ```

  If you're looking to use a production-like dataset, or to avoid the hassle of running Docker and performing these ingest tasks:

  ```
  export SOLR_URL=http://solrcloud-test.library.ualberta.ca:8080/solr/discovery-test
  ```

- Run:

  ```
  bundle exec rails s
  ```

- Point your browser to http://localhost:3000/catalog
## Unit and Acceptance Tests

```
bundle install --without development production
RAILS_ENV=test bundle exec rake db:create
RAILS_ENV=test bundle exec rake db:migrate
bundle exec rake spec
```
## Integration Tests

Integration tests run against http://search-test.library.ualberta.ca/.

- Install the Perl dependencies:

  ```
  cpan WWW::Mechanize && cpan JSON && cpan HTML::TreeBuilder::XPath
  ```

- Fetch the test fixture:

  ```
  wget -O /var/tmp/mobyDick.txt http://www.gutenberg.org/ebooks/2701.txt.utf-8
  ```

  Note: your first visit to Project Gutenberg might give you non-UTF-8 characters when it says, "hello stranger."

- Run the tests:

  ```
  cd test/grabBag
  ./allTests.pl
  ```
- Precompile the assets (this can take several minutes):

  ```
  bundle exec rake assets:precompile
  ```

- Create cron jobs to ingest:

  ```
  bundle exec rake ingest[sfx]
  bundle exec rake ingest[databases]
  ```

  and to clean the session table:

  ```
  bundle exec rake sessions:cleanup
  ```
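The ingest and cleanup tasks above are natural candidates for cron. A hypothetical crontab sketch — the install path, schedule, and `RAILS_ENV` value are placeholders, not this project's actual production configuration:

```
# Hypothetical schedule -- adjust paths and times for your deployment.
# The rake task names are quoted so the shell doesn't glob the brackets.
0 2 * * *  cd /path/to/discovery && RAILS_ENV=production bundle exec rake 'ingest[sfx]'
30 2 * * * cd /path/to/discovery && RAILS_ENV=production bundle exec rake 'ingest[databases]'
0 3 * * 0  cd /path/to/discovery && RAILS_ENV=production bundle exec rake sessions:cleanup
```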
Go here for information about accessing the Discovery UAT instance.
The standard library cataloguing data format is MARC. MARC uses numeric fields to contain bibliographic information in the form of text strings that use a content standard to format the text and, perhaps more importantly, the punctuation. Each MARC field can be subdivided into alphabetical subfields, which generally either a) contain repeated elements or b) subdivide the text string. MARC fields and subfields are often written out as, e.g., 245$a, which means field number 245 (the title), subfield a.
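To make the 245$a notation concrete, here is an illustrative sketch in plain Ruby. The hash is a stand-in for a parsed MARC record, not one of this project's actual data structures, and the values are invented:

```ruby
# A MARC record modeled as a plain hash: field number => subfield code => value.
# Real records would come from a MARC parser; this hash is illustrative only.
record = {
  '245' => { 'a' => 'Moby Dick :', 'b' => 'or, The whale /', 'c' => 'Herman Melville.' },
  '100' => { 'a' => 'Melville, Herman,', 'd' => '1819-1891.' }
}

# "245$a" means: field 245 (title), subfield a -- the title proper.
title  = record['245']['a']
author = record['100']['a']
puts "#{title} by #{author}"
```

Note the trailing punctuation in each subfield: as described above, the content standard governs punctuation as well as text.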
In SolrMarc, the library currently used to index Blacklight data, the mapping of MARC fields occurs here, with more sophisticated data manipulation using BeanShell happening in these scripts. Once the fields have been mapped, they can be designated for search and/or display in the appropriate Solr config file (either schema.xml or solrconfig.xml).
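For illustration, a SolrMarc field mapping lives in an `index.properties`-style file of `solr_field = marc_spec` lines. The Solr field names below are hypothetical examples, not this project's actual schema:

```
# Hypothetical index.properties excerpt -- field names are examples only
id = 001, first
title_display = 245ab
author_display = 100abcd
publisher_t = 260b
```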
To ingest a collection, run:

```
bundle exec rake ingest[collection]
```

where `collection` is mainly 'symphony', 'sfx', 'kule' or 'databases'. See `config/ingest.yml` for other collections. Most collections are expected to be represented by a file in a `./data` directory.
By default the Solr target (`:url`) is set from the `#{Rails.env}` stanza in `config/blacklight.yml`. Alternatively, you can set `SOLR_INGEST_URL` directly:

```
export SOLR_INGEST_URL=http://localhost:8983/solr/your-new-solr-collection
bundle exec rake ingest[collection]
unset SOLR_INGEST_URL # if desired
```
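The precedence described above can be sketched in Ruby. This is an illustration of the lookup order only, not the actual rake task code; the inline hash stands in for `config/blacklight.yml`, and the URLs are placeholders:

```ruby
# Sketch of the precedence only: SOLR_INGEST_URL, when set, wins over
# the per-environment default. URLs below are placeholders.
blacklight_config = {
  'development' => { 'url' => 'http://localhost:8983/solr/discovery' },
  'test'        => { 'url' => 'http://localhost:8983/solr/discovery-test' }
}

env = ENV.fetch('RAILS_ENV', 'development')
solr_url = ENV['SOLR_INGEST_URL'] ||
           blacklight_config.fetch(env, blacklight_config['development']).fetch('url')
puts solr_url
```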