3648 Change "DVN" to default value "producer" and only write the firs… #7094

Closed
73 commits
b8d268a
The (very limited) changes that went into the application to accommod…
landreev May 29, 2020
8b1765a
components of the standalone zipper (#6505).
landreev May 29, 2020
e3973d1
handling of folders added to the zipper;
landreev Jun 2, 2020
ad1787a
cosmetic (#6505)
landreev Jun 2, 2020
1dc597b
The modifications allowing the use of the "custom zipper" with the AP…
landreev Jun 16, 2020
ddfc88c
uncommented the line that cleans the request table, on the service ex…
landreev Jun 23, 2020
5402e07
Merge branch 'develop' into 6505-optimize-zip-downloads
landreev Jun 23, 2020
aa923ba
a release note for the "zipper tool". (#6505)
landreev Jun 23, 2020
5d27982
added a section on the zipper service to the "installation/advanced" …
landreev Jun 23, 2020
c99fa60
adding new setting to release notes
djbrooke Jun 24, 2020
c10b516
Public ORCID login is available.
felker13 Jun 23, 2020
8dfe4c4
Update scripts/zipdownload/README.md
landreev Jun 26, 2020
46584da
Update scripts/zipdownload/src/main/java/edu/harvard/iq/dataverse/cus…
landreev Jun 26, 2020
5aaaff5
Better/safer handling of database queries (#6505)
landreev Jun 26, 2020
48a56df
Merge branch '6505-optimize-zip-downloads' of https://github.com/IQSS…
landreev Jun 26, 2020
3eb3976
added a line about the Apache configuration to the installation instr…
landreev Jun 26, 2020
1cd8629
line breaks in the readme (#6505)
landreev Jun 26, 2020
69297fb
small addition to the guide on installation (#6505)
landreev Jun 26, 2020
6e2e396
documents the zipper setting. (#6505)
landreev Jun 26, 2020
72394a4
fixes "original" always being true (#6505)
landreev Jun 26, 2020
96c3708
removed unnecessary repos from pom.xml; a few more words in the advan…
landreev Jun 26, 2020
e01c213
Update doc/sphinx-guides/source/installation/advanced.rst
landreev Jun 26, 2020
9e42aec
Update scripts/zipdownload/README.md
landreev Jun 26, 2020
6100ed6
style/grammar #6505
landreev Jun 26, 2020
aaaa035
Update scripts/zipdownload/README.md
landreev Jun 26, 2020
c5cca50
Update scripts/zipdownload/README.md
landreev Jun 26, 2020
1d4b83f
Update scripts/zipdownload/README.md
landreev Jun 26, 2020
d34ecca
typo
pdurbin Jun 26, 2020
55c2c89
Fix for NPE in test.
felker13 Jun 29, 2020
8da201b
Oauth documentation is extended with ORCID public API option.
felker13 Jun 29, 2020
9d7c843
remove cost, link to ORCID APIs public, member #7025
pdurbin Jun 29, 2020
0bcd871
Merge pull request #1 from IQSS/7025-orcid-docs
felker13 Jun 30, 2020
c1a6126
add download all buttons under access button #6118
pdurbin Jun 30, 2020
3f4ac7f
add size to "download all" link #6118
pdurbin Jul 1, 2020
054f249
add btn-download style class for analytics #6118
pdurbin Jul 2, 2020
140ba41
move ZIP to bundle #6118
pdurbin Jul 2, 2020
427c883
stop using selectedFiles field #6118
pdurbin Jul 7, 2020
1957776
don't double count tabular files in size #6118
pdurbin Jul 7, 2020
8063067
Merge branch 'develop' into orcid-public-api
poikilotherm Jul 8, 2020
5a98e64
Refactor userEndpoint checks for Public ORCID scope. #7025
poikilotherm Jul 8, 2020
9475838
Convert ORCID OAuth2 provider test from JUnit4 to JUnit5. #7025
poikilotherm Jul 8, 2020
d588210
Add test case for public ORCID API endpoint to result in scope /authe…
poikilotherm Jul 8, 2020
f580a80
renamed the flyway script. #6505
landreev Jul 8, 2020
e0e0a45
Merge branch 'develop' into 6505-optimize-zip-downloads
landreev Jul 8, 2020
2fcdfac
Merge branch '6505-optimize-zip-downloads' of https://github.com/IQSS…
landreev Jul 8, 2020
757a120
added support for multiple file stores (#6505)
landreev Jul 9, 2020
7f2bf94
extra words in the doc #6505
landreev Jul 9, 2020
298bfab
Merge pull request #2 from poikilotherm/7025-orcid-public
felker13 Jul 10, 2020
2553845
Fixed the chunking encoding error, that was preventing the download f…
landreev Jul 14, 2020
901fe6f
Fix for #7060
qqmyers Jul 14, 2020
2fa6a9f
remove use of generic WebApplicationException, add 500 exceptionhandler
qqmyers Jul 15, 2020
70ba37c
Merge remote-tracking branch 'IQSS/develop' into IQSS/7060
qqmyers Jul 15, 2020
2e25e0e
#7092 sphinx-build fail on warning
donsizemore Jul 16, 2020
115353c
prevent orig file size from being added to tabular files #6118
pdurbin Jul 16, 2020
7e79f1b
use version to determine if tabular download #6118
pdurbin Jul 16, 2020
5c751a4
3648 Change "DVN" to default value "producer" and only write the firs…
JingMa87 Jul 16, 2020
875374f
Merge pull request #6986 from IQSS/6505-optimize-zip-downloads
kcondon Jul 16, 2020
c13a5cc
switch to smaller, simpler download size method #6118
pdurbin Jul 16, 2020
174ec43
remove unused imports #6118
pdurbin Jul 16, 2020
c9ec158
Merge pull request #7085 from GlobalDataverseCommunityConsortium/IQSS…
kcondon Jul 17, 2020
68ee97b
Update FinalizeDatasetPublicationCommand.java
scolapasta Jul 17, 2020
7062147
Update SolrIndexServiceBean.java
scolapasta Jul 17, 2020
6de6fa4
Update SolrIndexServiceBean.java
scolapasta Jul 17, 2020
14be6c4
Update SolrIndexServiceBean.java
scolapasta Jul 17, 2020
4417a02
make ec2-create-instance.sh downloadable #7099
pdurbin Jul 17, 2020
e26b366
Merge pull request #7093 from OdumInstitute/7092_sphinx_fail_on_warning
kcondon Jul 17, 2020
f64fcff
Merge pull request #7096 from IQSS/6865-index-fix1
kcondon Jul 17, 2020
d3f1f6d
Merge pull request #7047 from IQSS/6118-download
kcondon Jul 20, 2020
f5104d1
Merge pull request #7025 from dsd-sztaki-hu/orcid-public-api
kcondon Jul 20, 2020
941d17d
Merge pull request #7100 from IQSS/7099-docs
kcondon Jul 20, 2020
fe3b901
3648 Change "DVN" to default value "producer" and only write the firs…
JingMa87 Jul 16, 2020
6942c90
3648 Change value to archive.
JingMa87 Jul 22, 2020
40b88f4
Merge remote-tracking branch 'origin/3648-fix-ddi-export' into 3648-f…
JingMa87 Jul 22, 2020
30 changes: 30 additions & 0 deletions doc/release-notes/6505-zipdownload-service.md
@@ -0,0 +1,30 @@
### A multi-file, zipped download optimization

In this release we are offering an experimental optimization for the
multi-file, download-as-zip functionality. If this option is enabled,
instead of enforcing size limits, we attempt to serve all the files
that the user requested (and that they are authorized to download),
but the request is redirected to a standalone zipper service running
as a CGI executable. This moves these potentially long-running jobs
completely outside the Application Server (Payara) and prevents
service threads from becoming locked while serving them. Since zipping
is also a CPU-intensive task, it is possible to run this service on a
different host system, thus freeing the cycles on the main Application
Server. (The system running the service needs access to the database,
as well as to the storage filesystem and/or S3 bucket.)

Please consult the scripts/zipdownload/README.md in the Dataverse 5
source tree.

The components of the standalone "zipper tool" can also be downloaded
here:
https://github.com/IQSS/dataverse/releases/download/v5.0/zipper.zip
(the plan is to build the executable and add it to the v5 release
files on GitHub - L.A.)

## New JVM Options and DB Options

### New DB Option CustomZipDownloadServiceUrl

If defined, this is the URL of the zipping service outside the main Application Server to which zip download requests should be directed (instead of ``/api/access/datafiles/``).
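
For example, to enable the redirect to a zipper installed on the same host (this is the same curl command documented in the Installation Guide):

    curl -X PUT -d '/cgi-bin/zipdownload' http://localhost:8080/api/admin/settings/:CustomZipDownloadServiceUrl
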
2 changes: 1 addition & 1 deletion doc/sphinx-guides/Makefile
@@ -2,7 +2,7 @@
#

# You can set these variables from the command line.
SPHINXOPTS =
SPHINXOPTS = -W
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
8 changes: 8 additions & 0 deletions doc/sphinx-guides/source/_static/installation/files/root/auth-providers/orcid-public.json
@@ -0,0 +1,8 @@
{
"id":"orcid-public",
"factoryAlias":"oauth2",
"title":"ORCID",
"subtitle":"",
"factoryData":"type: orcid | userEndpoint: https://pub.orcid.org/v2.1/{ORCID}/person | clientId: FIXME | clientSecret: FIXME",
"enabled":true
}
4 changes: 3 additions & 1 deletion doc/sphinx-guides/source/developers/deployment.rst
@@ -82,7 +82,9 @@ Download and Run the "Create Instance" Script

Once you have done the configuration above, you are ready to try running the "ec2-create-instance.sh" script to spin up Dataverse in AWS.

Download :download:`ec2-create-instance.sh<https://raw.githubusercontent.com/GlobalDataverseCommunityConsortium/dataverse-ansible/master/ec2/ec2-create-instance.sh>` and put it somewhere reasonable. For the purpose of these instructions we'll assume it's in the "Downloads" directory in your home directory.
Download `ec2-create-instance.sh`_ and put it somewhere reasonable. For the purpose of these instructions we'll assume it's in the "Downloads" directory in your home directory.

.. _ec2-create-instance.sh: https://raw.githubusercontent.com/GlobalDataverseCommunityConsortium/dataverse-ansible/master/ec2/ec2-create-instance.sh

To run it with default values you just need the script, but you may also want a current copy of the ansible `group vars <https://raw.githubusercontent.com/GlobalDataverseCommunityConsortium/dataverse-ansible/master/defaults/main.yml>`_ file.
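
For example, assuming the script was saved to your Downloads directory, a minimal run with default values might look like this::

    cd ~/Downloads
    chmod +x ec2-create-instance.sh
    ./ec2-create-instance.sh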

48 changes: 48 additions & 0 deletions doc/sphinx-guides/source/installation/advanced.rst
@@ -35,3 +35,51 @@ If you have successfully installed multiple app servers behind a load balancer y
You would repeat the steps above for all of your app servers. If users seem to be having a problem with a particular server, you can ask them to visit https://dataverse.example.edu/host.txt and let you know what they see there (e.g. "server1.example.edu") to help you know which server to troubleshoot.

Please note that :ref:`network-ports` under the Configuration section has more information on fronting your app server with Apache. The :doc:`shibboleth` section talks about the use of ``ProxyPassMatch``.

Optional Components
-------------------

Standalone "Zipper" Service Tool
++++++++++++++++++++++++++++++++

As of Dataverse v5.0 we offer an experimental optimization for the
multi-file, download-as-zip functionality. If this option
(``:CustomZipDownloadServiceUrl``) is enabled, instead of enforcing
the size limit on multi-file zipped downloads (as normally specified
by the option ``:ZipDownloadLimit``), we attempt to serve all the
files that the user requested (and that they are authorized to
download), but the request is redirected to a standalone zipper
service running as a cgi-bin executable under Apache. This moves these
potentially long-running jobs completely outside the Application
Server (Payara) and prevents worker threads from becoming locked while
serving them. Since zipping is also a CPU-intensive task, it is
possible to run this service on a different host system, freeing the
cycles on the main Application Server. (The system running the service
needs access to the database, as well as to the storage filesystem
and/or S3 bucket.)

Please consult the scripts/zipdownload/README.md in the Dataverse 5
source tree for more information.

To install: you can follow the instructions in the file above to build
``ZipDownloadService-v1.0.0.jar``. It will also be available pre-built
as part of the Dataverse release on GitHub. Copy it, together with the
shell script scripts/zipdownload/cgi-bin/zipdownload, to the cgi-bin
directory of the chosen Apache server (``/var/www/cgi-bin`` is the
standard location).

Make sure the shell script (``zipdownload``) is executable, and edit it
to configure the database access credentials. Do note that the
executable does not need access to the entire Dataverse database. A
security-conscious admin can create a dedicated database user with
access to just one table: ``CUSTOMZIPSERVICEREQUEST`` (see the sketch
below).
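
For example, a sketch of creating such a restricted user (untested; the role name and password are placeholders, and the table name assumes PostgreSQL's default lower-case folding)::

    sudo -u postgres psql dvndb <<'EOF'
    CREATE ROLE zipper LOGIN PASSWORD 'xxxxx';
    GRANT SELECT, DELETE ON customzipservicerequest TO zipper;
    EOF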

You may need to make extra Apache configuration changes to make sure ``/cgi-bin/zipdownload`` is accessible from the outside.
For example, if this is the same Apache that's in front of your Dataverse Payara instance, you will need to add another pass-through statement to your configuration:

``ProxyPassMatch ^/cgi-bin/zipdownload !``

Test this by accessing it directly at ``<SERVER URL>/cgi-bin/zipdownload``. You should get a ``404 No such download job!`` response. If instead you are getting an "internal server error", this may be an SELinux issue; try ``setenforce Permissive``. If you are getting a generic Dataverse "not found" page, review the ``ProxyPassMatch`` rule you have added.
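
For example (the hostname below is a placeholder)::

    curl -i https://dataverse.example.edu/cgi-bin/zipdownload

Expect an HTTP 404 status with ``No such download job!`` in the body.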

To activate in Dataverse::

curl -X PUT -d '/cgi-bin/zipdownload' http://localhost:8080/api/admin/settings/:CustomZipDownloadServiceUrl

13 changes: 13 additions & 0 deletions doc/sphinx-guides/source/installation/config.rst
@@ -2134,3 +2134,16 @@ Unlike other facets, those indexed by Date/Year are sorted chronologically by de
If you don’t want date facets to be sorted chronologically, set:

``curl -X PUT -d 'false' http://localhost:8080/api/admin/settings/:ChronologicalDateFacets``

:CustomZipDownloadServiceUrl
++++++++++++++++++++++++++++

The location of the "Standalone Zipper" service. If this option is specified, Dataverse will redirect bulk/multi-file zip download requests to that location, instead of serving them internally. See the "Advanced" section of the Installation Guide for information on how to install the external zipper. (This is still an experimental feature, as of v5.0.)

To enable redirects to the zipper installed on the same server as the main Dataverse application:

``curl -X PUT -d '/cgi-bin/zipdownload' http://localhost:8080/api/admin/settings/:CustomZipDownloadServiceUrl``

To enable redirects to the zipper on a different server:

``curl -X PUT -d 'https://zipper.example.edu/cgi-bin/zipdownload' http://localhost:8080/api/admin/settings/:CustomZipDownloadServiceUrl``
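
To disable the redirect and serve zip downloads internally again, delete the setting (this uses the standard admin settings API):

``curl -X DELETE http://localhost:8080/api/admin/settings/:CustomZipDownloadServiceUrl``
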
7 changes: 4 additions & 3 deletions doc/sphinx-guides/source/installation/oauth2.rst
@@ -26,11 +26,11 @@ Identity Provider Side
Obtain Client ID and Client Secret
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Before OAuth providers will release information about their users (first name, last name, etc.) to your Dataverse installation, you must request a "Client ID" and "Client Secret" from them. In the case of GitHub and Google, this is as simple as clicking a few buttons and there is no cost associated with using their authentication service. ORCID and Microsoft, on the other hand, do not have an automated system for requesting these credentials, and it is not free to use these authentication services.
Before OAuth providers will release information about their users (first name, last name, etc.) to your Dataverse installation, you must request a "Client ID" and "Client Secret" from them. In many cases you can use the provider's automated system to request these credentials; if not, contact the provider for assistance.

URLs to help you request a Client ID and Client Secret from the providers supported by Dataverse are provided below. For all of these providers, it's a good idea to request the Client ID and Client secret using a generic account, perhaps the one that's associated with the ``:SystemEmail`` you've configured for Dataverse, rather than your own personal Microsoft Azure AD, ORCID, GitHub, or Google account:

- ORCID: https://orcid.org/content/register-client-application-production-trusted-party
- ORCID: https://orcid.org/content/register-client-application-0
- Microsoft: https://docs.microsoft.com/en-us/azure/active-directory/develop/v1-protocols-oauth-code
- GitHub: https://github.com/settings/applications/new via https://developer.github.com/v3/oauth/
- Google: https://console.developers.google.com/projectselector/apis/credentials via https://developers.google.com/identity/protocols/OAuth2WebServer (pick "OAuth client ID")
@@ -51,7 +51,8 @@ As explained under "Auth Modes" in the :doc:`config` section, available authenti

We will ``POST`` a JSON file containing the Client ID and Client Secret to this ``authenticationProviders`` API endpoint to add another authentication provider (a sample ``POST`` command follows the list below). As a starting point, you'll want to download the JSON template file matching the provider you're setting up:

- :download:`orcid.json <../_static/installation/files/root/auth-providers/orcid.json>`
- :download:`orcid-public.json <../_static/installation/files/root/auth-providers/orcid-public.json>`
- :download:`orcid-member.json <../_static/installation/files/root/auth-providers/orcid-member.json>`
- :download:`github.json <../_static/installation/files/root/auth-providers/github.json>`
- :download:`google.json <../_static/installation/files/root/auth-providers/google.json>`
- :download:`microsoft.json <../_static/installation/files/root/auth-providers/microsoft.json>`
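
For example, a sketch of loading the public ORCID provider (assuming the template has been downloaded to the current directory and its ``FIXME`` values filled in)::

    curl -X POST -H 'Content-type: application/json' --upload-file orcid-public.json http://localhost:8080/api/admin/authenticationProviders
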
104 changes: 104 additions & 0 deletions scripts/zipdownload/README.md
@@ -0,0 +1,104 @@
Work in progress!

to build:

cd scripts/zipdownload
mvn clean compile assembly:single

to install:

install cgi-bin/zipdownload and ZipDownloadService-v1.0.0.jar in your cgi-bin directory (/var/www/cgi-bin is the standard location).

Edit the config lines in the shell script (zipdownload) as needed.

You may need to make extra Apache configuration changes to make sure /cgi-bin/zipdownload is accessible from the outside.
For example, if this is the same Apache that's in front of your Dataverse Payara instance, you'll need to add another pass-through statement to your configuration:

``ProxyPassMatch ^/cgi-bin/zipdownload !``

(see the "Advanced" section of the Installation Guide for some extra troubleshooting tips)

To activate in Dataverse:

curl -X PUT -d '/cgi-bin/zipdownload' http://localhost:8080/api/admin/settings/:CustomZipDownloadServiceUrl

How it works:
=============

(This is an ongoing design discussion - other developers are welcome to contribute)

The goal: to move this potentially long-running task out of the
Application Server. This is the sole focus of this implementation. It
does not attempt to make it faster.

The rationale here is that a zipped download of a large enough number
of large enough files will always be slow. Zipping (compressing)
itself is a fairly CPU-intensive task, and will most frequently be the
bottleneck of the service; although with a slow storage location (S3
or Swift, with a slow link to the share) the bottleneck may be the
speed at which the application accesses the raw bytes. The exact
location of the bottleneck is in a sense irrelevant. On a very fast
system, with the files stored on a very fast local RAID, the
bottleneck for most users will likely shift to the speed of their
internet connection to the server. The bottom line is, downloading
this multi-file compressed stream will take a long time no matter how
you slice it. So this hack addresses it by moving the task outside
Payara, where it's not going to hog any threads.

A quick, somewhat unrelated note: attempting to download a multi-GB
stream over HTTP will always have its own inherent risks. If the
download takes hours or days to complete, it is very likely that it'll
break down somewhere in the middle. Do note that for a zipped download
our users will not be able to utilize `wget --continue`, or any
similar "resume" functionality, because it's impossible to resume
generating a zipped stream from a certain offset.

The implementation is a hack. It relies on direct access to everything - storage locations (filesystem or S3) and the database.

There are no network calls between the application (Dataverse) and the
zipper (an implementation relying on such a call was discussed early
on). Dataverse issues a "job key" and sends the user's browser to the
zipper (to, for example, /cgi-bin/zipdownload?<job key> instead of
/api/access/datafiles/<file ids>). To authorize the zip download for
the "job key", and to inform the zipper which files to zip and where
to find them, the application relies on a database table that the
zipper also has access to. In other words, there is saved state
information associated with each zipped download request. The zipper
may be given limited database access - for example, via a user
authorized to access that one table only. After serving the files, the
zipper removes the database entries. Job records in the database have
time stamps, so, as an added level of cleanup for any records that got
stuck in the db because the corresponding zipper jobs never completed,
the application automatically deletes any records older than 5 minutes
(this can be further reduced) every time the service adds new
records. A paranoid admin may choose to give the zipper read-only
access to the database, and rely on cleanup solely on the application
side.
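
To illustrate, the application-side cleanup boils down to something
like the following (a sketch only - the `issuetime` column name is an
assumption, not taken from the actual schema):

    psql dvndb -c "DELETE FROM customzipservicerequest WHERE issuetime < now() - interval '5 minutes';"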

I have explored ways to avoid maintaining this state information. A
potential implementation we discussed early on, where the application
would make a network call to the zipper before redirecting the user
there, would NOT solve that problem - the state would need to somehow
be maintained on the zipper side. The only truly stateless
implementation would rely on including all the file information WITH
the redirect itself, with some pre-signed URL mechanism to make it
secure. Mechanisms for pre-signing requests are readily available and
simple to implement. We could go with something similar to how S3
pre-signs its access URLs. Jim Myers has already specced out how this
could be done for Dataverse access URLs in a design document
(https://docs.google.com/document/d/1J8GW6zi-vSRKZdtFjLpmYJ2SUIcIkAEwHkP4q1fxL-s/edit#). (Basically,
you hash the product of your request parameters, the issue timestamp,
AND some "secret" - like the user's API key - and send the resulting
hash along with the request. Tampering with any of the parameters, or
trying to extend the life span of the request, becomes impossible,
because it would invalidate the hash.) What stopped me from trying
something like that was the sheer size of the information that would
need to be included with a request, for a potentially long list of
files that need to be zipped. When serving a zipped download from a
page, that would be doable - we could JavaScript together a POST call
that the browser could make to send all that info to the zipper. But
if we want to implement something similar in the API, I felt like I
really wanted to be able to simply issue a quick redirect to a
manageable URL - which with the implementation above is simply
/cgi-bin/zipdownload?<job key>, with the <job key> being just a
16-character hex string in the current implementation.
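
For illustration, the pre-signing scheme described above might look
something like this in shell (a sketch; the parameter names and URL
layout are hypothetical and not part of any current implementation):

    # Sign the request parameters plus a timestamp, with the user's API
    # key as the HMAC secret; the server would recompute and compare.
    FILEIDS="101,102,103"
    TIMESTAMP=$(date +%s)
    APIKEY="usersApiKeyHere"   # the shared "secret"
    SIG=$(printf '%s' "${FILEIDS}${TIMESTAMP}" | openssl dgst -sha256 -hmac "$APIKEY" | awk '{print $NF}')
    echo "/cgi-bin/zipdownload?files=${FILEIDS}&ts=${TIMESTAMP}&sig=${SIG}"
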
11 changes: 11 additions & 0 deletions scripts/zipdownload/cgi-bin/zipdownload
@@ -0,0 +1,11 @@
#!/bin/sh
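
# Classpath where ZipDownloadService-v1.0.0.jar is installed, plus the
# PostgreSQL connection settings ("xxxxx" is a placeholder password).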

CLASSPATH=/var/www/cgi-bin; export CLASSPATH

PGHOST="localhost"; export PGHOST
PGPORT=5432; export PGPORT
PGUSER="dvnapp"; export PGUSER
PGDB="dvndb"; export PGDB
PGPW="xxxxx"; export PGPW

java -Ddb.serverName=$PGHOST -Ddb.portNumber=$PGPORT -Ddb.user=$PGUSER -Ddb.databaseName=$PGDB -Ddb.password=$PGPW -jar ZipDownloadService-v1.0.0.jar
86 changes: 86 additions & 0 deletions scripts/zipdownload/pom.xml
@@ -0,0 +1,86 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>ZipDownloadService</groupId>
<artifactId>ZipDownloadService</artifactId>
<version>1.0.0</version>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<pluginRepositories>
<pluginRepository>
<id>central</id>
<name>Central Repository</name>
<url>https://repo.maven.apache.org/maven2</url>
<layout>default</layout>
<snapshots>
<enabled>false</enabled>
</snapshots>
<releases>
<updatePolicy>never</updatePolicy>
</releases>
</pluginRepository>
</pluginRepositories>
<repositories>
<repository>
<id>central-repo</id>
<name>Central Repository</name>
<url>https://repo1.maven.org/maven2</url>
<layout>default</layout>
</repository>
</repositories>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-bom</artifactId>
<version>1.11.790</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<!-- https://mvnrepository.com/artifact/org.postgresql/postgresql -->
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>42.2.2</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-s3</artifactId>
</dependency>
</dependencies>
<build>
<sourceDirectory>src/main/java</sourceDirectory>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.4</version>
<configuration>
<archive>
<manifest>
<mainClass>edu.harvard.iq.dataverse.custom.service.download.ZipDownloadService</mainClass>
</manifest>
</archive>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<finalName>${project.artifactId}-v${project.version}</finalName>
<appendAssemblyId>false</appendAssemblyId>
</configuration>
</plugin>
</plugins>
</build>
</project>