Analysis refactor gui part7 #2117


Merged

128 commits
3601c29
fix #1505
antgonza Jan 2, 2017
0d6788e
improving some GUI stuff
antgonza Jan 3, 2017
12406cc
improving some GUI stuff - missing lines
antgonza Jan 3, 2017
958fcbe
pull upstream master
antgonza Jan 4, 2017
a57ef23
addressing all comments
antgonza Jan 5, 2017
2ead7a6
ready for review
antgonza Jan 5, 2017
73a78e7
fix #1987
antgonza Jan 16, 2017
e64a22a
Merge pull request #2036 from antgonza/fix-1505
josenavas Jan 16, 2017
0dcae8b
Merge pull request #2047 from antgonza/fix-1987
josenavas Jan 17, 2017
4a5bbbc
initial commit
antgonza Jan 18, 2017
f99975c
requested changes
antgonza Jan 18, 2017
ed899a8
Merge pull request #2049 from antgonza/add-processing-suggestions
josenavas Jan 18, 2017
d508320
fix filter job list
antgonza Jan 18, 2017
025cc1e
Merge pull request #2050 from antgonza/fix-filter-job-list
josenavas Jan 18, 2017
599bcde
Fixing server cert (#2051)
josenavas Jan 19, 2017
d12ccfe
fix get_studies
antgonza Jan 20, 2017
b33983b
flake8
antgonza Jan 20, 2017
b4f1b1f
fix #503
antgonza Jan 20, 2017
62a1b93
fix #2010
antgonza Jan 20, 2017
2e36141
fix #1913
antgonza Jan 21, 2017
e006e20
fix errors
antgonza Jan 21, 2017
c174693
Merge pull request #2052 from antgonza/fix-get_studies
josenavas Jan 23, 2017
131dd6a
Merge pull request #2053 from antgonza/fix-by-blinking
josenavas Jan 23, 2017
ccb55bd
addressing @josenavas comment
antgonza Jan 24, 2017
dfe2e83
flake8
antgonza Jan 24, 2017
15fcceb
Merge pull request #2056 from antgonza/fix-1913
josenavas Jan 24, 2017
7f97f2a
fix #1010
antgonza Jan 26, 2017
9eb9dbb
fix #1066 (#2058)
antgonza Jan 26, 2017
23104d7
addressing @josenavas comments
antgonza Jan 27, 2017
1f1e826
fix #1961
antgonza Jan 27, 2017
19a9dda
fix #1837
antgonza Jan 27, 2017
19889f9
Automatic jobs & new stats (#2057)
antgonza Jan 27, 2017
4e380e0
Merge pull request #2060 from antgonza/fix-1961
wasade Jan 28, 2017
6f0dd71
generalizing this functionality
antgonza Jan 28, 2017
ed9fc65
fix #1816
antgonza Jan 29, 2017
4b19b45
fix #1959
antgonza Jan 30, 2017
d9b41e8
addressing @josenavas comments
antgonza Feb 1, 2017
5ef06ae
addressing @josenavas comments
antgonza Feb 2, 2017
5e3504a
fixing error
antgonza Feb 2, 2017
d10096a
Merge branch 'master' of https://github.com/biocore/qiita into fix-1010
antgonza Feb 2, 2017
661342f
fixed?
antgonza Feb 2, 2017
fcd249b
addressing @josenavas comments
antgonza Feb 3, 2017
f3c1216
Merge pull request #2063 from antgonza/fix-1816
josenavas Feb 3, 2017
a91a6fd
Merge pull request #2064 from antgonza/fix-1959
tanaes Feb 3, 2017
7b9fa6f
addressing @wasade comments
antgonza Feb 3, 2017
33bcbe5
Merge pull request #2059 from antgonza/fix-1010
josenavas Feb 3, 2017
5e4bd9b
Merge branch 'master' of https://github.com/biocore/qiita into fix-1837
antgonza Feb 3, 2017
8bf3d6e
fix flake8
antgonza Feb 3, 2017
7807bac
Merge pull request #2061 from antgonza/fix-1837
josenavas Feb 3, 2017
6360675
generate biom and metadata release (#2066)
antgonza Feb 3, 2017
811b7a7
database changes to fix 969
antgonza Feb 3, 2017
751d4ad
adding delete
antgonza Feb 3, 2017
65a86df
addressing @josenavas comments
antgonza Feb 3, 2017
b1817dd
addressing @ElDeveloper comments
antgonza Feb 4, 2017
18d77e1
duh!
antgonza Feb 4, 2017
01c656c
Merge pull request #2071 from antgonza/fix-969-db
josenavas Feb 6, 2017
53188a6
fix generate_biom_and_metadata_release (#2072)
antgonza Feb 7, 2017
1ab4e3b
Fixing merge conflicts with master
josenavas Feb 8, 2017
1e8332e
Merge branch 'analysis-refactor' of https://github.com/biocore/qiita …
josenavas Feb 9, 2017
cb67d3d
Removing qiita ware code that will not be used anymore
josenavas Feb 9, 2017
5a5127d
Merge branch 'analysis-refactor' of https://github.com/biocore/qiita …
josenavas Feb 9, 2017
0033480
Organizing the handlers and new analysis description page
josenavas Feb 9, 2017
3e3f6e1
fixing timestamp
antgonza Feb 9, 2017
6a20c1b
rm formats
antgonza Feb 9, 2017
a1b3c90
st -> pt
antgonza Feb 9, 2017
3809ad5
Connecting the analysis creation and making interface responsive
josenavas Feb 9, 2017
067f14f
Addressing @antgonza's comments
josenavas Feb 10, 2017
cf4862d
Solving merge conflicts
josenavas Feb 10, 2017
3b07151
Initial artifact GUI refactor
josenavas Feb 10, 2017
a6595a9
Removing unused code
josenavas Feb 10, 2017
6343b49
Merge branch 'analysis-refactor-gui-part2' into analysis-refactor-gui…
josenavas Feb 10, 2017
a3505c2
moving to ISO 8601 - wow :'(
antgonza Feb 13, 2017
c8113ea
fix errors
antgonza Feb 13, 2017
f4835d5
addressing @wasade comments
antgonza Feb 13, 2017
f731768
Adding can_edit call to the analysis
josenavas Feb 14, 2017
7542658
Fixing artifact rest API since not all artifacts have study
josenavas Feb 14, 2017
e0180e8
Adding can_be_publicized call to analysis
josenavas Feb 15, 2017
f55ca5c
Adding QiitaHTTPError to handle errors gracefully
josenavas Feb 15, 2017
1fa4b19
Adding safe_execution contextmanager
josenavas Feb 15, 2017
b61ae87
Fixing typo
josenavas Feb 15, 2017
bb68303
Adding qiita test checker
josenavas Feb 15, 2017
b31a025
Adapting some artifact handlers
josenavas Feb 15, 2017
378d7ff
Fixing merge conflicts
josenavas Feb 15, 2017
444da08
Merge branch 'analysis-refactor-gui-part2' into analysis-refactor-gui…
josenavas Feb 15, 2017
f6b4c46
Abstracting the graph reloading and adding some documentation
josenavas Feb 15, 2017
e9d3af3
Fixing typo
josenavas Feb 15, 2017
69b6412
Merge branch 'analysis-refactor-gui-part3' into analysis-refactor-gui…
josenavas Feb 15, 2017
60cd430
Fixing changing artifact visibility
josenavas Feb 15, 2017
be099cb
Fixing delete
josenavas Feb 15, 2017
819e9a5
Fixing artifact deletion
josenavas Feb 15, 2017
e941fa7
Adding default parameters to the commands
josenavas Feb 15, 2017
d6ebcb4
Fixing processing page
josenavas Feb 15, 2017
6ada2ba
Fixing variable name
josenavas Feb 15, 2017
7d70a38
fixing private/public studies
antgonza Feb 15, 2017
e8ca9db
Changing bdiv metrics to single choice
josenavas Feb 15, 2017
4bf4808
Merge pull request #2080 from antgonza/fix-private_public-stats-page
josenavas Feb 15, 2017
aa68a21
Merge pull request #2075 from antgonza/fix-timestamp
wasade Feb 15, 2017
0c6ffa7
sanbox-to-sandbox
antgonza Feb 15, 2017
586660b
flake8
antgonza Feb 15, 2017
6cdc574
Fixing patch
josenavas Feb 15, 2017
7bae13e
fixing other issues
antgonza Feb 16, 2017
cf801a4
Merge pull request #2082 from antgonza/sanbox-to-sandbox
josenavas Feb 16, 2017
7c2454e
adding share documentation
antgonza Mar 3, 2017
c2eb6ae
psycopg2 <= 2.7
antgonza Mar 3, 2017
aeeac62
psycopg2 < 2.7
antgonza Mar 3, 2017
2795046
Merge pull request #2085 from antgonza/sharing-help
josenavas Mar 3, 2017
a77b040
Various small fixes to be able to run tests on the plugins
josenavas Mar 15, 2017
ff9eda9
Merging with master
josenavas Mar 15, 2017
6145976
Adding private module
josenavas Mar 15, 2017
7153efb
Fixing processing job completion
josenavas Mar 15, 2017
46dd73d
Fixing patch 52
josenavas Mar 16, 2017
08eafaa
Fixing call
josenavas Mar 16, 2017
b1a3e99
Fixing complete
josenavas Mar 17, 2017
c9580b7
small fixes
josenavas Mar 17, 2017
85d4aa7
Solving merge conflicts
josenavas Apr 24, 2017
1ffa231
Solving merge conflicts
josenavas Apr 24, 2017
9e14cc6
Merge branch 'analysis-refactor-gui-part5' into analysis-refactor-gui…
josenavas Apr 24, 2017
4cd34d2
Merge branch 'analysis-refactor-gui-part6' into analysis-refactor-gui…
josenavas Apr 24, 2017
bf33527
Solving merge conflicts
josenavas Apr 25, 2017
3529556
Adding processing handlers
josenavas Apr 25, 2017
ca5a331
Fixing url and bug on processing job workflow
josenavas May 1, 2017
b934d68
Adding the private script runner
josenavas May 1, 2017
fa00d60
Adding is_analysis column to the command
josenavas May 1, 2017
b2ac959
Adding retrieval of commands excluding analysis commands
josenavas May 1, 2017
0326a63
Addressing bug on retrieving information from redis
josenavas May 1, 2017
cd6b61c
Enabling the command register endpoint to provide if the command is a…
josenavas May 4, 2017
cccb1d4
Addressing @antgonza's comments
josenavas May 16, 2017
0a584f3
Addressing @wasade's comments
josenavas May 16, 2017
4 changes: 3 additions & 1 deletion qiita_db/handlers/plugin.py
@@ -104,12 +104,14 @@ def post(self, name, version):
if outputs:
outputs = loads(outputs)
dflt_param_set = loads(self.get_argument('default_parameter_sets'))
analysis_only = self.get_argument('analysis_only', False)

parameters = req_params
parameters.update(opt_params)

cmd = qdb.software.Command.create(
plugin, cmd_name, cmd_desc, parameters, outputs)
plugin, cmd_name, cmd_desc, parameters, outputs,
analysis_only=analysis_only)

if dflt_param_set is not None:
for name, vals in dflt_param_set.items():
19 changes: 19 additions & 0 deletions qiita_db/handlers/tests/test_plugin.py
@@ -88,6 +88,25 @@ def test_post(self):
self.assertEqual(obs.code, 200)
obs = _get_command('QIIME', '1.9.1', 'New Command')
self.assertEqual(obs.name, 'New Command')
self.assertFalse(obs.analysis_only)

# Create a new command that is analysis only
data = {
'name': 'New analysis command',
'description': 'Analysis command added for testing',
'required_parameters': dumps(
{'in_data': ['artifact:["BIOM"]', None]}),
'optional_parameters': dumps({'param1': ['string', 'default']}),
'outputs': dumps({'outtable': 'BIOM'}),
'default_parameter_sets': dumps({'dflt1': {'param1': 'test'}}),
'analysis_only': True
}
obs = self.post('/qiita_db/plugins/QIIME/1.9.1/commands/', data=data,
headers=self.header)
self.assertEqual(obs.code, 200)
obs = _get_command('QIIME', '1.9.1', 'New analysis command')
self.assertEqual(obs.name, 'New analysis command')
self.assertTrue(obs.analysis_only)


class CommandHandlerTests(OauthTestingBase):
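For plugin authors, the new analysis_only flag is just one extra field in the command registration payload. Below is a minimal sketch of registering an analysis-only command against the endpoint exercised by the test above; the host name and token handling are assumptions for illustration, not part of this PR.

from json import dumps

import requests

# Assumed: a valid OAuth token previously obtained from the Qiita server.
headers = {'Authorization': 'Bearer MY-OAUTH-TOKEN'}

data = {
    'name': 'New analysis command',
    'description': 'Analysis command added for testing',
    'required_parameters': dumps({'in_data': ['artifact:["BIOM"]', None]}),
    'optional_parameters': dumps({'param1': ['string', 'default']}),
    'outputs': dumps({'outtable': 'BIOM'}),
    'default_parameter_sets': dumps({'dflt1': {'param1': 'test'}}),
    # Omitting this field keeps the previous behavior (analysis_only=False).
    'analysis_only': True,
}

resp = requests.post(
    'https://qiita.example.org/qiita_db/plugins/QIIME/1.9.1/commands/',
    headers=headers, data=data)
assert resp.status_code == 200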
74 changes: 74 additions & 0 deletions qiita_db/private.py
@@ -0,0 +1,74 @@
# -----------------------------------------------------------------------------
# Copyright (c) 2014--, The Qiita Development Team.
#
# Distributed under the terms of the BSD 3-clause License.
#
# The full license is in the file LICENSE, distributed with this software.
# -----------------------------------------------------------------------------

from json import dumps
from sys import exc_info
from time import sleep
import traceback

import qiita_db as qdb


def build_analysis_files(job):
"""Builds the files for an analysis

Parameters
----------
job : qiita_db.processing_job.ProcessingJob
The processing job with the information for building the files
"""
with qdb.sql_connection.TRN:
params = job.parameters.values
analysis_id = params['analysis']
merge_duplicated_sample_ids = params['merge_dup_sample_ids']
analysis = qdb.analysis.Analysis(analysis_id)
biom_files = analysis.build_files(merge_duplicated_sample_ids)

cmd = qdb.software.Command.get_validator('BIOM')
val_jobs = []
for dtype, biom_fp in biom_files:
validate_params = qdb.software.Parameters.load(
cmd, values_dict={'files': dumps({'biom': [biom_fp]}),
'artifact_type': 'BIOM',
'provenance': dumps({'job': job.id,
'data_type': dtype}),
'analysis': analysis_id})
val_jobs.append(qdb.processing_job.ProcessingJob.create(
analysis.owner, validate_params))

job._set_validator_jobs(val_jobs)

for j in val_jobs:
j.submit()
Member:
Just thinking out loud (not sure if that applies to written stuff, anyway ...) but should we add the sleep directly to the j.submit?

Contributor Author:

I would disagree, because the sleep is only needed when submitting multiple jobs; if you're submitting a single one it is not needed, so there is no reason to stop the processing.

Contributor:

why sleep?

Contributor Author:

In some cases we found that if we programmatically submit multiple jobs to the Torque queue, the resource queues are not updated accordingly and the jobs end up not being spawned fairly across the cluster.

Contributor:

The reason for the delays historically was to spread load on the resource manager and scheduler as they are singleton resources within a shared environment. Generally speaking, the delays were only important for large volumes of submissions (e.g., > 100). I don't think we're doing that here. If there are issues where the scheduler or resource manager are not handling a small number of jobs appropriately, then it suggests either the jobs themselves are flawed or there is a critical issue with the servers handling the requests. Since we routinely hammer these servers outside of Qiita without delays, it suggests the former is the more likely scenario.

If you're not comfortable removing the delay, then at least set it to <= 100ms as 1s is a lifetime.

I wasn't aware Qiita used a library that links to the Torque C API; aren't these programmatic requests actually system calls?

sleep(1)


TASK_DICT = {'build_analysis_files': build_analysis_files}


def private_task(job_id):
"""Complets a Qiita private task

Parameters
----------
job_id : str
The job id
"""
if job_id == 'register':
# We don't need to do anything here if Qiita is registering plugins
return

job = qdb.processing_job.ProcessingJob(job_id)
job.update_heartbeat_state()
task_name = job.command.name

try:
TASK_DICT[task_name](job)
except Exception:
job.complete(False, error="Error executing private task: %s"
% traceback.format_exception(*exc_info()))
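The sleep(1) above is the line debated in the thread. As a sketch of the throttling pattern under discussion, where the delay only applies between submissions of a burst (the 100 ms value follows the reviewer's suggestion; this helper is illustrative, not part of the PR):

from time import sleep

def submit_with_throttle(jobs, delay=0.1):
    """Submit processing jobs, pausing briefly between submissions.

    The delay only matters when several jobs hit the scheduler in a
    burst; a single job can be submitted without any pause.
    """
    for i, job in enumerate(jobs):
        job.submit()
        if i < len(jobs) - 1:  # no need to sleep after the last job
            sleep(delay)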
27 changes: 24 additions & 3 deletions qiita_db/processing_job.py
@@ -504,7 +504,8 @@ def _complete_artifact_definition(self, artifact_data):
else:
# The artifact is uploaded by the user or is the initial
# artifact of an analysis
if job_params['analysis'] is not None:
if ('analysis' in job_params and
job_params['analysis'] is not None):
pt = None
an = qdb.analysis.Analysis(job_params['analysis'])
sql = """SELECT data_type
@@ -567,11 +568,21 @@ def _complete_artifact_transformation(self, artifacts_data):
templates = set()
for artifact in self.input_artifacts:
templates.update(pt.id for pt in artifact.prep_templates)
template = None
analysis = None
if len(templates) > 1:
raise qdb.exceptions.QiitaDBError(
"Currently only single prep template "
"is allowed, found %d" % len(templates))
template = templates.pop()
elif len(templates) == 1:
template = templates.pop()
else:
# In this case we have 0 templates. What this means is that
# this artifact is being generated in the analysis pipeline
# All the artifacts included in the analysis pipeline
# belong to the same analysis, so we can just ask the
# first artifact for the analysis that it belongs to
analysis = self.input_artifacts[0].analysis.id

# Once the validate job completes, it needs to know if it has
# been generated from a command (and how) or if it has been
@@ -592,6 +603,7 @@
cmd, values_dict={'files': dumps(filepaths),
'artifact_type': atype,
'template': template,
'analysis': analysis,
'provenance': dumps(provenance)})
validator_jobs.append(
ProcessingJob.create(self.user, validate_params))
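The template/analysis branching above is the core of this change: a validator job needs either a prep template (study pipeline) or an analysis (analysis pipeline). As a standalone sketch of that resolution, with resolve_validation_source as a hypothetical helper whose attributes mirror the ones in the diff:

def resolve_validation_source(input_artifacts):
    """Return (prep_template_id, analysis_id); exactly one is not None."""
    templates = {pt.id for a in input_artifacts
                 for pt in a.prep_templates}
    if len(templates) > 1:
        raise ValueError("Currently only a single prep template is "
                         "allowed, found %d" % len(templates))
    if templates:
        return templates.pop(), None
    # No prep templates: these artifacts were generated in the analysis
    # pipeline, and all artifacts in an analysis belong to the same
    # analysis, so asking the first one is enough.
    return None, input_artifacts[0].analysis.id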
@@ -1196,7 +1208,16 @@ def _raise_if_not_in_construction(self):
WHERE processing_job_workflow_id = %s"""
qdb.sql_connection.TRN.add(sql, [self.id])
res = qdb.sql_connection.TRN.execute_fetchflatten()
if len(res) != 1 or res[0] != 'in_construction':
# If the above SQL query returns a single element and the value
# is different from 'in_construction', all the jobs in the
# workflow share the same status and it is not 'in_construction',
# hence raise the error. If the query returns more than one value
# (len(res) > 1), the workflow is no longer in construction
# because some jobs have been submitted for processing. Note that
# if the query doesn't return any value, there are no jobs in the
# workflow, which means that the workflow is in construction.
if (len(res) == 1 and res[0] != 'in_construction') or len(res) > 1:
# The workflow is no longer in construction, raise an error
raise qdb.exceptions.QiitaDBOperationNotPermittedError(
"Workflow not in construction")
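The rewritten check is easier to read as a truth table. A minimal sketch of the same decision, assuming res holds the distinct job statuses returned by the query:

def workflow_in_construction(res):
    # []                  -> no jobs yet, still in construction
    # ['in_construction'] -> every job shares that status, still OK
    # anything else       -> at least one job has left construction
    if not res:
        return True
    return len(res) == 1 and res[0] == 'in_construction'

_raise_if_not_in_construction then raises QiitaDBOperationNotPermittedError exactly when this returns False.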
33 changes: 28 additions & 5 deletions qiita_db/software.py
@@ -44,7 +44,8 @@ class Command(qdb.base.QiitaObject):
_table = "software_command"

@classmethod
def get_commands_by_input_type(cls, artifact_types, active_only=True):
def get_commands_by_input_type(cls, artifact_types, active_only=True,
exclude_analysis=True):
"""Returns the commands that can process the given artifact types

Parameters
@@ -70,6 +71,8 @@ def get_commands_by_input_type(cls, artifact_types, active_only=True):
WHERE artifact_type IN %s"""
if active_only:
sql += " AND active = True"
if exclude_analysis:
sql += " AND is_analysis = False"
qdb.sql_connection.TRN.add(sql, [tuple(artifact_types)])
for c_id in qdb.sql_connection.TRN.execute_fetchflatten():
yield cls(c_id)
@@ -191,7 +194,8 @@ def exists(cls, software, name):
return qdb.sql_connection.TRN.execute_fetchlast()

@classmethod
def create(cls, software, name, description, parameters, outputs=None):
def create(cls, software, name, description, parameters, outputs=None,
analysis_only=False):
r"""Creates a new command in the system

The supported types for the parameters are:
@@ -222,6 +226,9 @@ def create(cls, software, name, description, parameters, outputs=None):
outputs : dict, optional
The description of the outputs that this command generated. The
format is: {output_name: artifact_type}
analysis_only : bool, optional
Contributor:

Does false mean the command is in the analysis pipeline and whatever other pipelines exist? Is it important to be able to restrict commands to non-analysis pipelines? Or, rather, is a two-state variable sufficient to describe the different scenarios faced?

Contributor Author:

We currently have two pipelines using the artifacts structure: the study pipeline and the (meta-)analysis pipeline. In my opinion, commands in the study pipeline should be available in the meta-analysis pipeline (my example command was open-ref, although we are no longer adding it to the system), but we do want to keep the analysis pipeline commands out of the study pipeline. Given the current status of the project, and what we have outlined so far for the future of Qiita, I don't foresee adding any other pipeline to the system that will use the artifacts structure as it currently stands.

Contributor:

What is the logical difference between a study command and an analysis command?

If true, then the command will only be available on the analysis
pipeline. Default: False.

Returns
-------
@@ -297,10 +304,10 @@ def create(cls, software, name, description, parameters, outputs=None):
% (software.id, name))
# Add the command to the DB
sql = """INSERT INTO qiita.software_command
(name, software_id, description)
VALUES (%s, %s, %s)
(name, software_id, description, is_analysis)
VALUES (%s, %s, %s, %s)
RETURNING command_id"""
sql_params = [name, software.id, description]
sql_params = [name, software.id, description, analysis_only]
qdb.sql_connection.TRN.add(sql, sql_params)
c_id = qdb.sql_connection.TRN.execute_fetchlast()

@@ -508,6 +515,22 @@ def activate(self):
qdb.sql_connection.TRN.add(sql, [True, self.id])
return qdb.sql_connection.TRN.execute()

@property
def analysis_only(self):
"""Returns if the command is an analysis-only command
Contributor:

Can a command be both an analysis command and "other"? I guess I don't understand what is under the umbrella of "analysis" and what is not.

Contributor Author:

There are currently two options:
(1) The command is available in the analysis pipeline and the study processing pipeline
(2) The command is available only in the analysis pipeline.

I don't think the option "Only on study processing pipeline" should be added, as this could limit the opportunities to run meta-analyses; see my previous comment.


Returns
-------
bool
Whether the command is analysis only or not
"""
with qdb.sql_connection.TRN:
sql = """SELECT is_analysis
FROM qiita.software_command
WHERE command_id = %s"""
qdb.sql_connection.TRN.add(sql, [self.id])
return qdb.sql_connection.TRN.execute_fetchlast()


class Software(qdb.base.QiitaObject):
r"""A software package available in the system
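Taken together, the new column is written once at Command.create time, read back through the analysis_only property, and filtered on by get_commands_by_input_type. A hedged usage sketch (the plugin and parameters setup is elided; names follow the diff and the test above):

import qiita_db as qdb

# Register a command that should only be offered in the analysis pipeline.
cmd = qdb.software.Command.create(
    plugin, 'New analysis command', 'Analysis command added for testing',
    parameters, outputs={'outtable': 'BIOM'}, analysis_only=True)
assert cmd.analysis_only

# Study-pipeline listings exclude it by default ...
study_cmds = list(qdb.software.Command.get_commands_by_input_type(['BIOM']))
# ... while the analysis pipeline can opt back in.
all_cmds = list(qdb.software.Command.get_commands_by_input_type(
    ['BIOM'], exclude_analysis=False))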
7 changes: 6 additions & 1 deletion qiita_db/support_files/patches/52.sql
@@ -49,6 +49,11 @@ ALTER TABLE qiita.analysis ADD logging_id bigint ;
CREATE INDEX idx_analysis_0 ON qiita.analysis ( logging_id ) ;
ALTER TABLE qiita.analysis ADD CONSTRAINT fk_analysis_logging FOREIGN KEY ( logging_id ) REFERENCES qiita.logging( logging_id ) ;

-- Alter the software command table to differentiate between commands that
-- apply to the analysis pipeline or commands that apply on the study
-- processing pipeline
ALTER TABLE qiita.software_command ADD is_analysis bool DEFAULT 'False' NOT NULL;

-- We can handle some of the special cases here, so we simplify the work in the
-- python patch

@@ -102,7 +107,7 @@
baf_cmd_id bigint;
BEGIN
INSERT INTO qiita.software (name, version, description, environment_script, start_script, software_type_id, active)
VALUES ('Qiita', 'alpha', 'Internal Qiita jobs', 'source activate qiita', 'qiita-private-2', 3, True)
VALUES ('Qiita', 'alpha', 'Internal Qiita jobs', 'source activate qiita', 'qiita-private-plugin', 3, True)
RETURNING software_id INTO qiita_sw_id;

INSERT INTO qiita.software_command (software_id, name, description)
13 changes: 9 additions & 4 deletions qiita_db/support_files/patches/python_patches/52.py
@@ -94,6 +94,7 @@ def create_non_rarefied_biom_artifact(analysis, biom_data, rarefied_table):
# Note that we are sure that the biom table exists for sure, so
# no need to check if biom_fp is undefined
biom_table = load_table(biom_fp)
samples = set(samples).intersection(biom_table.ids())
biom_table.filter(samples, axis='sample', inplace=True)
new_table = new_table.merge(biom_table)
ids_map.update({sid: "%d.%s" % (a_id, sid)
@@ -498,8 +499,9 @@ def transfer_job(analysis, command_id, params, input_artifact_id, job_data,
qiime_id = TRN.execute_fetchlast()

# Step 2: Insert the new commands in the software_command table
sql = """INSERT INTO qiita.software_command (software_id, name, description)
VALUES (%s, %s, %s)
sql = """INSERT INTO qiita.software_command
(software_id, name, description, is_analysis)
VALUES (%s, %s, %s, TRUE)
RETURNING command_id"""
TRN.add(sql, [qiime_id, 'Summarize Taxa', 'Plots taxonomy summaries at '
'different taxonomy levels'])
@@ -606,7 +608,7 @@ def transfer_job(analysis, command_id, params, input_artifact_id, job_data,
[sum_taxa_cmd_id, 'Defaults',
'{"sort": false, "metadata_category": ""}'],
[bdiv_cmd_id, 'Unweighted UniFrac',
'{"metrics": "unweighted_unifrac", "tree": ""}'],
'{"metric": "unweighted_unifrac", "tree": ""}'],
[arare_cmd_id, 'Defaults',
'{"max_rare_depth": "Default", "tree": "", "num_steps": 10, '
'"min_rare_depth": 10, "metrics": ["chao1", "observed_otus"]}'],
@@ -669,7 +671,10 @@ def transfer_job(analysis, command_id, params, input_artifact_id, job_data,
srare_cmd_out_id)
else:
# The BIOM table was not rarefied, use current table as initial
initial_biom_id = transfer_file_to_artifact()
initial_biom_id = transfer_file_to_artifact(
analysis['analysis_id'], analysis['timestamp'], None,
biom_data['data_type_id'], None, 7,
biom_data['filepath_id'])

# Loop through all the jobs that used this biom table as input
sql = """SELECT *