Analysis refactor: allow users to change default parameters #2136


Merged
Changes from all commits (139 commits)
3601c29
fix #1505
antgonza Jan 2, 2017
0d6788e
improving some GUI stuff
antgonza Jan 3, 2017
12406cc
improving some GUI stuff - missing lines
antgonza Jan 3, 2017
958fcbe
pull upstream master
antgonza Jan 4, 2017
a57ef23
addressing all comments
antgonza Jan 5, 2017
2ead7a6
ready for review
antgonza Jan 5, 2017
73a78e7
fix #1987
antgonza Jan 16, 2017
e64a22a
Merge pull request #2036 from antgonza/fix-1505
josenavas Jan 16, 2017
0dcae8b
Merge pull request #2047 from antgonza/fix-1987
josenavas Jan 17, 2017
4a5bbbc
initial commit
antgonza Jan 18, 2017
f99975c
requested changes
antgonza Jan 18, 2017
ed899a8
Merge pull request #2049 from antgonza/add-processing-suggestions
josenavas Jan 18, 2017
d508320
fix filter job list
antgonza Jan 18, 2017
025cc1e
Merge pull request #2050 from antgonza/fix-filter-job-list
josenavas Jan 18, 2017
599bcde
Fixing server cert (#2051)
josenavas Jan 19, 2017
d12ccfe
fix get_studies
antgonza Jan 20, 2017
b33983b
flake8
antgonza Jan 20, 2017
b4f1b1f
fix #503
antgonza Jan 20, 2017
62a1b93
fix #2010
antgonza Jan 20, 2017
2e36141
fix #1913
antgonza Jan 21, 2017
e006e20
fix errors
antgonza Jan 21, 2017
c174693
Merge pull request #2052 from antgonza/fix-get_studies
josenavas Jan 23, 2017
131dd6a
Merge pull request #2053 from antgonza/fix-by-blinking
josenavas Jan 23, 2017
ccb55bd
addressing @josenavas comment
antgonza Jan 24, 2017
dfe2e83
flake8
antgonza Jan 24, 2017
15fcceb
Merge pull request #2056 from antgonza/fix-1913
josenavas Jan 24, 2017
7f97f2a
fix #1010
antgonza Jan 26, 2017
9eb9dbb
fix #1066 (#2058)
antgonza Jan 26, 2017
23104d7
addressing @josenavas comments
antgonza Jan 27, 2017
1f1e826
fix #1961
antgonza Jan 27, 2017
19a9dda
fix #1837
antgonza Jan 27, 2017
19889f9
Automatic jobs & new stats (#2057)
antgonza Jan 27, 2017
4e380e0
Merge pull request #2060 from antgonza/fix-1961
wasade Jan 28, 2017
6f0dd71
generalizing this functionality
antgonza Jan 28, 2017
ed9fc65
fix #1816
antgonza Jan 29, 2017
4b19b45
fix #1959
antgonza Jan 30, 2017
d9b41e8
addressing @josenavas comments
antgonza Feb 1, 2017
5ef06ae
addressing @josenavas comments
antgonza Feb 2, 2017
5e3504a
fixing error
antgonza Feb 2, 2017
d10096a
Merge branch 'master' of https://github.com/biocore/qiita into fix-1010
antgonza Feb 2, 2017
661342f
fixed?
antgonza Feb 2, 2017
fcd249b
addressing @josenavas comments
antgonza Feb 3, 2017
f3c1216
Merge pull request #2063 from antgonza/fix-1816
josenavas Feb 3, 2017
a91a6fd
Merge pull request #2064 from antgonza/fix-1959
tanaes Feb 3, 2017
7b9fa6f
addressing @wasade comments
antgonza Feb 3, 2017
33bcbe5
Merge pull request #2059 from antgonza/fix-1010
josenavas Feb 3, 2017
5e4bd9b
Merge branch 'master' of https://github.com/biocore/qiita into fix-1837
antgonza Feb 3, 2017
8bf3d6e
fix flake8
antgonza Feb 3, 2017
7807bac
Merge pull request #2061 from antgonza/fix-1837
josenavas Feb 3, 2017
6360675
generate biom and metadata release (#2066)
antgonza Feb 3, 2017
811b7a7
database changes to fix 969
antgonza Feb 3, 2017
751d4ad
adding delete
antgonza Feb 3, 2017
65a86df
addressing @josenavas comments
antgonza Feb 3, 2017
b1817dd
addressing @ElDeveloper comments
antgonza Feb 4, 2017
18d77e1
duh!
antgonza Feb 4, 2017
01c656c
Merge pull request #2071 from antgonza/fix-969-db
josenavas Feb 6, 2017
53188a6
fix generate_biom_and_metadata_release (#2072)
antgonza Feb 7, 2017
1ab4e3b
Fixing merge conflicts with master
josenavas Feb 8, 2017
1e8332e
Merge branch 'analysis-refactor' of https://github.com/biocore/qiita …
josenavas Feb 9, 2017
cb67d3d
Removing qiita ware code that will not be used anymore
josenavas Feb 9, 2017
5a5127d
Merge branch 'analysis-refactor' of https://github.com/biocore/qiita …
josenavas Feb 9, 2017
0033480
Organizing the handlers and new analysis description page
josenavas Feb 9, 2017
3e3f6e1
fixing timestamp
antgonza Feb 9, 2017
6a20c1b
rm formats
antgonza Feb 9, 2017
a1b3c90
st -> pt
antgonza Feb 9, 2017
3809ad5
Connecting the analysis creation and making interface responsive
josenavas Feb 9, 2017
067f14f
Addressing @antgonza's comments
josenavas Feb 10, 2017
cf4862d
Solving merge conflicts
josenavas Feb 10, 2017
3b07151
Initial artifact GUI refactor
josenavas Feb 10, 2017
a6595a9
Removing unused code
josenavas Feb 10, 2017
6343b49
Merge branch 'analysis-refactor-gui-part2' into analysis-refactor-gui…
josenavas Feb 10, 2017
a3505c2
moving to ISO 8601 - wow :'(
antgonza Feb 13, 2017
c8113ea
fix errors
antgonza Feb 13, 2017
f4835d5
addressing @wasade comments
antgonza Feb 13, 2017
f731768
Adding can_edit call to the analysis
josenavas Feb 14, 2017
7542658
Fixing artifact rest API since not all artifacts have study
josenavas Feb 14, 2017
e0180e8
Adding can_be_publicized call to analysis
josenavas Feb 15, 2017
f55ca5c
Adding QiitaHTTPError to handle errors gracefully
josenavas Feb 15, 2017
1fa4b19
Adding safe_execution contextmanager
josenavas Feb 15, 2017
b61ae87
Fixing typo
josenavas Feb 15, 2017
bb68303
Adding qiita test checker
josenavas Feb 15, 2017
b31a025
Adapting some artifact handlers
josenavas Feb 15, 2017
378d7ff
Fixing merge conflicts
josenavas Feb 15, 2017
444da08
Merge branch 'analysis-refactor-gui-part2' into analysis-refactor-gui…
josenavas Feb 15, 2017
f6b4c46
Abstracting the graph reloading and adding some documentation
josenavas Feb 15, 2017
e9d3af3
Fixing typo
josenavas Feb 15, 2017
69b6412
Merge branch 'analysis-refactor-gui-part3' into analysis-refactor-gui…
josenavas Feb 15, 2017
60cd430
Fixing changing artifact visibility
josenavas Feb 15, 2017
be099cb
Fixing delete
josenavas Feb 15, 2017
819e9a5
Fixing artifact deletion
josenavas Feb 15, 2017
e941fa7
Adding default parameters to the commands
josenavas Feb 15, 2017
d6ebcb4
Fixing processing page
josenavas Feb 15, 2017
6ada2ba
Fixing variable name
josenavas Feb 15, 2017
7d70a38
fixing private/public studies
antgonza Feb 15, 2017
e8ca9db
Changing bdiv metrics to single choice
josenavas Feb 15, 2017
4bf4808
Merge pull request #2080 from antgonza/fix-private_public-stats-page
josenavas Feb 15, 2017
aa68a21
Merge pull request #2075 from antgonza/fix-timestamp
wasade Feb 15, 2017
0c6ffa7
sanbox-to-sandbox
antgonza Feb 15, 2017
586660b
flake8
antgonza Feb 15, 2017
6cdc574
Fixing patch
josenavas Feb 15, 2017
7bae13e
fixing other issues
antgonza Feb 16, 2017
cf801a4
Merge pull request #2082 from antgonza/sanbox-to-sandbox
josenavas Feb 16, 2017
7c2454e
adding share documentation
antgonza Mar 3, 2017
c2eb6ae
psycopg2 <= 2.7
antgonza Mar 3, 2017
aeeac62
psycopg2 < 2.7
antgonza Mar 3, 2017
2795046
Merge pull request #2085 from antgonza/sharing-help
josenavas Mar 3, 2017
a77b040
Various small fixes to be able to run tests on the plugins
josenavas Mar 15, 2017
ff9eda9
Merging with master
josenavas Mar 15, 2017
6145976
Adding private module
josenavas Mar 15, 2017
7153efb
Fixing processing job completion
josenavas Mar 15, 2017
46dd73d
Fixing patch 52
josenavas Mar 16, 2017
08eafaa
Fixing call
josenavas Mar 16, 2017
b1a3e99
Fixing complete
josenavas Mar 17, 2017
c9580b7
small fixes
josenavas Mar 17, 2017
85d4aa7
Solving merge conflicts
josenavas Apr 24, 2017
1ffa231
Solving merge conflicts
josenavas Apr 24, 2017
9e14cc6
Merge branch 'analysis-refactor-gui-part5' into analysis-refactor-gui…
josenavas Apr 24, 2017
4cd34d2
Merge branch 'analysis-refactor-gui-part6' into analysis-refactor-gui…
josenavas Apr 24, 2017
bf33527
Solving merge conflicts
josenavas Apr 25, 2017
3529556
Adding processing handlers
josenavas Apr 25, 2017
ca5a331
Fixing url and bug on processing job workflow
josenavas May 1, 2017
b934d68
Adding the private script runner
josenavas May 1, 2017
fa00d60
Adding is_analysis column to the command
josenavas May 1, 2017
b2ac959
Adding retrieval of commands excluding analysis commands
josenavas May 1, 2017
0326a63
Addressing bug on retrieving information from redis
josenavas May 1, 2017
cd6b61c
Enabling the command register endpoint to provide if the command is a…
josenavas May 4, 2017
cccb1d4
Addressing @antgonza's comments
josenavas May 16, 2017
0a584f3
Addressing @wasade's comments
josenavas May 16, 2017
a33d14f
Supporting multiple choice
josenavas May 17, 2017
66986ab
Adding documentation
josenavas May 17, 2017
3e36212
Modifying handler to pass allow_change_optionals
josenavas May 22, 2017
e04d03d
Solving merge conflicts
josenavas May 22, 2017
dc3a51e
returning optional parameters
josenavas May 22, 2017
12235a7
Addressing bug found by @antgonza
josenavas May 22, 2017
f5c5811
Enabling changing the default parameters
josenavas May 22, 2017
08554e1
Adding correct class
josenavas May 22, 2017
04a0c26
Allowing user to change default parameters
josenavas May 24, 2017
4330dae
Fixing bug with commands listing
josenavas May 24, 2017
3b8acdc
Addressing @wasade's comments
josenavas May 31, 2017
5 changes: 5 additions & 0 deletions qiita_db/handlers/plugin.py
@@ -100,6 +100,11 @@ def post(self, name, version):
cmd_desc = self.get_argument('description')
req_params = loads(self.get_argument('required_parameters'))
opt_params = loads(self.get_argument('optional_parameters'))

for p_name, (p_type, dflt) in opt_params.items():
if p_type.startswith('mchoice'):
opt_params[p_name] = [p_type, loads(dflt)]

outputs = self.get_argument('outputs', None)
if outputs:
outputs = loads(outputs)
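
For context, a plugin registering a command with a multiple-choice (mchoice) optional parameter would POST a payload along these lines; the field names mirror the handler above and the test below, but the payload itself is only an illustrative sketch (command name and version travel in the URL, per the handler signature).

from json import dumps

# Illustrative registration payload. For 'mchoice' parameters the default
# value is itself a JSON-encoded list, which the handler above decodes
# with loads() before creating the command.
payload = {
    'description': 'Command added for testing',
    'required_parameters': dumps({'in_data': ['artifact:["FASTA"]', None]}),
    'optional_parameters': dumps({
        'param1': ['string', ''],
        'param4': ['mchoice:["opt1", "opt2", "opt3"]',
                   dumps(['opt1', 'opt2'])]}),
    'outputs': dumps({'out1': 'BIOM'})}
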
9 changes: 6 additions & 3 deletions qiita_db/handlers/tests/test_plugin.py
@@ -74,9 +74,12 @@ def test_post(self):
'description': 'Command added for testing',
'required_parameters': dumps(
{'in_data': ['artifact:["FASTA"]', None]}),
'optional_parameters': dumps({'param1': ['string', ''],
'param2': ['float', '1.5'],
'param3': ['boolean', 'True']}),
'optional_parameters': dumps(
{'param1': ['string', ''],
'param2': ['float', '1.5'],
'param3': ['boolean', 'True'],
'param4': ['mchoice:["opt1", "opt2", "opt3"]',
dumps(['opt1', 'opt2'])]}),
'outputs': dumps({'out1': 'BIOM'}),
'default_parameter_sets': dumps(
{'dflt1': {'param1': 'test',
33 changes: 27 additions & 6 deletions qiita_db/software.py
@@ -55,6 +55,8 @@ def get_commands_by_input_type(cls, artifact_types, active_only=True,
active_only : bool, optional
If True, return only active commands, otherwise return all commands
Default: True
exclude_analysis : bool, optional
If True, return commands that are not part of the analysis pipeline

Returns
-------
@@ -272,16 +274,25 @@ def create(cls, software, name, description, parameters, outputs=None,
supported_types = ['string', 'integer', 'float', 'reference',
'boolean', 'prep_template']
if ptype not in supported_types and not ptype.startswith(
('choice', 'artifact')):
supported_types.extend(['choice', 'artifact'])
('choice', 'mchoice', 'artifact')):
supported_types.extend(['choice', 'mchoice', 'artifact'])
raise qdb.exceptions.QiitaDBError(
"Unsupported parameters type '%s' for parameter %s. "
"Supported types are: %s"
% (ptype, pname, ', '.join(supported_types)))

if ptype.startswith('choice') and dflt is not None:
choices = loads(ptype.split(':')[1])
if dflt not in choices:
if ptype.startswith(('choice', 'mchoice')) and dflt is not None:
choices = set(loads(ptype.split(':')[1]))
dflt_val = dflt
if ptype.startswith('choice'):
# In the choice case, the dflt value is a single string;
# wrap it in a list so the issuperset call below works
# for both cases
dflt_val = [dflt_val]
else:
# jsonize the list to store it in the DB
dflt = dumps(dflt)
if not choices.issuperset(dflt_val):
Review comment (Contributor): should this check be done immediately after line 285?

Reply (Contributor, author): Nope, line 291 makes sure that dflt_val is a list so this call works as expected.

raise qdb.exceptions.QiitaDBError(
"The default value '%s' for the parameter %s is not "
"listed in the available choices: %s"
@@ -452,7 +463,17 @@ def optional_parameters(self):
WHERE command_id = %s AND required = false"""
qdb.sql_connection.TRN.add(sql, [self.id])
res = qdb.sql_connection.TRN.execute_fetchindex()
return {pname: [ptype, dflt] for pname, ptype, dflt in res}

# Define a function to load the JSON storing the default parameters
# if ptype is multiple choice; inlining it in the comprehension as a
# one-liner made the code harder to read
def dflt_fmt(dflt, ptype):
if ptype.startswith('mchoice'):
return loads(dflt)
return dflt

return {pname: [ptype, dflt_fmt(dflt, ptype)]
for pname, ptype, dflt in res}

@property
def default_parameter_sets(self):
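
As a minimal standalone sketch of the default handling added above (the function name and error type are illustrative, not Qiita's API): a 'choice' default is a single string, while an 'mchoice' default is a list that is validated against the declared choices and JSON-encoded before storage.

from json import dumps, loads


def check_choice_default(ptype, dflt):
    """Sketch of the choice/mchoice default validation used in create()."""
    choices = set(loads(ptype.split(':')[1]))
    # Wrap a single 'choice' default in a list so one issuperset call
    # covers both cases
    dflt_val = dflt if ptype.startswith('mchoice') else [dflt]
    if not choices.issuperset(dflt_val):
        raise ValueError("default %r is not in the available choices: %s"
                         % (dflt, sorted(choices)))
    # mchoice defaults are JSON-encoded so they can be stored in the DB
    return dumps(dflt) if ptype.startswith('mchoice') else dflt


check_choice_default('choice:["opt1", "opt2"]', 'opt1')    # -> 'opt1'
check_choice_default('mchoice:["opt1", "opt2", "opt3"]',
                     ['opt1', 'opt2'])                      # -> '["opt1", "opt2"]'
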
10 changes: 4 additions & 6 deletions qiita_db/test/test_software.py
@@ -28,6 +28,8 @@ def setUp(self):
'req_param': ['string', None],
'opt_int_param': ['integer', '4'],
'opt_choice_param': ['choice:["opt1", "opt2"]', 'opt1'],
'opt_mchoice_param': ['mchoice:["opt1", "opt2", "opt3"]',
['opt1', 'opt2']],
'opt_bool': ['boolean', 'False']}
self.outputs = {'out1': 'BIOM'}

@@ -291,6 +293,8 @@ def test_create(self):
exp_optional = {
'opt_int_param': ['integer', '4'],
'opt_choice_param': ['choice:["opt1", "opt2"]', 'opt1'],
'opt_mchoice_param': ['mchoice:["opt1", "opt2", "opt3"]',
['opt1', 'opt2']],
'opt_bool': ['boolean', 'False']}
self.assertEqual(obs.optional_parameters, exp_optional)
self.assertFalse(obs.analysis_only)
@@ -300,13 +304,7 @@ def test_create(self):
self.parameters, analysis_only=True)
self.assertEqual(obs.name, "Test Command 2")
self.assertEqual(obs.description, "This is a command for testing")
exp_required = {'req_param': ('string', [None]),
'req_art': ('artifact', ['BIOM'])}
self.assertEqual(obs.required_parameters, exp_required)
exp_optional = {
'opt_int_param': ['integer', '4'],
'opt_choice_param': ['choice:["opt1", "opt2"]', 'opt1'],
'opt_bool': ['boolean', 'False']}
self.assertEqual(obs.optional_parameters, exp_optional)
self.assertTrue(obs.analysis_only)

32 changes: 17 additions & 15 deletions qiita_pet/handlers/api_proxy/processing.py
@@ -41,13 +41,15 @@ def process_artifact_handler_get_req(artifact_id):
'study_id': artifact.study.id}


def list_commands_handler_get_req(artifact_types):
def list_commands_handler_get_req(artifact_types, exclude_analysis):
"""Retrieves the commands that can process the given artifact types

Parameters
----------
artifact_types : str
Comma-separated list of artifact types
exclude_analysis : bool
If True, return commands that are not part of the analysis pipeline

Returns
-------
@@ -62,7 +64,8 @@ def list_commands_handler_get_req(artifact_types):
artifact_types = artifact_types.split(',')
cmd_info = [
{'id': cmd.id, 'command': cmd.name, 'output': cmd.outputs}
for cmd in Command.get_commands_by_input_type(artifact_types)]
for cmd in Command.get_commands_by_input_type(
artifact_types, exclude_analysis=exclude_analysis)]
Review comment (Contributor): I remain not jazzed about the arbitrary binary partitioning of commands, but I think I'm outvoted on this.

Reply (Contributor, author): The "arbitrary" partitioning matches the "upstream" and "downstream" commands that have been in Qiime for a long time, i.e. commands that you need to run to get a BIOM table, and commands that you run on a BIOM table to gain insight.


return {'status': 'success',
'message': '',
Expand Down Expand Up @@ -92,21 +95,22 @@ def list_options_handler_get_req(command_id):
return {'status': 'success',
'message': '',
'options': options,
'req_options': command.required_parameters}
'req_options': command.required_parameters,
'opt_options': command.optional_parameters}


def workflow_handler_post_req(user_id, dflt_params_id, req_params):
def workflow_handler_post_req(user_id, command_id, params):
"""Creates a new workflow in the system

Parameters
----------
user_id : str
The user creating the workflow
dflt_params_id : int
The default parameters to use for the first command of the workflow
req_params : str
JSON representations of the required parameters for the first
command of the workflow
command_id : int
The first command to execute in the workflow
params : str
JSON representations of the parameters for the first command of
the workflow

Returns
-------
Expand All @@ -116,9 +120,7 @@ def workflow_handler_post_req(user_id, dflt_params_id, req_params):
'message': str,
'workflow_id': int}
"""
dflt_params = DefaultParameters(dflt_params_id)
req_params = loads(req_params)
parameters = Parameters.from_default_params(dflt_params, req_params)
parameters = Parameters.load(Command(command_id), json_str=params)
wf = ProcessingWorkflow.from_scratch(User(user_id), parameters)
# this is safe as we are creating the workflow for the first time and there
# is only one node. Remember networkx doesn't assure order of nodes
@@ -116,9 +120,7 @@ def workflow_handler_post_req(user_id, dflt_params_id, req_params):

def workflow_handler_patch_req(req_op, req_path, req_value=None,
req_from=None):
"""Patches an ontology
"""Patches a workflow

Parameters
----------
req_op : str
The operation to perform on the ontology
The operation to perform on the workflow
req_path : str
The ontology to patch
Path parameter with the workflow to patch
req_value : str, optional
The value that needs to be modified
req_from : str, optional
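
A usage sketch of the reworked proxy call (parameter values are copied from the test below; running this requires a populated Qiita test environment, so treat it as illustrative): the first job of a workflow is now created from a command id plus a full JSON parameter string, rather than from a default-parameter-set id.

from json import dumps

from qiita_pet.handlers.api_proxy.processing import workflow_handler_post_req

# Full parameter set for command 1 (Split libraries FASTQ), as in the test
params = dumps({
    'input_data': 1, 'barcode_type': 'golay_12', 'max_barcode_errors': 1.5,
    'max_bad_run_length': 3, 'phred_offset': 'auto',
    'phred_quality_threshold': 3, 'rev_comp': False,
    'rev_comp_barcode': False, 'rev_comp_mapping_barcodes': False,
    'min_per_read_length_fraction': 0.75, 'sequence_max_n': 0})

obs = workflow_handler_post_req('test@foo.bar', 1, params)
# obs['workflow_id'] identifies the new workflow; obs['job'] describes its
# single node (id, inputs and label)
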
46 changes: 38 additions & 8 deletions qiita_pet/handlers/api_proxy/tests/test_processing.py
@@ -9,7 +9,6 @@
from json import dumps

from qiita_core.util import qiita_test_checker
from qiita_db.util import get_count
from qiita_db.processing_job import ProcessingWorkflow
from qiita_db.software import Command, Parameters
from qiita_db.user import User
@@ -38,20 +37,36 @@ def test_process_artifact_handler_get_req(self):
self.assertEqual(obs, exp)

def test_list_commands_handler_get_req(self):
obs = list_commands_handler_get_req('FASTQ')
obs = list_commands_handler_get_req('FASTQ', True)
exp = {'status': 'success',
'message': '',
'commands': [{'id': 1, 'command': 'Split libraries FASTQ',
'output': [['demultiplexed', 'Demultiplexed']]}]}
self.assertEqual(obs, exp)

obs = list_commands_handler_get_req('Demultiplexed')
obs = list_commands_handler_get_req('Demultiplexed', True)
exp = {'status': 'success',
'message': '',
'commands': [{'id': 3, 'command': 'Pick closed-reference OTUs',
'output': [['OTU table', 'BIOM']]}]}
self.assertEqual(obs, exp)

obs = list_commands_handler_get_req('BIOM', False)
exp = {'status': 'success',
'message': '',
'commands': [
{'command': 'Summarize Taxa', 'id': 11,
'output': [['taxa_summary', 'taxa_summary']]},
{'command': 'Beta Diversity', 'id': 12,
'output': [['distance_matrix', 'distance_matrix']]},
{'command': 'Alpha Rarefaction', 'id': 13,
'output': [['rarefaction_curves', 'rarefaction_curves']]},
{'command': 'Single Rarefaction', 'id': 14,
'output': [['rarefied_table', 'BIOM']]}]}
# since the order of the commands can change, test them separately
self.assertItemsEqual(obs.pop('commands'), exp.pop('commands'))
self.assertEqual(obs, exp)

def test_list_options_handler_get_req(self):
obs = list_options_handler_get_req(3)
exp = {'status': 'success',
@@ -64,11 +79,20 @@ def test_list_options_handler_get_req(self):
'sortmerna_e_value': 1,
'sortmerna_max_pos': 10000,
'threads': 1}}],
'req_options': {'input_data': ('artifact', ['Demultiplexed'])}}
'req_options': {'input_data': ('artifact', ['Demultiplexed'])},
'opt_options': {'reference': ['reference', '1'],
'similarity': ['float', '0.97'],
'sortmerna_coverage': ['float', '0.97'],
'sortmerna_e_value': ['float', '1'],
'sortmerna_max_pos': ['integer', '10000'],
'threads': ['integer', '1']}}
# First check that the keys are the same
self.assertItemsEqual(obs, exp)
self.assertEqual(obs['status'], exp['status'])
self.assertEqual(obs['message'], exp['message'])
self.assertEqual(obs['options'], exp['options'])
self.assertEqual(obs['req_options'], exp['req_options'])
self.assertEqual(obs['opt_options'], exp['opt_options'])

def test_job_ajax_get_req(self):
obs = job_ajax_get_req("063e553b-327c-4818-ab4a-adfe58e49860")
@@ -94,15 +118,21 @@ def test_job_ajax_get_req(self):
@qiita_test_checker()
class TestProcessingAPI(TestCase):
def test_workflow_handler_post_req(self):
next_id = get_count('qiita.processing_job_workflow_root') + 1
obs = workflow_handler_post_req("test@foo.bar", 1, '{"input_data": 1}')
wf = ProcessingWorkflow(next_id)
params = ('{"max_barcode_errors": 1.5, "barcode_type": "golay_12", '
'"max_bad_run_length": 3, "phred_offset": "auto", '
'"rev_comp": false, "phred_quality_threshold": 3, '
'"input_data": 1, "rev_comp_barcode": false, '
'"rev_comp_mapping_barcodes": false, '
'"min_per_read_length_fraction": 0.75, "sequence_max_n": 0}')
obs = workflow_handler_post_req("test@foo.bar", 1, params)
wf_id = obs['workflow_id']
wf = ProcessingWorkflow(wf_id)
nodes = wf.graph.nodes()
self.assertEqual(len(nodes), 1)
job = nodes[0]
exp = {'status': 'success',
'message': '',
'workflow_id': next_id,
'workflow_id': wf_id,
'job': {'id': job.id,
'inputs': [1],
'label': "Split libraries FASTQ",
3 changes: 2 additions & 1 deletion qiita_pet/handlers/artifact_handlers/process_handlers.py
@@ -36,7 +36,8 @@ def process_artifact_handler_get_req(artifact_id):
'message': '',
'name': artifact.name,
'type': artifact.artifact_type,
'artifact_id': artifact.id}
'artifact_id': artifact.id,
'allow_change_optionals': artifact.analysis is not None}


class ProcessArtifactHandler(BaseHandler):
qiita_pet/handlers/artifact_handlers/tests/test_process_handlers.py
@@ -9,6 +9,7 @@
from unittest import TestCase, main

from qiita_core.util import qiita_test_checker
from qiita_pet.test.tornado_test_base import TestHandlerBase
from qiita_pet.handlers.artifact_handlers.process_handlers import (
process_artifact_handler_get_req)

@@ -17,9 +18,36 @@
class TestProcessHandlersUtils(TestCase):
def test_process_artifact_handler_get_req(self):
obs = process_artifact_handler_get_req(1)
exp = {}
exp = {'status': 'success',
'message': '',
'name': 'Raw data 1',
'type': 'FASTQ',
'artifact_id': 1,
'allow_change_optionals': False}
self.assertEqual(obs, exp)

obs = process_artifact_handler_get_req(8)
exp = {'status': 'success',
'message': '',
'name': 'noname',
'type': 'BIOM',
'artifact_id': 8,
'allow_change_optionals': True}
self.assertEqual(obs, exp)


class TestProcessHandlers(TestHandlerBase):
def test_get_process_artifact_handler(self):
response = self.get("/artifact/1/process/")
self.assertEqual(response.code, 200)
self.assertNotEqual(response.body, "")
self.assertIn('load_artifact_type(params.nodes, false);',
response.body)

response = self.get("/artifact/8/process/")
self.assertEqual(response.code, 200)
self.assertNotEqual(response.body, "")
self.assertIn('load_artifact_type(params.nodes, true);', response.body)

if __name__ == '__main__':
main()
10 changes: 6 additions & 4 deletions qiita_pet/handlers/study_handlers/processing.py
@@ -26,7 +26,9 @@ def get(self):
# Fun fact - if the argument is a list, JS adds '[]' to the
# argument name
artifact_types = self.get_argument("artifact_types[]")
self.write(list_commands_handler_get_req(artifact_types))
exclude_analysis = self.get_argument('include_analysis') == 'false'
self.write(
list_commands_handler_get_req(artifact_types, exclude_analysis))


class ListOptionsHandler(BaseHandler):
@@ -46,10 +48,10 @@ def post(self):
class WorkflowHandler(BaseHandler):
@authenticated
def post(self):
dflt_params_id = self.get_argument('dflt_params_id')
req_params = self.get_argument('req_params')
command_id = self.get_argument('command_id')
params = self.get_argument('params')
self.write(workflow_handler_post_req(
self.current_user.id, dflt_params_id, req_params))
self.current_user.id, command_id, params))

@authenticated
def patch(self):
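
A rough sketch of how the GUI arguments map onto the proxy calls above (argument names are taken from the handlers in this file; the wrapper function is illustrative and assumes a populated Qiita environment):

from qiita_pet.handlers.api_proxy.processing import list_commands_handler_get_req


def list_commands(artifact_types, include_analysis):
    # The GUI sends 'artifact_types[]' and 'include_analysis'; the handler
    # above inverts the latter into exclude_analysis before delegating
    exclude_analysis = include_analysis == 'false'
    return list_commands_handler_get_req(artifact_types, exclude_analysis)


# e.g. the analysis pipeline asks for every command that consumes a BIOM,
# including analysis-only commands
info = list_commands('BIOM', 'true')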