unclear semantics of debug_mode in /ad, /pub route #453
I remember being thoroughly confused by the same code area when I was first
starting with psiturk. I don't remember the resolution. I'll try to look
soon.
On Wed, Oct 28, 2020, 8:51 PM jacob-lee wrote:
The /ad, /pub route includes a confusing bit of code:

```python
mode = request.args['mode']
if hit_id[:5] == "debug":
    debug_mode = True
else:
    debug_mode = False
```
The confusion here is that request.args['mode'] is set to 'debug' when a task is launched in debug mode (do_debug). So it seems like pulling it out of the hit_id is redundant. But this difference has consequences. In test_repeat_experiment_fail, the request to /ad is:
request = "&".join([
"assignmentId=%s" % self.assignment_id,
"workerId=%s" % self.worker_id,
"hitId=%s" % self.hit_id,
"mode=debug"])
Note that there is no debug in the hitId. This means that the /ad, /pub route sets debug_mode = False, despite the mode=debug in the request args. Note also that if you force it to run in debug mode (by prefixing the hitId with debug), the test fails, because it doesn't raise the 1010 experiment error that it expects.
So two questions:
1. Is there an intended difference between mode=debug and debug_mode=True? Or is this an artifact of development history?
2. When debugging code locally, should requesting the ad after completing/submitting raise the 1010? Is that the intended behavior?
Just fyi, I am working on an alternative implementation of the route. It currently fails several tests, but that is because my code uses request.args['mode'] directly to test whether it is in debug mode. Currently that implementation is as follows:

```python
@app.route('/ad', methods=['GET'])
@app.route('/pub', methods=['GET'])
@nocache
def advertisement():
    if not check_browser(request.user_agent.string):
        app.logger.warning(f'Browser type not allowed: {request}')
        raise ExperimentError('browser_type_not_allowed')

    if not all(k in request.args for k in ['hitId', 'assignmentId', 'mode']):
        app.logger.error(f'Request args to /ad incomplete: {request.args}')
        raise ExperimentError('ad_args_not_set')

    hit_id = request.args['hitId']
    assignment_id = request.args['assignmentId']
    mode = request.args['mode']
    worker_id = request.args['workerId'] if 'workerId' in request.args else None
    allow_repeats = CONFIG.getboolean('Task Parameters', 'allow_repeats')

    # Just looking at the ad
    if assignment_id == 'ASSIGNMENT_ID_NOT_AVAILABLE':
        return render_template('ad.html', pop_sub_url='')

    # In debug mode we always just show the ad
    if mode == 'debug':
        return render_template(
            'ad.html',
            pop_sub_url=f'consent?hitId={hit_id}'
                        f'&assignmentId={assignment_id}'
                        f'&workerId={worker_id}'
                        f'&mode={mode}'
        )

    # Short-circuit attempted repeaters when repeating disallowed
    if not allow_repeats:
        try:
            nrecords = Participant.query. \
                filter(Participant.assignmentid != assignment_id). \
                filter(Participant.workerid == worker_id). \
                count()
        except sqlalchemy.exc.SQLAlchemyError:
            app.logger.error('Error counting number records.', exc_info=True)
            raise ExperimentError('unknown_error')
        if nrecords > 0:
            raise ExperimentError('already_did_exp_hit')

    # Anything past this point satisfies all of the following:
    # - NOT just-looking
    # - NOT debug
    # - (allow-repeats OR (NOT allow-repeats AND num-records == 0))
    try:
        result = Participant.query. \
            filter(Participant.assignmentid == assignment_id). \
            filter(Participant.workerid == worker_id). \
            filter(Participant.hitid == hit_id). \
            one_or_none()
    except (sqlalchemy.exc.SQLAlchemyError,
            sqlalchemy.orm.exc.MultipleResultsFound):
        app.logger.error('Combination of assignment_id, worker_id, '
                         'and hit_id not unique.')
        raise ExperimentError('hit_assign_appears_in_database_more_than_once')

    if result:
        if result.status == STARTED or result.status == QUITEARLY:
            raise ExperimentError('already_started_exp_mturk')
        elif result.status == COMPLETED or result.status == SUBMITTED:
            return render_template(
                'thanks-mturksubmit.html',
                using_sandbox=(mode == "sandbox"),
                hitid=hit_id,
                assignmentid=assignment_id,
                workerid=worker_id
            )
        else:
            app.logger.error(
                f'Assignment {assignment_id} exists for hitid '
                f'{hit_id} and worker {worker_id} but status '
                f'{result.status} is unexpected for an ad request.')
            raise ExperimentError('status_incorrectly_set')

    # Just show the damn ad already
    return render_template(
        'ad.html',
        pop_sub_url=f'consent?hitId={hit_id}'
                    f'&assignmentId={assignment_id}'
                    f'&workerId={worker_id}'
                    f'&mode={mode}'
    )
```
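For what it's worth, a rough sketch of how this version behaves for a test_repeat_experiment_fail-style request, using made-up IDs and a plain Flask test client rather than psiturk's actual test fixtures:

```python
from urllib.parse import urlencode

# Made-up IDs; note there is no "debug" prefix in hitId, exactly as in the test.
query = urlencode({
    'assignmentId': 'ASSIGN1',
    'workerId': 'WORKER1',
    'hitId': 'HIT1',
    'mode': 'debug',
})

with app.test_client() as client:
    resp = client.get(f'/ad?{query}')
    # Because the implementation above branches on mode == 'debug', this request
    # renders ad.html and never reaches the repeat check, so no 1010
    # (already_did_exp_hit) error can be raised here.
    assert resp.status_code == 200
```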
The existing code, and the code above, have race-condition issues that may need to be resolved. I haven't decided how serious a concern it is, or figured out how to prevent it.
Eh? K just be careful there, I'm not sure how thorough those particular
unit tests are, I didn't write those.
And we all know that my tests are immaculate
Sure. It's a particularly critical and gnarly bit of code. I'm taking this one slow. Part of the problem, though, is that the intended behavior is implicit in the code, so it can be difficult to tell whether existing behavior is a bug or what was desired (e.g. the debug_mode issue).
I remembered a bit more, although I still haven't reviewed the code.

mode= in the URL has only three possible values from the perspective of experiment.py: live, sandbox, or neither. If neither, then it triggers (used to trigger, before removal of the psiturk ad server?) a different /complete route. In the forums, we have advised people to use mode=lab or something more semantically meaningful if they're not using mturk; anything besides live or sandbox. Mode gets logged as a column in the db.

Now if the workerid et al. are prefixed with "debug", then special things happen, like no penalty for early quitting or restarting before finishing, although still no repeats are allowed if a debug id completes the task.

End memory.
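If it helps, here is a condensed restatement of those rules as code; purely illustrative, not taken from experiment.py:

```python
def classify_request(mode: str, worker_id: str) -> dict:
    """Illustrative summary of the rules described above (not psiturk code)."""
    return {
        # 'live' and 'sandbox' are the only MTurk modes; anything else
        # (e.g. mode=lab) means "not MTurk" and used to route to a
        # different /complete endpoint on the old psiturk ad server.
        'uses_mturk': mode in ('live', 'sandbox'),
        # A "debug"-prefixed id relaxes the early-quit/restart penalties,
        # but a completed debug id still cannot repeat the task.
        'debug_id': worker_id.startswith('debug'),
    }

print(classify_request('lab', 'debugWORKER1'))  # {'uses_mturk': False, 'debug_id': True}
```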
I am floored. Ok. So, right now, the /exp route sets the ad server location to /complete. But it used to do this:
which sounds exactly like what you described. The revision of the /ad route that I'm working on only checks against mode == debug, so nothing changes there (though I guess I'd prefer that we make this a documented use case explicitly supported in the code). Do you think that no repeats should be allowed when debugging? I've personally found that to be an annoyance.
mode == custom could direct to a custom completion route.
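Something along these lines, maybe; this is hypothetical, not an existing psiturk feature:

```python
# Hypothetical sketch of mode-selected completion routes (not current psiturk behavior).
COMPLETION_ROUTES = {
    'live': '/complete',
    'sandbox': '/complete',
}

def completion_url(mode: str) -> str:
    # Any other mode (e.g. 'lab', 'custom') falls through to a mode-specific route.
    return COMPLETION_ROUTES.get(mode, f'/complete/{mode}')

print(completion_url('custom'))  # /complete/custom
```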
In the existing code:

```python
nrecords = 0
for record in matches:
    other_assignment = False
    if record.assignmentid != assignment_id:
        other_assignment = True
    else:
        nrecords += 1
```

As written, this is equivalent to:

```python
other_assignment = matches[-1].assignmentid != assignment_id
nrecords = sum([m.assignmentid == assignment_id for m in matches])
```

I mention this because it seems like the per-iteration reset of other_assignment may not be what was intended.
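If the intent was to flag whether this worker has any record under a different assignment, the loop presumably meant something more like this (a guess at the intent, not psiturk's code):

```python
# Hypothetical correction of the guessed intent: check all matches, not just the last.
other_assignment = any(m.assignmentid != assignment_id for m in matches)
nrecords = sum(m.assignmentid == assignment_id for m in matches)
```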
The /exp route has a race condition between checking the number of matching records and inserting a new user into the database. It probably wouldn't happen in normal circumstances, but ...
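One way to rule that race out, sketched below under the assumption that a composite uniqueness rule is acceptable (psiturk's Participant model does not currently declare one, as far as I know), is to let the database reject the duplicate insert instead of relying on a check-then-insert in the route:

```python
from sqlalchemy import Column, Integer, String, UniqueConstraint
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Participant(Base):
    """Sketch only: a composite unique constraint so duplicates fail at the DB."""
    __tablename__ = 'participants'
    __table_args__ = (
        UniqueConstraint('hitid', 'assignmentid', 'workerid',
                         name='uq_hit_assignment_worker'),
    )
    uniqueid = Column(Integer, primary_key=True)
    hitid = Column(String(128))
    assignmentid = Column(String(128))
    workerid = Column(String(128))

# In the route, the insert itself then becomes the check:
#     try:
#         db_session.add(participant)
#         db_session.commit()
#     except IntegrityError:
#         db_session.rollback()
#         raise ExperimentError('hit_assign_appears_in_database_more_than_once')
```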