Dose response benchmarking data #208
Comments
Dear Anna,

Here is a csv file of the fit table (not filtered for only hits). You should be able to run all of these analyses yourself with the datasets we provide in protti; for this case we use the `rapamycin_dose_response` data set. The rapamycin data set is a good one for benchmarking, as in this experiment we expect very few off-target hits or hits that are difficult to explain (secondary effects). We also reduced the size of the original data set by only including some random proteins and the target.

You could also create an artificial data set with known ground truth using the `create_synthetic_data()` function. You could otherwise also try this data set here: https://www.ebi.ac.uk/pride/archive/projects/PXD038768

I hope this helps!

All the best,
Dina
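For anyone reading along, here is a minimal sketch of how the fit table can be regenerated from the bundled rapamycin data. It follows the dose-response workflow vignette linked below; the column names (`eg_is_decoy`, `fg_quantity`, `r_file_name`, `eg_precursor_id`, `r_condition`) and the `filter = "post"` argument are taken from that vignette and may differ between protti versions.

```r
# Sketch only: function arguments and column names follow the protti
# dose-response workflow vignette and may differ between protti versions.
library(protti)
library(dplyr)

utils::data("rapamycin_dose_response")  # reduced rapamycin data set shipped with protti

fit <- rapamycin_dose_response %>%
  filter(eg_is_decoy == FALSE) %>%                # drop decoy precursors
  mutate(intensity_log2 = log2(fg_quantity)) %>%  # log2-transform raw quantities
  normalise(
    sample = r_file_name,
    intensity_log2 = intensity_log2,
    method = "median"
  ) %>%
  fit_drc_4p(
    sample = r_file_name,
    grouping = eg_precursor_id,
    response = normalised_intensity_log2,
    dose = r_condition,
    filter = "post"
  )

# Unfiltered fit table that can be written out and compared across machines
write.csv(fit, "rapamycin_fit_table.csv", row.names = FALSE)
```

For a fully known ground truth, `create_synthetic_data()` can generate data with predefined effects; its arguments are not shown here and should be checked against the protti documentation.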
Thanks a lot for this thorough reply Dina! I really appreciate it.

I'm going to use the protti internal data for my benchmarking tests for the moment. These good hits from your dataset are really helpful, as I can at least check that the dose-response curve makes sense for the benchmarking data.

Thanks a lot again!
Anna
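For what it's worth, one possible shape for such a benchmarking check is to compare the EC50 column of two exported fit tables against a relative tolerance. This is only a sketch; the file names are placeholders and the `eg_precursor_id` / `ec_50` column names are assumptions about the exported tables that may need adjusting.

```r
# Sketch of a tolerance-based comparison between two exported fit tables;
# file and column names are placeholders, not guaranteed protti output.
library(dplyr)

reference <- read.csv("reference_fit_table.csv")  # e.g. the table shared above
new_run   <- read.csv("new_fit_table.csv")        # same analysis run on another OS

comparison <- inner_join(
  reference, new_run,
  by = "eg_precursor_id",
  suffix = c("_ref", "_new")
) %>%
  mutate(rel_diff_ec50 = abs(ec_50_new - ec_50_ref) / ec_50_ref)

# Curves whose EC50 moved by more than 1 % between the two runs
filter(comparison, rel_diff_ec50 > 0.01)
```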
Hi Dina,

I'm re-opening this issue with a quick question about the rapamycin data set. Do you confirm that the concentrations in the data set are the ones used in the paper? I'm asking this because of the dose values I see in the data, but in Fig 1d the dose only reaches 10^4.

[screenshot of Fig 1d]

Thanks for your help!
Anna
Hi Anna,

The highest concentration in the paper is […], so the plot should be the same between paper and protti, but the data set […].
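If it helps, the doses that are actually present in the bundled data can be inspected directly. This is a quick sketch that assumes the concentration is stored in the `r_condition` column, as in the workflow vignette:

```r
# Sanity check of the doses present in the bundled data set; assumes the
# concentration is stored in the r_condition column, as in the vignette.
library(protti)

utils::data("rapamycin_dose_response")
sort(unique(rapamycin_dose_response$r_condition))
```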
Hi there,

Thanks a lot for developing the `protti` package and for putting together the R workflow in this document: https://jpquast.github.io/protti/articles/data_analysis_dose_response_workflow.html. I'm interested in the dose-response work and I am running your package (and therefore the `drc` package and its estimates) on different operating systems in production on our web platform.

One thing that I noticed is that, because of the optimisation with `optim`, the model estimates from `drc` can be slightly different, and it's hard for me to benchmark results. So I'm looking for a ground-truth dataset where I can compare results and make sure I always get what I need. Are you able to share a csv with the results that you show in the `all hits` table (top rows below), so that I can compare the actual EC50 values in your table with what I obtain without the approximation? I noticed that the estimate of the EC50 can vary quite a bit between OSs.

Also, do you have other benchmarking data for which the dose-response curves and effects are known and that users could use for benchmarking?

Thanks a lot for this!
Anna
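Regarding the `drc`/`optim` variability itself, differences of this kind are easiest to pin down by fitting a model directly with `drc` and exporting the raw coefficients for a diff between machines. A minimal sketch using the `ryegrass` example data that ships with `drc` (replace the formula and data with your own dose/response columns):

```r
library(drc)

# Four-parameter log-logistic fit on the ryegrass example data set
model <- drm(rootl ~ conc, data = ryegrass, fct = LL.4())

# EC50 (the "e" parameter of LL.4) with a delta-method confidence interval
ED(model, 50, interval = "delta")

# Export the raw coefficients so runs on different operating systems can be diffed
coefs <- coef(model)
write.csv(
  data.frame(parameter = names(coefs), estimate = unname(coefs)),
  "drc_coefficients.csv",
  row.names = FALSE
)
```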