How to understand the CrowdComp dataset #2

Open
LSparkzwz opened this issue Sep 24, 2020 · 2 comments
@LSparkzwz

I see that each row has an answer saying whether or not a relation holds, plus an approval score telling you how much to trust that answer.
The problem is that many A-B pairs have multiple rows with different answers, each with an approval score of 100%, like these highlighted rows for example:
(screenshot of highlighted rows)

These two rows contradict each other, yet both are deemed trusted.
Am I reading the dataset wrong? If not, how are you supposed to handle these cases?

@wRinnori

My research depends heavily on this dataset, but I cannot find it. Do you still have it, or do you know where it originally came from? Could you please share some information on its source? Thank you very much.

@wRinnori

I have found this dataset, and I understand it this way: the table is essentially the raw results returned by a completed crowdsourcing questionnaire. The earlier columns are all explanatory metadata about each task (regardless of the order of the experimental results); the Input Source ConceptBase and Input TargetConceptBase columns hold the two knowledge points currently being asked about, and columns AF-AL hold the statistical results.
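If that reading is right, one way to reconcile the contradictory rows from the first comment is to treat each row as a single worker's judgment (with the approval score rating the worker, not the pair) and collapse duplicate pairs by majority vote. This is a minimal sketch under that assumption; the field names (`source`, `target`, `answer`, `approval`) are illustrative, not the dataset's actual headers:

```python
from collections import Counter, defaultdict

# Hypothetical rows: each is one worker's judgment on a concept pair.
# Field names are assumptions, not the real column headers.
rows = [
    {"source": "A", "target": "B", "answer": "yes", "approval": 100},
    {"source": "A", "target": "B", "answer": "no",  "approval": 100},
    {"source": "A", "target": "B", "answer": "yes", "approval": 100},
]

def aggregate(rows):
    """Collapse per-worker judgments into one label per concept pair
    by majority vote over the individual answers."""
    votes = defaultdict(Counter)
    for r in rows:
        votes[(r["source"], r["target"])][r["answer"]] += 1
    return {pair: c.most_common(1)[0][0] for pair, c in votes.items()}

print(aggregate(rows))  # {('A', 'B'): 'yes'}
```

With this view, two 100%-approved rows that disagree are not an inconsistency in the file, just two different annotators' votes that need to be aggregated.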
