
Query context / intention #319

Open
brettasmi opened this issue Mar 17, 2022 · 5 comments

@brettasmi
Contributor

It would be useful to be able to encode a user's "intention" or "context" in the query graph. An ARA might want to use this information to rank results, and a KP might be able to return results specific to a context.

How should this parameter be encoded in the query graph? A simple parameter, e.g. context, with an array of CURIEs could be appropriate. However, qualifiers could be used on an edge-by-edge basis to guide the KP behavior.
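The two encodings floated above could be sketched as follows. Everything here is illustrative only: neither a top-level `context` parameter nor a `biolink:context_qualifier` qualifier type exists in the TRAPI/Biolink specs today, and the CURIEs are placeholders.

```python
import json

# Hypothetical option 1: a simple top-level "context" parameter holding
# an array of CURIEs that applies to the whole query.
query_with_top_level_context = {
    "message": {
        "query_graph": {
            "nodes": {
                "n0": {"categories": ["biolink:ChemicalEntity"]},
                "n1": {"categories": ["biolink:Disease"]},
            },
            "edges": {
                "e0": {
                    "subject": "n0",
                    "object": "n1",
                    "predicates": ["biolink:treats"],
                },
            },
        },
    },
    "context": ["MONDO:0004979"],  # hypothetical: rank/filter by asthma context
}

# Hypothetical option 2: a qualifier-style constraint attached to one edge,
# so a KP can apply the context on an edge-by-edge basis.
edge_with_qualifier = {
    "subject": "n0",
    "object": "n1",
    "predicates": ["biolink:treats"],
    "qualifier_constraints": [
        {
            "qualifier_set": [
                {
                    "qualifier_type_id": "biolink:context_qualifier",  # hypothetical
                    "qualifier_value": "MONDO:0004979",
                }
            ]
        }
    ],
}

print(json.dumps(query_with_top_level_context["context"]))
```

Option 1 is simpler for ARAs to honor globally; option 2 lets different edges in one query graph carry different contexts, at the cost of more verbose queries.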

Also, there are some discussions ongoing in the data modeling group that cover "context," but @mbrush confirms that they are lower priority for the time being.

Because this issue touches multiple working groups, it has been elevated to the architecture group as well. Any implementation here should take into account discussions in that group.

@edeutsch
Collaborator

edeutsch commented Apr 7, 2022

It seems that many discussions have led to the recommendation that ARAs could look for and honor an additional property Query.query_context that can contain a CURIE (or a list of CURIEs?) specifying a context for evaluating a Query and/or ranking the final results.
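A minimal sketch of what that might look like, assuming a `query_context` property that is not yet part of the TRAPI spec. Since it is undecided whether the property holds a single CURIE or a list, a defensive reader could normalize both shapes:

```python
# Hypothetical Query.query_context property (not in the TRAPI spec);
# both the single-CURIE and list-of-CURIEs shapes are shown.
query_single = {
    "message": {"query_graph": {"nodes": {}, "edges": {}}},
    "query_context": "MONDO:0004979",  # e.g. an asthma context
}

query_list = {
    "message": {"query_graph": {"nodes": {}, "edges": {}}},
    "query_context": ["MONDO:0004979", "UBERON:0002048"],  # disease + tissue
}

def get_contexts(query: dict) -> list:
    """Normalize query_context to a list of CURIEs (empty if absent)."""
    ctx = query.get("query_context")
    if ctx is None:
        return []
    return ctx if isinstance(ctx, list) else [ctx]

print(get_contexts(query_single))  # ['MONDO:0004979']
```

An ARA honoring the property could pass the normalized list into its ranking step; ARAs that do not recognize it would simply ignore the extra key.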

@edeutsch
Collaborator

edeutsch commented Apr 7, 2022

A concrete example of a specific query, the context you want to provide, and exactly how it should influence the results would be most helpful for me to understand this issue better.

@brettasmi
Contributor Author

Unfortunately, we don't have an example to discuss for the "query_context" today. While I did write an example, we had to cancel our internal imProving meeting during which we planned to discuss it, so we should probably delay that discussion if possible.

We were able to have a minor discussion asynchronously that perhaps the term "context" isn't the best choice here, as this is intended to be a concept with which to rank (at least that was my original intention). As such, something like "ranking_hint" or "ranking_prompt" may be better for clarity.

@ehinderer
Contributor

ehinderer commented Apr 21, 2022

I agree that in theory the "level" or "degree" of knowledge about a statement can help to provide relevancy to query results. The discussion in #324 gives a good overview of the kinds of contexts that a user might be interested in. For instance, if a user doesn't want to see "no brainer" answers, they might filter out "known to treat" and be more interested in "indirectly inferred to treat."

Before I can comment on how to structure the query, I need to understand how to answer those kinds of questions, conceptually. For instance, what is meant by "inferred"? Are we expecting KPs to literally have edges/statements that amount to the phrase "indirectly inferred to," or are we expecting the ARAs to reason across "known to" statements and make the "indirectly inferred to" statement at that level? If it's the former, then the query would constrain on QG edge types; if it's the latter, then the query would specify sophisticated ARA operations. Maybe it's a combination of the two?
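The two interpretations above could be contrasted as follows. Note that the predicate and workflow operation shown are placeholders for discussion, not established Biolink predicates or TRAPI workflow operations:

```python
# Interpretation 1: KPs already expose "indirectly inferred" edges, so the
# query graph simply constrains the edge predicate.
edge_constrained = {
    "subject": "n0",
    "object": "n1",
    "predicates": ["biolink:indirectly_inferred_to_treat"],  # hypothetical predicate
}

# Interpretation 2: KPs expose only "known to" edges, and the query asks the
# ARA (via a hypothetical workflow operation) to reason across them.
workflow = [
    {"id": "lookup"},
    {"id": "infer_indirect_treats"},  # hypothetical ARA operation
]

print(edge_constrained["predicates"][0])
```

Under interpretation 1 the burden falls on KP modeling; under interpretation 2 it falls on ARA reasoning, which is why the choice affects where "context" should live in the query.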

edit: I also wanted to add that if we enumerate all of the ways that context can be given, and then realize that it's not reasonable or possible to respond to all of them, we run the risk of overengineering the solution. For instance, we might find that it's not practically possible to distinguish "observed in data to" from "known to," or that the overlap is so large that the distinction is meaningless.

@brettasmi
Contributor Author

Unfortunately, this may need to wait for a few weeks or even until the Relay. Due to a number of reasons in the past month, we haven't had sufficient time to discuss this in our internal meetings and formulate an example.

If this needs to be pushed to the next version of TRAPI, that's fine; it wasn't voted on by the group as a priority in November or February anyway.

@vdancik vdancik added this to the v1.4 milestone Aug 25, 2022