
Math is hard. Simple math problems are returning the incorrect result. #337

Open
gangrif opened this issue May 11, 2021 · 1 comment
Labels: enhancement · new-device (Entirely new devices or services) · P3 (Nice to have, not working on it for now)

Comments

gangrif commented May 11, 2021

I do not know if this is simply an unimplemented feature, but as a test I asked Almond to solve some very simple math problems.

The results are obviously wrong.

- 6 * 3 → it recognized the question but answered 5 (expected 18)
- 10 - 2 → answered 7 (expected 8)
- 10 * 5 → answered 6 (expected 50)

[Screenshot from 2021-05-11 12-31-26]

gcampax (Contributor) commented May 11, 2021

Math is not a skill we have yet. It's been requested before, but we have not implemented it. The parser is picking up those numbers and invoking the random-number skill instead.
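
For context, here is a minimal sketch (purely illustrative, not part of the Genie codebase) of the kind of safe arithmetic evaluation a dedicated math skill could perform once the parser routes these expressions to it instead of to the random-number skill. The function name `eval_arithmetic` is hypothetical.

```python
# Hypothetical sketch: evaluate simple arithmetic safely by walking the
# Python AST, whitelisting only arithmetic operators (no eval()).
import ast
import operator

# Whitelisted operators; anything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def eval_arithmetic(expr: str) -> float:
    """Evaluate expressions like '6 * 3' or '10 - 2'."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return _eval(ast.parse(expr, mode="eval").body)

print(eval_arithmetic("6 * 3"))   # 18
print(eval_arithmetic("10 - 2"))  # 8
print(eval_arithmetic("10 * 5"))  # 50
```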

@gcampax gcampax transferred this issue from stanford-oval/genie-server on May 11, 2021
@gcampax gcampax added the enhancement and new-device (Entirely new devices or services) labels on May 11, 2021
@nrser nrser added the question and P3 (Nice to have, not working on it for now) labels and removed the question label on Aug 4, 2021