
binary intrinsic declarations #205

Closed
wants to merge 2 commits into from

Conversation

sn6uv

@sn6uv sn6uv commented Sep 13, 2016

Fixes binary intrinsic (e.g. llvm.maxnum) IR generation.

Note: I haven't tested e.g. llvm.convert.from.fp16.

@coveralls

Coverage Status

Coverage increased (+0.02%) to 93.457% when pulling 4a47ddc on sn6uv:master into 8a324e1 on numba:master.

@codecov-io

Current coverage is 91.93% (diff: 100%)

Merging #205 into master will increase coverage by 0.01%

@@             master       #205   diff @@
==========================================
  Files            32         32          
  Lines          4574       4585    +11   
  Methods           0          0          
  Messages          0          0          
  Branches        319        322     +3   
==========================================
+ Hits           4204       4215    +11   
  Misses          300        300          
  Partials         70         70          

Powered by Codecov. Last update 8a324e1...4a47ddc

sn6uv added a commit to sn6uv/Mathics that referenced this pull request Sep 13, 2016
@sn6uv
Author

sn6uv commented Sep 20, 2016

Any word on this? Is anyone aware of a workaround to use in the meantime?

@pitrou
Contributor

pitrou commented Sep 20, 2016

Oops, sorry. I hadn't noticed this PR.
The workaround is simply to construct the Function object yourself.
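A minimal sketch of that workaround, assuming llvmlite's pure-Python ir layer: declare the intrinsic by hand under its full mangled name (llvm.pow.f64 for doubles) rather than going through Module.declare_intrinsic. The wrapper function name here is purely illustrative.

```python
from llvmlite import ir

double = ir.DoubleType()
mod = ir.Module(name="m")

# Construct the Function object directly, spelling out the fully
# mangled intrinsic name so the verifier is satisfied.
fnty = ir.FunctionType(double, [double, double])
pow_fn = ir.Function(mod, fnty, name="llvm.pow.f64")

# Call it like any other declared function (hypothetical wrapper).
fn = ir.Function(mod, ir.FunctionType(double, [double, double]), name="wrapper")
builder = ir.IRBuilder(fn.append_basic_block(name="entry"))
builder.ret(builder.call(pow_fn, fn.args))
```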

@@ -172,10 +178,13 @@ def _error():
fnty = types.FunctionType(tys[0], tys*2)
Contributor


llvm.pow is already handled here (note you can also pass the function type fnty explicitly).

@sn6uv
Author

sn6uv commented Sep 21, 2016

I now understand what's going on, and my patch isn't quite right. The solution for me is simply to avoid the declare_intrinsic function.

However, I feel there's still something to fix here:

fnty = ir.FunctionType(double_type, [double_type, double_type])
mod.declare_intrinsic('llvm.pow', [double_type, double_type], fnty)

crashes because the name mangling isn't correct:

    mod = compile_ir(engine, llvm_ir)
  File "/home/angus/Mathics/mathics/builtin/compile/compile.py", line 38, in compile_ir
    mod.verify()
  File "/home/angus/llvmlite/llvmlite/binding/module.py", line 95, in verify
    raise RuntimeError(str(outmsg))
RuntimeError: Intrinsic name not mangled correctly for type arguments! Should be: llvm.pow.f64
double (double, double)* @llvm.pow.f64.f64

Or is the intended usage to pass exactly the type arguments needed to match the LLVM intrinsic's full mangled name? I.e.:

mod.declare_intrinsic('llvm.pow', [double_type], fnty)

In this case I can just update the docstring for declare_intrinsic.

sn6uv added a commit to sn6uv/Mathics that referenced this pull request Sep 21, 2016
@pitrou
Contributor

pitrou commented Sep 21, 2016

is the intended way to call function just exactly what's needed to match the llvm intrinsic full name?

Right. For llvm.pow, it seems fnty needn't even be passed.
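A minimal sketch of the usage pitrou describes: one type argument per mangling suffix, and for llvm.pow the function type is filled in by declare_intrinsic itself (the fnty = types.FunctionType(tys[0], tys*2) branch quoted above), so fnty can be omitted entirely.

```python
from llvmlite import ir

double = ir.DoubleType()
mod = ir.Module(name="m")

# 'llvm.pow' plus the single suffix 'f64' gives the correctly
# mangled name; the (double, double) -> double type is inferred.
pow_fn = mod.declare_intrinsic('llvm.pow', [double])
print(pow_fn.name)  # llvm.pow.f64
```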

@sn6uv
Author

sn6uv commented Sep 21, 2016

Okay, thanks @pitrou. I understand the behaviour of declare_intrinsic, but I still find it slightly counter-intuitive. Regardless, I'm closing this PR.

@sn6uv sn6uv closed this Sep 21, 2016
4 participants