Native FFI #3
+1
Definitely would eliminate a lot of the pain of users having to compile native addons (especially on Windows). 👍
+1 on this. Might be worthwhile to ask the FFI maintainers from Python or Ruby if there are any pitfalls we should be aware of when bringing this into core?
@chrisdickinson agreed. We could CC them on this thread even :D
👍 |
(y) although can we put the feature behind a flag for some time?
Is this something we should get into V8?
How would FFI interact with V8, which is heavily template-based? I've only ever seen C FFI before. Is it even possible, or would we need that C abstraction over V8 that has been suggested several times before?
The same way it interacts with libuv (through C code)?
@Qard you have it backward... V8 would be interacting with FFI. I'm not aware of any implementation pitfalls, mostly because I didn't create MRI's or JRuby's FFI implementations. I can help with runtime pitfalls somewhat... You're going to want to give users a way to make objects backed by C memory structures act as much as possible like any other JS object for allocation/deallocation purposes. For all other intents and purposes, interacting directly with the C call should carry C semantics. It should be up to the FFI user to implement more idiomatic wrappers for their chosen library; that is not an FFI implementation concern.
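A rough illustration of that "C memory backing a normal-looking JS object" idea, using the userland `ref`/`ref-struct` modules from the node-ffi ecosystem (a minimal sketch; the `Point` type is just an example):

```js
// Sketch using the userland ref/ref-struct modules (node-ffi ecosystem).
// The Point struct is a made-up example type.
const ref = require('ref');
const StructType = require('ref-struct');

// Declare a C struct layout: struct { int x; int y; }
const Point = StructType({
  x: ref.types.int,
  y: ref.types.int,
});

// Instances are backed by a Buffer of C memory but read and write like
// plain JS objects; when the Buffer is garbage-collected the memory goes
// with it, so allocation/deallocation follows normal JS semantics.
const p = new Point();
p.x = 1;               // writes directly into the underlying C memory
p.y = 2;
console.log(p.x, p.y); // 1 2
console.log(p.ref());  // a pointer (Buffer) suitable for passing to C
```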
Hi there. Do you have anything more concrete to point me to? All I see is a vague question "what are the pitfalls of incorporating a Foreign Function Interface into xxx"... and I don't know what xxx is.
See related io.js PRs: nodejs/node#1865, nodejs/node#1762, nodejs/node#1750
Probably not a useful note, but since you asked me: as the maintainer of Python CFFI, I feel that if the goal is to call C code, then the interface should strive for the same simplicity as C itself. Ideally, if a feature doesn't exist in C, it is not part of CFFI either (reading big- or little-endian numbers comes to mind).

CFFI comes with a minimal API: the user writes ffi.cdef("some C declarations") to declare functions and types (using the C syntax directly), and then uses ffi.new("foo_t *") or ffi.new("int[42]") to make a new C structure or array. You have a few operations on the "pointer" objects, like indexing and field access, and calls if they are pointers to functions. That's it for the core. This approach is completely different from Python's ctypes, even though it basically allows the same results.

(Another difference is the "API mode", a separate mode which produces C source code that must be compiled into a Python extension module; this has the huge advantage that you're working with C code at the level of C, including stuff like macros and structures-with-at-least-a-field-called-x, not at the level of the ABI. But this might not sell so well in the context of JS, for all I know.)
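To make the shape of that API concrete for this thread, here is a purely hypothetical transliteration of those CFFI calls into JavaScript. `ffi.cdef` and `ffi.new` are taken from the comment above; no Node module with this surface actually exists:

```js
// Hypothetical sketch only: a CFFI-style surface imagined in JS.
// This `ffi` module is imaginary, not node-ffi or any real package.
const ffi = require('ffi');

// Declare functions and types using C syntax directly, as CFFI does.
ffi.cdef(`
  typedef struct { int x; int y; } point_t;
  double hypot(double x, double y);  /* from libm */
`);

// Allocate C data the way CFFI's ffi.new() does.
const p = ffi.new('point_t *');   // one zero-initialized struct
const arr = ffi.new('int[42]');   // a C array of 42 ints

p.x = 3;      // field access through the pointer
arr[0] = 7;   // indexing into the C array
```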
I concur. I often get requests for features based on Ruby features which often cannot be duplicated for the general case of the FFI/C layer. Wherever I make an API decision, I default to C behavior. This is an interface to a lower-level language; it should act like that lower-level language. You're making it easier to use a C library and removing the need to write the library hooks in C. For Ruby, this means the resulting gem is more portable between engine implementations and even versions of the same engine. It also means the user, the one implementing a C library wrapper using FFI, does not have to learn the internals of Ruby. The user has the information required to make their API align with language semantics, so C semantics should be expected by users.
Am I the only one skeptical about this module here? :) I don't like putting something like FFI into core. I think it's unsafe, it will probably never be stable, and I personally consider it a workaround, not a solution. Let's be honest here: the biggest problem in native module development is not the C++ code, it's the breaking changes in V8 that ruin all the effort and turn into maintenance nightmares. Solutions like NaN provide short-term stability, but we will soon have NaN 2, because the latest changes in V8 are so drastic that they cannot simply be solved by more macros and templates in NaN. Now let's go back to FFI. I have the following concerns:
Guys, these are my 2 cents. I have nothing against having this library as a module, but I really think it shouldn't go into core.
I agree! However, I see FFI in core as a solution to this, not as a workaround as you suggest. If we can provide a stable FFI module that doesn't force users to go through the V8 API, it will allow us to foster a stable native ecosystem where we couldn't have one before. It also reduces the C++ tyranny of native modules: to people who don't like C++ (like me), writing a native module can be very daunting, and FFI instead allows you to choose whichever language you prefer, like Rust.
I haven't done any benchmarking, but I can't see it being significantly slower than V8 bindings, which are already slower than regular JavaScript calls.
Yes, it is easy, but it does look like it is even easier to make async calls with FFI than it is to make them with C++. I don't know about security, but that seems like it should be discussed.
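For context, the userland node-ffi module already exposes an async variant of each bound function that runs the C call on the libuv thread pool (a sketch following node-ffi's documented usage; the `libm`/`ceil` binding is its stock example):

```js
// Sketch of async foreign calls with the userland node-ffi module.
const ffi = require('ffi');

// Bind double ceil(double) from the math library:
// name -> [return type, [argument types]]
const libm = ffi.Library('libm', {
  ceil: ['double', ['double']],
});

console.log(libm.ceil(1.5)); // 2 (synchronous call)

// The .async form runs the C call on the libuv thread pool and
// invokes the callback with (err, result).
libm.ceil.async(1.5, (err, res) => {
  if (err) throw err;
  console.log(res); // 2
});
```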
This discussion really belongs back in io.js now:
I agree with the security concerns, as does @trevnorris, which is why we pushed for isolating the potentially unsafe operations into a separate core module.
I've moved the comments back to nodejs/node#1865, sorry.
It is significantly slower than using a native module.
I think the hardware group attempted something with it for serialport (or similar) but bailed because it was uber-slow. At least, I have a vague memory of a recent issue about it.
It will be slower than using the native API; the big wins are elsewhere. FFI in Ruby is often used as a stepping stone: a gem will be built with FFI initially, and if it proves popular, the author or someone else will write native versions for the main Ruby engines. Usually the author writes a native MRI version, then someone else writes a Java version for JRuby.
With a good JIT, an FFI can be a lot faster than any alternative. This is the case for PyPy, and I'm sure V8 and others are similar. The JIT can basically generate raw C calls; that's the best performance possible. By contrast, needing to go through some custom C API layer to access the objects is slower (because it needs a stable API that presents objects in a way that doesn't change all the time), and it cannot be optimized by the JIT (for example, if you make a temporary "integer" object, the JIT usually removes it, but if it escapes through the C API layer, it can't).
@arigo Without having FFI directly in V8, I think even a JIT won't help you. You will probably not be able to extract data from V8-allocated objects without doing it in C++. I remember there was a discussion in V8 about this; it wasn't strictly about FFI, but about V8 accessing DOM properties directly.
A call into C++ from JS in recent V8 takes a bit under 30 ns. Converting well-formed arguments is in the single-digit nanoseconds. So even a non-trivial call to a native library going through the V8 API won't add more than 50 ns of overhead. The most expensive part will probably be converting the return value(s) to JS objects, which would exist in the FFI regardless. I've written tests to verify this, and even an operation as simple as summing the values of a typed array with as few as 100 items is faster when passed to the native side.
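A sketch of the kind of micro-benchmark being described, assuming a hypothetical native addon export `sumNative` built against the V8 API; only the JS harness is shown, and the addon path and name are made up:

```js
// Micro-benchmark sketch. `sumNative` is a hypothetical native addon
// export (e.g. built with node-gyp) that sums a Float64Array in C++.
const { sumNative } = require('./build/Release/sum.node'); // hypothetical

const data = new Float64Array(100);
for (let i = 0; i < data.length; i++) data[i] = i;

function sumJS(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) total += arr[i];
  return total;
}

const N = 1e6;
let sink = 0; // accumulate results so the JIT can't eliminate the loops

let t = process.hrtime();
for (let i = 0; i < N; i++) sink += sumJS(data);
let [s, ns] = process.hrtime(t);
console.log('JS:     %d ns/call', (s * 1e9 + ns) / N);

t = process.hrtime();
for (let i = 0; i < N; i++) sink += sumNative(data);
[s, ns] = process.hrtime(t);
console.log('native: %d ns/call', (s * 1e9 + ns) / N);
console.log(sink); // keep the results observable
```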
@arigo makes an excellent point. The key point to remember is that this requires the FFI to be tightly integrated with the JIT compiler, not a separate module. The JIT needs special knowledge of the FFI built into it. This is the case for PyPy and LuaJIT. That means this feature request really belongs with the V8 project, since only they can provide a performant FFI. LuaJIT is a good example of how fast a good FFI can be: in LuaJIT, FFI data types are used routinely, because they are faster to access than Lua tables and take up less memory. (LuaJIT is also an example of how to write a fast VM in general, but that is beside the point.)
This is a really important feature for letting Node.js communicate with the OS and other frequently used APIs.
With the rise of frameworks like NW.js and Electron, the need to call into OS APIs has also increased, because that is often what most desktop apps need to do anyway.
It has been a while; has there been any work on this?
@matthewloring and I have been working on prototyping a fast dynamic FFI on top of the TurboFan JIT in V8. We're proving it out at the moment, and once we are confident it can work, we will share more details publicly.
Hi @ofrobots, any updates on the FFI you're working on?
So there is no progress still?
Would it be possible to add LuaJIT-style cdata objects?
Essentially supporting https://github.com/node-ffi/node-ffi directly (`require('ffi')`).
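For reference, binding an arbitrary shared library with node-ffi looks roughly like this (a sketch in node-ffi's documented style; `libmylib` and `add` are hypothetical names, and the library itself could be built from C, Rust, or anything else that exposes a C ABI):

```js
// Sketch of binding a user-built shared library with node-ffi.
// `libmylib` and `add` are hypothetical; substitute a real library.
const ffi = require('ffi');

// name -> [return type, [argument types]]
const mylib = ffi.Library('./libmylib', {
  add: ['int', ['int', 'int']],
});

console.log(mylib.add(2, 3)); // 5
```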
)The text was updated successfully, but these errors were encountered: