
[discuss] next steps for interface definition language? #385

Closed · rfk opened this issue Feb 15, 2021 · 39 comments

@rfk (Collaborator) commented Feb 15, 2021

When we started building this tool, the choice of WebIDL for interface definitions was deliberately short-term (ref ADR-0001), and we knew we'd need to revisit it someday. Now feels like a good time to start discussing what the next steps for interface definition might look like.

I'll try to do some scene-setting below, but let's consider this an open discussion thread for anyone interested in this topic to share whatever partially-baked thoughts they may have.


To begin, I want to note that I think WebIDL has worked out surprisingly well for our purposes, given that it was designed for a language ecosystem and set of constraints that are different to what we're doing here. It certainly helped with the goal of getting up and running quickly! But we have observed a few pain-points in practice (this list will be updated as we encounter more):

  • It's weird to have to use an empty namespace declaration like namespace example {} to declare the component's namespace.
  • It's weird to have to define top-level functions inside the namespace block like namespace example { my_function() }, while other interface members are defined as their own standalone items.
  • We sometimes need to put Rust-specific annotations in the .udl to help out with passing things by owned value vs by reference, which is a distinction that only matters internally to the Rust code.
  • Poor error reporting for invalid .udl files; the weedle crate seems (quite justifiably!) designed more for working with known-good .webidl files than for helping its users author new ones.
  • Poor error reporting when there is a mismatch between the .udl file and the Rust code; you basically get a gnarly error message from the generated Rust code, which is hard to act on if you don't have a good mental model of how it's generated.
  • Our syntax for enums/errors with associated data is awkward and hacky, essentially abusing method-definition syntax.
    • The syntax only supports named fields, while unnamed fields are typically more common and idiomatic both in Rust and in the target foreign languages.
  • It's possible to add comments in the .udl file, but they're not available to the parser so we can't do interesting things with them (such as auto-generating docs for the generated code).
  • We're struggling to come up with good syntax for importing types from other crates.

There are probably more pain-points, so please feel free to list additional ones below and I'll try to add them to this list.
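For concreteness, here's a small .udl sketch (names invented) showing several of the quirks from the list above: the namespace block that must exist and must hold the top-level functions while every other member stands alone, and the method-style syntax for enums with associated data:

namespace example {
  // Top-level functions must be declared inside the namespace block...
  u32 add(u32 a, u32 b);
};

// ...while every other interface member is its own standalone item.
dictionary Point {
  double x;
  double y;
};

// Enums with associated data reuse method-definition syntax, and only
// named fields are supported:
[Enum]
interface Shape {
  Circle(double radius);
  Rectangle(double width, double height);
};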


So what should the next steps be for the "defining the component interface" part of this tool? We've discussed a few main options in the past:

  • Continuing to use WebIDL, investing more in the developer experience by e.g. providing better error reporting.
  • Making a custom IDL that maps more closely to the underlying concepts of a UniFFI component interface.
  • Getting rid of .udl files in favour of something with Rust macros, in the style of wasm-bindgen.

Love or hate any of those suggestions? Please share below! There may also be other approaches that we haven't noticed yet, and I'd love to hear suggestions for those as well.

Whatever solution we come up with, here's a list of the things we know we'll have to factor in to the design:

  • A better story for all or most of the pain-points listed above.
  • Don't regress any of the features for .udl, in particular default arguments.
  • Workable for both small examples and larger real-world codebases where the logic is split into multiple files.
  • Backwards compatibility for existing components that use .udl, and a good story for migrating them to the new syntax.
  • The ability to do additional code generation from the interface without having to add code to UniFFI (e.g. for additional codegen tasks in mozilla-central)

┆Issue is synchronized with this Jira Task
┆Issue Number: UNIFFI-48

@rfk (Collaborator, Author) commented Feb 18, 2021

Getting rid of .udl files in favour of something with Rust macros, in the style of wasm-bindgen.

Macros are personally very interesting to me, so I've been lightly messing around with what a macro-based syntax for uniffi might look like. Some early thoughts are visible in #386 but I hope to have more considered comments on this option sometime soon.

@dmose (Member) commented Feb 23, 2021

An interesting (possibly minor) advantage of keeping WebIDL for UDL is that it forces us to align reasonably well with real WebIDL, which is required for the current geckoJS backend implementation to work.

@tarikeshaq (Contributor)

I'mma hop on this since I saw the invitation on Element 😛 I haven't been following too closely how the tool has grown over the past few months, but..

There are a few quick naive thoughts hovering in my brain regarding taking the macros approach:

  • I think it's a great idea for the scaffolding to fit naturally with wasm_bindgen and other tools; it will save us a lot of awkwardness (and also keeps us within reach of WASI in case that ever becomes viable).
  • The bindings might be a pain. The idea in #386 (A little experiment with macros instead of build scripts) of generating a .udl file seems like it might help, but we might need to be a little careful in the long run to make sure files are up to date (still better consistency than manually writing the IDL, though!).
  • I can't quite wrap my brain around how it'll work, but I wonder if there's a scenario where we get to have no IDL at all; the bindgen tool would have to somehow use the Rust code directly, yuk... uhh anywho, I'm just thinking out loud.

I don't have too much recent context to add much more (especially on how the gecko bindings went), but I'm glad we're revisiting this; I remember defining error handling felt a little forced in the IDL.
Alrighty, enough procrastination, back to school 🙃

@rfk (Collaborator, Author) commented Feb 25, 2021

@tarikeshaq oh hi! Thanks for hopping in! 😁

I can't quite wrap my brain around how it'll work, but I wonder if there's a scenario where we get to have no IDL at all; the bindgen tool would have to somehow use the Rust code directly, yuk... uhh anywho, I'm just thinking out loud

Well, without wanting to give away any spoilers, I actually have this working in a branch as an experiment and am surprisingly happy with the consumer experience so far; ref the fxa-client-features branch if you're curious about the details.

I hope to spend a few down cycles on polishing that up and presenting a more thorough proposal within the next week.

@tarikeshaq (Contributor)

Holy cow... syn is so powerful. I'm super curious how that will grow, I'll definitely be lurking 🙄

@rfk (Collaborator, Author) commented Mar 8, 2021

As hinted above, I've been messing around with procedural macros and have the start of an idea that I quite like. I've pushed it here for early visibility:

#416

I'll write up some thoughts in the present issue for consideration, but I want to stress, I'm not wedded to any of the ideas here. I've been messing around with it because it's fun, and I think it seems promising, but there may be other better ideas to pursue here as well.

So, the idea in the above draft PR is to lean very heavily into defining the interface through a procedural macro. Here's a strawfox of what you might write in a uniffi-ed crate with this new approach:

#[uniffi_macros::declare_interface]
mod math {
    pub fn add(a: u32, b: u32) -> u32 {
        a + b
    }
}

This is a procedural macro applied to an inline sub-module, and the contents of the inline sub-module directly implement the desired component interface. There's no separate declaration of the interface, you just implement it directly in Rust code inside the decorated sub-module. When you compile this as a Rust crate, it produces the same .so artifact that you'd get out of UniFFI today, with a pub extern "C" function for add that can be linked to by the foreign language bindings.
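To make that concrete, here's a rough, hypothetical sketch of the kind of symbol the macro could emit; the symbol name, checksum suffix, and body are invented for illustration, and the real scaffolding would also need to handle panics, a call-status out-parameter, and buffer-based serialization for compound types:

// Hypothetical expansion sketch - not the actual generated code.
#[no_mangle]
pub extern "C" fn math_add_c0de(a: u32, b: u32) -> u32 {
    // "c0de" stands in for the interface checksum that gets embedded
    // in symbol names as a runtime consistency check (see below).
    math::add(a, b)
}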

Thinking specifically about writing the Rust code here, this approach has several advantages:

  • You only specify the interface once, right there as part of actually implementing it.
  • You don't have to have a mental model of how the interface maps onto your Rust code, it's just the Rust code.
  • If you make an error, it can be reported inline in the Rust code via Rust's excellent error mechanisms.
  • It's a bit weird to have to put it in a submodule, but not too bad...

The putting-it-in-a-submodule thing is actually quite important, because it means that UniFFI can see the entire interface at once, just like it can today when parsing from a separate file. This lets us perform important consistency checks, and also lets us insert runtime footgun protection such as the "checksum" that we put into all the FFI symbol names.

When it comes time to generate the foreign-language bindings, there are a couple of options. The simplest would be to run uniffi-bindgen ./src/lib.rs rather than uniffi-bindgen ./src/module.udl - that is, to just directly parse the Rust file to discover the interface, using the same parsing code that we use to implement the macro. This works, and I've tried it out as part of #416. But I'll admit it feels kind of...weird? Like it doesn't feel right, but I can't quite put my finger on why.

A different approach, which I think feels better, would be to operate on the crate as a whole rather than on a specific Rust file. I'm imagining a command like cargo uniffi generate -l kotlin that would inspect the current crate, parse the Rust code to determine the component interface, then generate the corresponding Kotlin bindings. Again, I can't quite put my finger on why this feels better, but it does.

So, a summary of what a possible future could look like:

$> # 1) write the code for your crate using the uniffi macros
$> cat ./src/lib.rs

#[uniffi_macros::declare_interface]
mod math {
    pub fn add(a: u32, b: u32) -> u32 {
        a + b
    }
}

$> 
$> # 2) build the crate to produce a .so containing the FFI
$> cargo build
$> ls ./target/debug | grep "\.so$"
libexample.so
$> 
$> # 3) generate foreign-language bindings from the crate
$> cargo uniffi generate -l kotlin
$> find ./target/debug -name "*.kt"
./target/debug/uniffi/example/example.kt
$>

One downside of this approach, of course, is that it's a lot of work! We'd need to build and maintain a proc macro, we'd need to figure out how to map all the bits of UniFFI interface onto corresponding Rust syntax, we'd need to figure out how to give a good developer experience when the user tries to use a bit of Rust syntax that we don't support, and so on.

My experiments in #416 have so far been pretty reassuring in this regard. Rust's macro ecosystem, and syn in particular, are pretty powerful, and a first version of the above actually wouldn't feel too different to build from the thing we've been building already - basically, we'd just be mapping syn parse nodes onto bits of a ComponentInterface definition instead of mapping weedle parse nodes.

I plan to keep messing around with this approach in some background cycles. It's not urgent, but it's interesting. Any and all feedback is most welcome!

@jhugman (Contributor) commented Mar 8, 2021

I have been holding off on this thread. I'm not sure I'm convinced by the following argument as a deal breaker for a proc_macro approach, but I wanted to put it down anyway.

I, for one, will mourn the passing of the canonical IDL file.

The IDL file is a concise description of the API that is implemented in Rust, but used by higher-level language programmers: iOS, Android, or desktop developers. It is, or could be, a lingua franca for each of the engineering stakeholders in the system.

At its best, it is the common document with which these devs with disparate skill sets can gesture at, discuss and request changes: it is a communication medium for development teams.

A mobile engineer implementing a new feature can succinctly request a new method, new attribute, new parameter, without having to understand Rust.

A mobile engineer and a desktop engineer can design an API together then find a Rust engineer to implement it.

Without it, the Rust engineer has to mediate between the needs of Desktop, iOS, Android, and Rust, design an API that they won't use themselves, and field feature requests written for a variety of languages.

I'll add more thoughts and reasonings in another comment.

@mhammond (Member) commented Mar 9, 2021

I really like this, and will happily dance on the grave of the UDL file ;) I do see James's point, but the risk with the .udl that I see is that for the developers of the component it's not really canonical, but is for the consumers, which means the consumer tends to lose. Over time, ISTM that the rust developers are far more likely to add interesting notes, comments or warnings into the .rs file and not the udl file. We can already see this in nimbus - reset_telemetry_identifiers has subtly different docstrings in the udl vs the rust and it's not clear they have different audiences.

That, combined with good tooling, should mean the mobile consumer never needs to read the .rs file, just the generated documentation. Maybe that's wishful thinking though and assumes projects have great CI and tooling (ie, if mobile consumers still end up reading the .rs file instead of the generated code, we've screwed up - but that doesn't help those poor consumers)

If James felt really strongly about this, I guess it would be an argument for keeping the UDL generator in that PR! I'm inclined to lean against that, though.

When it comes time to generate the foreign-language bindings, there are a couple of options.

That struck me as very odd too - I was going to comment in the PR but then realized it's really just the status quo. Specifically, using Gradle to generate stuff when everything is so tightly bound here seemed somehow like an opportunity for stuff to get out of sync, but I've no helpful suggestions.

@rfk (Collaborator, Author) commented Mar 9, 2021

At its best, it is the common document with which these devs with disparate skill sets can gesture at,
discuss and request changes: it is a communication medium for development teams.

I like this framing at a high level, FWIW.

As a concrete example, one of the things I don't totally love about the way wasm-bindgen works is that you end up with a bunch of Rust code with some annotations sprinkled in amongst it, and it's hard to see what the actual resulting API surface will be. I do agree that there's value in being able to reason about the API surface as a concrete artifact, somewhat independent of the implementation details.

(Without expressing it in quite those concrete terms, I think this is partly what motivated my use of an internal crate to hide as many implementation details as possible over in mozilla/application-services#3876 - to make it as easy as possible to look at lib.rs and see just the exposed API surface, nothing else).

I'll chew on this a bit, as I feel like there must be a way for us to have our cake and eat it too here somewhere...

@jhugman (Contributor) commented Mar 9, 2021

If James felt really strongly about this, I guess it would be an argument for keeping the UDL generator in that PR!

TBC: I'm not arguing for any particular IDL, just that the workflow include a hand-written/hand-writable IDL, and that a change in the IDL file should be reflected in the Rust code (as well as in all the bindings).

This could involve, say, a proc_macro consuming an IDL file to generate structs, traits (instead of impls) and enums. A change in the IDL of an object (in UDL, an interface) would cause a trait to change, and thus a compile error in Rust to help the Rust dev find what needs doing.
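As a hedged sketch of that idea (the macro name and its expansion are invented for illustration):

// Hypothetical: a proc-macro ingests the hand-written IDL file...
uniffi_macros::include_interface!("src/math.udl");

// ...and might expand to a trait that the Rust dev must implement:
pub trait Math {
    fn add(&self, a: u32, b: u32) -> u32;
}

// The crate supplies the impl; if the IDL changes, this impl no longer
// satisfies the trait, and the compiler points at what needs doing.
pub struct MathImpl;

impl Math for MathImpl {
    fn add(&self, a: u32, b: u32) -> u32 {
        a + b
    }
}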

How strongly do I feel about this? The upper bound is no stronger than: "I'll feel sad if we don't do this". I'm also mindful that Rust devs who want to publish their crates on Carthage or Maven shouldn't find this disgusting.

@mhammond (Member) commented Mar 9, 2021

While I see your point, I remain unconvinced. IIUC, we are discussing 2 main options:

  • We can come up with sane annotations on the actual Rust implementation, from which we can generate high-quality documentation.
  • We can force the maintainer of the Rust component to make the same, redundant change in 2 places, where one of those places is entirely artificial and targeted at non-Rust devs, because the generated docs are too simplistic but the canonical Rust code is too complex.

To put it another way - I think artificially forcing a UDL does a disservice to both the Rust maintainer ("even though we could generate this, it's for your own good that you must hand-write this twice") and the non-Rust consumer ("we've designed this simplistic, artificial language just for you, because you wouldn't understand the alternative")

@rfk (Collaborator, Author) commented Aug 9, 2021

Cross-referencing for context, we had a team work-week last week and there was a flurry of activity on the experimental macro-based PR over in #416. @badboy raised an important concern here: for an API surface of any significant complexity, it can get pretty unwieldy to have the interface and its underlying Rust code all defined in a single place in a single file. If we do want to pursue the macro route, it's clear we have some more design work to do on the details of the syntax there.

@rfk (Collaborator, Author) commented Aug 9, 2021

I've updated the original issue comment with a more thorough list of the current pain-points of our .udl file syntax. Please continue to suggest others as we encounter them and I'll try to keep the list up-to-date!

@bendk (Contributor) commented Aug 9, 2021

Mark kind of hinted at this on the call last week, but what if we combined the macro approach with a separate definition file? The file would just be 1 big macro call that contained a copy of the enums, function signatures, and trait definitions, whatever was needed.
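A hedged sketch of what such a definition file might look like (the macro name is invented; this is not an existing UniFFI API):

// interface.rs - the whole definition file is one big macro call.
uniffi::declare_interface! {
    pub fn add(a: u32, b: u32) -> u32;

    pub enum Direction {
        Up,
        Down,
    }

    pub trait Logger {
        fn log(&self, message: String);
    }
}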

@badboy (Member) commented Aug 9, 2021

Mark kind of hinted at this on the call last week, but what if we combined the macro approach with a separate definition file? The file would just be 1 big macro call that contained a copy of the enums, function signatures, and trait definitions, whatever was needed.

What would be the advantage of that over the current UDL?

  • I understand we're reaching the limitations of the WebIDL dialect there, but I fear we might have to come up with equally "hacky" approaches in Rust-like code (not every Rust API translates neatly to other languages; think default values, for example).
  • I still need the implementation of the code somewhere. If one side looks like Rust but is really just a subset of Rust, this might be more confusing to developers than a completely different thing.
  • It means we'd need to add a significant amount of code without gaining some of the much-wanted benefits.

Now don't get me wrong, I might also be more comfortable writing sort-of-Rust than I am writing UDL, so this shouldn't dismiss that idea right away.

@jplatte (Collaborator) commented Apr 13, 2022

I have an idea for an alternative in the vein of #[uniffi_macros::declare_interface] mod foo {}, which I agree with others does not scale well:

We could copy parts of what SQLx does for its "offline mode" and have more fine-grained macros (applied to individual types, functions, and impl blocks) that analyze the item they are added to and serialize the necessary metadata to a file.

Then a tool like cargo uniffi generate can go through all of those files¹ and generate the bindings based on that.

I can dedicate some time to working on this if you think it is worth exploring.

¹ after getting rid of outdated ones by clearing the directory for them (e.g. target/uniffi) and triggering a rebuild
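A minimal sketch of the idea, with invented names and a hypothetical serde-based metadata format (a real design would need to cover types, impl blocks, and much more):

use std::{fs, path::Path};

// Hypothetical metadata record emitted by a fine-grained macro.
#[derive(serde::Serialize)]
struct FnMetadata {
    name: String,
    inputs: Vec<(String, String)>, // (argument name, type name)
    output: Option<String>,
}

// Inside the proc-macro, after analyzing the item with syn, write the
// metadata where a later `cargo uniffi generate` run can find it.
fn write_metadata(meta: &FnMetadata) -> std::io::Result<()> {
    let dir = Path::new("target/uniffi");
    fs::create_dir_all(dir)?;
    let json = serde_json::to_string_pretty(meta).expect("serializable");
    fs::write(dir.join(format!("fn-{}.json", meta.name)), json)
}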

@bendk (Contributor) commented Apr 13, 2022

That's a really interesting suggestion. When you say "serialize the necessary metadata to a file", does that mean the macros would write to a file in the target directory?

One idea we've been floating around is using syn to parse the parts of the source tree it needs. You would point uniffi at the "root" FFI module, then if that module had a use statement, it would parse that source file and continue on. It seems like a similar system in that uniffi can find your type definitions regardless of what file they live in. What advantages/disadvantages do you see?

One issue is figuring out where the type definition is based on the use statement. The initial work might have some limitations, like it can't support inline modules or type aliases.
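For illustration, a rough sketch of that source-tree walk (the helper is invented; it follows mod declarations rather than use statements, per the correction below, and ignores #[path] attributes and cfg):

use std::path::Path;

// Recursively collect items from a root file, following `mod foo;`
// declarations to foo.rs or foo/mod.rs next to the current file.
fn collect_items(path: &Path, items: &mut Vec<syn::Item>) -> std::io::Result<()> {
    let src = std::fs::read_to_string(path)?;
    let file = syn::parse_file(&src).expect("valid Rust source");
    let dir = path.parent().unwrap_or(Path::new("."));
    for item in file.items {
        if let syn::Item::Mod(m) = &item {
            if m.content.is_none() {
                let name = m.ident.to_string();
                let sibling = dir.join(format!("{name}.rs"));
                let nested = dir.join(&name).join("mod.rs");
                let child = if sibling.exists() { sibling } else { nested };
                collect_items(&child, items)?;
                continue;
            }
        }
        items.push(item);
    }
    Ok(())
}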

@jplatte (Collaborator) commented Apr 13, 2022

does that mean the macros would write to a file in the target directory?

Many files, since multiple macros can run in parallel. We'd have to come up with a naming scheme, ideally something like target/uniffi/{item_type}-{item_name}.json. Could also include a hash, but since the item must have a unique name in the generated code too, that's probably not needed.

One idea we've been floating around is using syn to parse the parts of the source tree it needs. You would point uniffi at the "root" FFI module, then if that module had a use statement, it would parse that source file and continue on. It seems like a similar system in that uniffi can find your type definitions regardless of what file they live in. What advantages/disadvantages do you see?

I'll assume you mean mod statements there...

Advantages (of parsing the whole crate):

  • The module structure could easily be taken into account for the bindings

Disadvantages:

  • Doesn't work with conditional compilation (reimplementing #[cfg] does not seem feasible)
  • You still need (no-op?) attribute macros or some alternative (load-bearing comments? 😱) for things like deciding whether a struct should be transparent or opaque (or whatever terminology we want to use for what is currently interface vs dictionary)
  • You only get feedback on whether the interface definition is valid when running a special command, no integration with cargo check
    • This is true to a smaller extent for my proposed solution as well, though: you can check that functions don't return references and such, but you can't check whether a type used in a function is itself annotated with an attribute such that it appears in the generated bindings

@mhammond (Member)

See also #416 (comment) and #416 (comment) and related discussion about how wasm-bindgen takes a somewhat similar approach but jumps through many hoops to embed this information in the binary itself.

@jplatte (Collaborator) commented Apr 14, 2022

After reading those comments as well as https://github.com/mozilla/uniffi-rs/blob/main/docs/diplomat-and-macros.md, I'm not convinced the same issues will arise from the approach I have suggested, so I will start working on a prototype soon 🙂

@bendk (Contributor) commented Apr 14, 2022

Doesn't work with conditional compilation (reimplementing #[cfg] does not seem feasible)

This seems like a very real issue to me. The other ones I mentioned have workarounds, but I can't really think of a way for this one. Do you currently use #[cfg] blocks for your FFI code?

See also #416 (comment) and #416 (comment) and related discussion about how wasm-bindgen takes a somewhat similar approach but jumps through many hoops to embed this information in the binary itself.

How many hoops are required to embed this in the binary?

It seems like we could use basically the same system that @jplatte suggests, but instead of writing to target/uniffi/Record-MyRecord.json, we would create a static C string with the linker symbol uniffi_interface_record_MyRecord. Then, instead of reading the JSON file, we would open the dylib and get the string from that.

The upside is that the interface metadata is always attached to the library. There's only 1 file to keep track of, and there's no need to clean and rebuild.
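A hedged sketch of the embedding side, using the symbol name from above (the JSON payload and exact representation are invented; a real version would need a C-compatible layout and a reader that digs the symbol out of the dylib):

// Embed the metadata as a NUL-terminated byte string under a known
// linker symbol, so tooling can locate it in the compiled library.
#[allow(non_upper_case_globals)]
#[no_mangle]
pub static uniffi_interface_record_MyRecord: [u8; 36] =
    *b"{\"type\":\"record\",\"name\":\"MyRecord\"}\0";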

Food for thought as you work on the prototype. Feel free to go with your original plan of writing to the JSON file. It seems like a good place to start and we can iterate later.

@jplatte (Collaborator) commented May 17, 2022

Update: I've been working on this for a while (using the JSON approach). So far I've only gotten to the point of generating Rust scaffolding code for an argument-less, unit-returning function from a #[uniffi::export] + CLI call (which one could probably do in a day if that was the sole goal), but I'm slowly wrapping my head around all of the moving parts.

One of those moving parts was how the scaffolding generation would be integrated into the Cargo build process, since you can't generate the scaffolding ahead of building the crate, because the proc-macros are called as part of that build process. The solution I settled on, and discussed with @badboy yesterday, is a separate (.gitignored) <cratename>-ffi crate generated by uniffi, which will depend on the main crate and contain the scaffolding code.

We also discussed that it would be ideal for the FFI crate to build with a regular cargo build, but this runs into issues with the separate JSON files, since you need to make sure that none of them are outdated (or too new?). That is impossible in the face of multiple concurrent processes running the proc-macros, such as when rustc and rust-analyzer process the same project at once.

After thinking about this problem for a bit, I concluded that a binary-embedding approach like the one suggested by @bendk is probably our best bet. I also learned that the inventory crate, which does this in a very convenient way, has been archived on GitHub due to being broken as of Rust 1.54. Fortunately, the compiler bug that broke it has been fixed recently, so it should work again on nightly, soon on beta, and in stable in ~6 weeks.

To avoid spending lots of time on finding the right library files and extracting symbols from them, I will attempt to implement the metadata collection using inventory next. This actually avoids JSON serialization, but there is also a small downside, pointed out by @badboy, which could mean that the best solution eventually is to go through JSON serialization and manual library symbol iteration: with the inventory approach, the -ffi crate has to depend on the crate containing the Rust implementation twice, once as a regular dependency and once as a build-dependency (so the build script can generate the main FFI code). This means that when you build for a target triple different from the host triple, you compile all of that code twice, which is first of all just slow, but could also actually influence the FFI metadata. Effectively, #[cfg(target_arch = "foo")] on a #[uniffi::export]ed item, or #[cfg_attr(target_arch = "foo", uniffi(some_attribute))], is going to break things. However, this is not a regression, since the current UDL file scheme also doesn't allow target-specific bits.
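For reference, a minimal sketch of how inventory-based collection might look (the types and payloads are invented, not the actual UniFFI design):

// A hypothetical registry entry describing one exported item.
pub struct ExportedItem {
    pub name: &'static str,
    pub metadata_json: &'static str,
}

inventory::collect!(ExportedItem);

// Each macro expansion would submit one entry, gathered at link time:
inventory::submit! {
    ExportedItem {
        name: "add",
        metadata_json: r#"{"inputs":["u32","u32"],"output":"u32"}"#,
    }
}

// The generated -ffi crate's build script can then iterate them all:
pub fn dump_metadata() {
    for item in inventory::iter::<ExportedItem> {
        println!("{}: {}", item.name, item.metadata_json);
    }
}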

@arg0d (Contributor) commented Mar 7, 2023

It sounds like the macro vs .udl discussion is not going to be resolved anytime soon. In the meantime, maybe it would be worth optimizing the current .udl solution? It sounds to me like the majority of issues associated with .udl could be fixed by simply switching from WebIDL to a custom IDL. Hand-writing a basic top-down parser would take a week, maybe two. Using a parser generator also seems like a sound option: https://github.com/lalrpop/lalrpop.

@jplatte (Collaborator) commented Mar 7, 2023

It sounds like the macro vs .udl discussion is not going to be resolved anytime soon.

Quite the opposite! It's pretty clear that macros are the way forward: initial support for bridging async functions landed very recently without UDL support, and overall I get the feeling that at least half the PRs are about improving the proc-macro frontend.

The macros are also documented (albeit with a big warning about being experimental, maybe that can be toned down a little bit now). I'm not sure when I will submit the next PR to make progress on the checklist, but if you want to help, I'd be happy to write instructions for sub-tasks and review PRs.

@bendk (Contributor) commented Mar 7, 2023

This is my feeling as well. I think the macros are usable now for many use-cases and we will keep improving their usability. My goal for our Mozilla code is to switch to a proc-macro/UDL combination in the next few months and only proc-macros in maybe a year.

There was a time when I would have been all for writing a custom UDL parser, but now I feel like it's better to focus on the proc-macros. But if there's a particular reason that you want to explore custom UDL parsing, I'm open to discussing it.

@arg0d (Contributor) commented Mar 8, 2023

I see, this is very cool to hear! Not having to duplicate API definitions in both Rust and .udl file would be really awesome. But still, it looks like it will take a while to get to that point. My main concern right now is adding documentation to generated bindings. The checklist for proc-macros does not mention documentation at all.

@jplatte (Collaborator) commented Mar 8, 2023

I added that. It should actually be pretty straightforward: before the compiler expands a derive macro, it takes the /// docs and transforms them into #[doc] attributes (which have the same meaning). So all the macro has to do is look for those attributes. I think it's the same for attribute macros. To be a little more specific, given the following snippet:

/// Client for the Foo Bar API.
///
/// Lorem ipsum dolor sit amet, [...]
#[derive(uniffi::Object)]
struct FooBarClient {
    // ...
}

the proc-macro input will be

#[doc = "Client for the Foo Bar API."]
#[doc = ""]
#[doc = "Lorem ipsum dolor sit amet, [...]"]
struct FooBarClient {
    // ...
}
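So the derive could collect the docs with something like this (a sketch assuming syn 2.x, not actual UniFFI code):

use syn::{Attribute, Expr, ExprLit, Lit, Meta};

// Gather the string contents of all `#[doc = "..."]` attributes.
fn extract_docs(attrs: &[Attribute]) -> Vec<String> {
    attrs
        .iter()
        .filter(|attr| attr.path().is_ident("doc"))
        .filter_map(|attr| match &attr.meta {
            Meta::NameValue(nv) => match &nv.value {
                Expr::Lit(ExprLit { lit: Lit::Str(s), .. }) => Some(s.value()),
                _ => None,
            },
            _ => None,
        })
        .collect()
}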

@praveenperera (Contributor)

I've been using the macros and they've been great. The only problem for me is that I can't use types defined via the macros in the .udl file.

I still have to use the .udl file for defining callbacks. But then I end up having to write a bunch of my types in the udl file so they can be used in the callback interface definition.

Is this something that might be supported in the future? Or is it just a growing pain until more things (ex: callback interfaces) are supported in proc macros?

@jplatte (Collaborator) commented Apr 25, 2023

I think it will be much, much easier to add support for the remaining things such as callback interfaces.

@praveenperera (Contributor) commented Apr 25, 2023

Makes sense, the proc macros are so much nicer to use.

@bendk (Contributor) commented Apr 25, 2023

I'm hoping to work on tying up the remaining loose ends starting in a couple weeks. I think that's:

  • Callback interfaces
  • Constructors
  • Global stuff you get from an empty UDL file.

Are there any other features that need implementing?

Of course, if @jplatte or anyone else wants to implement those things, please feel free to.

@praveenperera (Contributor)

@bendk Personally the only things in my udl are the constructors, callback interfaces, and the types needed by the callback interfaces.

So your list covers all my (current) use cases.

@jplatte (Collaborator) commented Apr 25, 2023

One more thing we currently use UDL for is declaring a few simple types from external crates. Luckily these are all crates we can modify to include (optional) UniFFI derives; it's just that the single-crate restriction needs to be lifted (it's still in place, right?).

Anyways I'd love to do constructors (just need to carve out some time for it), and would love if I didn't have to do callback interfaces 😄

@mhammond (Member)

After #1457 we can add traits to that list.

and would love if I didn't have to do callback interfaces 😄

I've still got a longer-term idea that support for traits can deprecate callback interfaces. The main difference is that currently callback interfaces generate both the Rust trait code and a struct implementing it, so we could kill the former and conditionally do the latter. There's some devil in the detail, but I think it's achievable and would be a good outcome.
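Roughly, the trait-based direction could look like this (a sketch with assumed attribute syntax; the details of trait support were still settling at the time):

// The foreign side implements this trait; Rust receives it as a
// trait object, making a separate "callback interface" unnecessary.
#[uniffi::export]
pub trait Logger: Send + Sync {
    fn log(&self, message: String);
}

#[uniffi::export]
pub fn do_work(logger: std::sync::Arc<dyn Logger>) {
    logger.log("working...".to_string());
}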

@jplatte (Collaborator) commented Apr 27, 2023

Object constructors are now supported (#1518, #1520)! Unlike the previous solution, they now have to return Arc<Self> (or Result<Arc<Self>, E>), but I'm sure we can somehow lift that restriction and allow Arc creation in the generated scaffolding code, if anybody wants that.
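For anyone following along, a small sketch of what a constructor looks like under that rule (the type and field names are invented):

use std::sync::Arc;

#[derive(uniffi::Object)]
pub struct Counter {
    start: u32,
}

#[uniffi::export]
impl Counter {
    // Constructors currently have to return Arc<Self> (or a Result
    // wrapping it), as described above.
    #[uniffi::constructor]
    pub fn new(start: u32) -> Arc<Self> {
        Arc::new(Self { start })
    }
}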

@praveenperera (Contributor)

@jplatte if you're taking suggestions on what to support next in the proc macros, I would love support for callbacks 🙏

@jplatte (Collaborator) commented Jun 4, 2023

I'm not, but you're in luck: there is already a PR for that, #1573 (I'll review it in the coming week).

@praveenperera (Contributor)

Awesome, thanks @bendk!

@jplatte (Collaborator) commented Jun 15, 2023

Should we close this in favor of #1257?

@badboy closed this as completed Jun 15, 2023