Move from manual javac to java_common.compile #164
I think we will have to re-architect the rules or have one output. We at least have to combine them to make the _deploy.jar. But if we emit two jars then targets that consume this one will have to add both jars to their inputs (since the whole idea is to use this when there are circular dependencies). We already have jar creation code; we can just make it work for jar concatenation without unjarring first. It's slightly more IO, but I bet it's not at all the slow part compared to scalac. |
the problem with joining the jars is that currently this happens in the worker and the java_common will happen at skylark level.
AFAIU we have three options:
1. Do the jar concatenation in bash on skylark
2. Change the worker to be able to get two types of requests and add a concat-jars request type
3. Call JarCreator (or a variant) plainly without a worker after java_common and start a new jvm.
I think option 1 sounds least expensive performance wise since IINM this problem isn't jvm related but just archive manipulation.
WDYT? |
The problem with bash is that you have to jump through all kinds of hoops to make it a reproducible jar (same-order contents, no timestamps). It would be nice to have a C++ implementation of jar in bazel that did the right thing.
|
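For what it's worth, Bazel ships a C++ singlejar that already normalizes timestamps. A rough Skylark sketch of leaning on it for the merge — the rule and attribute names are hypothetical, and the exact ctx/attr API shifts between Bazel versions:

```python
# Hypothetical merge rule: concatenate two jars deterministically with
# Bazel's singlejar instead of ad-hoc bash.
def _merge_jars_impl(ctx):
    out = ctx.outputs.jar
    ctx.action(  # newer Bazel spells this ctx.actions.run
        inputs = [ctx.file.scala_jar, ctx.file.java_jar],
        outputs = [out],
        executable = ctx.executable._singlejar,
        arguments = [
            "--normalize",  # zero out timestamps so the output is reproducible
            "--exclude_build_data",
            "--output", out.path,
            "--sources", ctx.file.scala_jar.path, ctx.file.java_jar.path,
        ],
        mnemonic = "MergeJars",
    )

merge_jars = rule(
    implementation = _merge_jars_impl,
    attrs = {
        "scala_jar": attr.label(allow_files = True, single_file = True),
        "java_jar": attr.label(allow_files = True, single_file = True),
        "_singlejar": attr.label(
            default = Label("@bazel_tools//tools/jdk:singlejar"),
            executable = True,
            cfg = "host",
        ),
    },
    outputs = {"jar": "%{name}.jar"},
)
```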
Right...
So wdyt about having the worker support two types of commands?
|
I think that could be fine, since it is still related to compiling scala. So we could add some optional args to the scalac worker to just act as a jar concatenator. |
Ok, any tips on how I should do that? I've never worked with protobuf, and the current worker protocol feels a bit implicit to me.
|
You can look at the current invocation code. We have abstracted all the protobuf out: you get a list of arguments that we then decode to pass to scalac. You will just need to add a new argument to our key-value approach.
|
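On the rule side that could be as small as one more entry in the key-value argument file the worker already decodes; the key names below are made up for illustration:

```python
# Hypothetical sketch: add a request-type key to the argument file so the
# same persistent worker can either compile or just concatenate jars.
def _concat_worker_args(scala_jar, java_jar, output_jar):
    return "\n".join([
        "RequestType: concat_jars",  # absent or "compile" keeps today's behavior
        "Jars: %s,%s" % (scala_jar.path, java_jar.path),
        "JarOutput: %s" % output_jar.path,
    ])
```

The worker would then dispatch on the request type before touching scalac at all.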
How far along are you with this approach? Basically, I really need the java-scala sandwich, so I was contemplating splitting this work in half. One step just uses java_common.create_provider on the current scalac (invoking javac) output and emits a java provider as well as/instead of the current scala provider. It is this step that I am tempted to work on (basically immediately, unless you are well-progressed on this part). The other step is moving the javac use inside scalac to be java_common.compile, with the merge step proposed here. |
I don't think anyone has started, but I think everything in bazel 0.4.5 is ready to do what we need. Calling javac isn't the main issue, if you ask me; the current situation is not bad there, in my view. The bigger issue is making scala libraries that java libraries can depend on, which, again, 0.4.5 should support. |
I was going to start today but you can take it. No problem.
|
What do you all think between scala_library emitting both a scala provider and a java provider (and then preferring the scala provider when available) versus just emitting a java provider? My main thought is that the latter is simpler overall; however, java_common.create_provider only has two inputs: compile and runtime jars. But then the java provider can present transitive_deps, transitive_exports, transitive_runtime_deps, and transitive_source_jars, so we need to trace how it builds all that information.
As a first pass, my plan was to do the compile based on the ijar from all dependencies. Then the provider is created with compile jars = the generated ijar (or real jar for macros) plus transitive_exports only, and runtime jars = the real jar plus the runtime jars of all deps.
PS: I may also dig through the Skylark/Bazel sources to get details on where it is actually storing this information from java_library. A cursory look indicates that some experimentation is likely needed. For example, transitive_exports looks like it is actually tracing up through rules that define "exports" as an attr. (See JavaExportsProvider.) Thus, it may be possible to skip adding transitive_exports to compile jars.... |
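A sketch of that first pass, assuming the two-input java_common.create_provider described above; the parameter and field names here are best guesses, not the confirmed API:

```python
# First-pass sketch: compile jars = our ijar (real jar for macros) plus the
# compile jars contributed by exports; runtime jars = our real jar plus the
# transitive runtime jars of all deps.
def _first_pass_java_provider(ijar, full_jar, exports, deps):
    compile_jars = [ijar]
    for e in exports:
        compile_jars += list(e.java.compile_jars)  # assumed field access
    runtime_jars = [full_jar]
    for d in deps:
        runtime_jars += list(d.java.transitive_runtime_deps)
    return java_common.create_provider(
        compile_time_jars = compile_jars,
        runtime_jars = runtime_jars,
    )
```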
What do we need in a scala provider that isn't in the java provider? It seems to me that if we can just use the java provider we are dogfooding more, since all builds will exercise the same code path, not just java -> scala dependencies.
I *assumed* that for the provider we pass only our direct compile and runtime jars and internally it builds up the transitive deps. The exports are an issue, though, it sounds like. We really need to have exports support. source_jars don't make quite as much sense, since clearly we are not giving java sources, but we can create a scala source jar (though I don't know why one needs those).
Anyway, as a first pass, scala and java providers can be fine, I think, if we can't support exports. If we can't do what we want to do, we should probably file issues/send PRs to fix sooner rather than later. |
Won't we lose `extra_information` if we only use the java provider?
|
I don't think so. The provider is one item in the return struct. See here: |
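In sketch form (the surrounding names are placeholders, and older Bazel spelled depset as set):

```python
# Sketch: extra_information and the scala provider keep riding along in the
# same return struct; the JavaProvider is simply one more entry.
def _scala_library_impl(ctx):
    # ... compilation actions that build the jars, scala_provider,
    # java_provider, and extra_information are elided ...
    return struct(
        files = depset([ctx.outputs.jar]),
        scala = scala_provider,          # existing scala provider, unchanged
        providers = [java_provider],     # new: the JavaProvider rides along
        extra_information = extra_information,
    )
```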
We will need to retain the scala provider for now, because java_common.create_provider (and accessing target[java_common.provider]) returns a JavaProvider, which is essentially a black box that only internal rules and actions can make full use of. To custom Skylark rules, it only exposes some default fields and transitive_runtime_deps. For reference, .java actually returns a JavaSkylarkApiProvider, which can only be constructed by native rules (and they usually do so in parallel with a JavaProvider).
I think JavaProvider is still under some churn, since it contains a set of providers that are a subset of the providers in JavaSkylarkApiProvider. For example, exports are in JavaSkylarkApiProvider but not JavaProvider, so we will need to do the previously discussed faking of them. It would be nice if JavaProvider took a superset of the data JavaSkylarkApiProvider took, and then any rule (built-in or Skylark) would automatically add a java=JavaSkylarkApiProvider to any other rule that has a JavaProvider declared provider.
PS: Amusingly, it looks like java.transitive_deps actually already includes java.transitive_exports, even though java_library claims an export isn't necessarily included as a dep for that library's compilation.
PPS: Sorry this took longer than projected. Beyond standard work 'distractions', I spent a bit of time on the jdk issue and doing a post-mortem. I also wanted to spend some time sort-of-deeply looking into the bazel internals to see what certain fields were actually doing. |
Upon deeper analysis of the java rules implementation in Bazel (as well as its tests), we will be able to 'sandwich' for a deps|runtime_deps attribute in java_library, etc., but not for exports. This is because deps is configured to reject a dep only if both the dep rule type is not in a whitelist and the dep does not provide a JavaProvider. However, exports is configured to simply reject any dep not in the whitelist.
Also, the current use of transitive_deps for java in collect_jars is technically too permissive, since it is capturing that target (good), that target's transitive exports (good), and that target's transitive dependencies (bad). Unfortunately, transitive_exports just returns a set of labels (which are slightly fancy strings). It might be possible to get the transitive_exports as artifacts by taking transitive_deps and filtering out all artifacts whose owner (generating label) is not in transitive_exports. However, it is supposedly possible for owner to be 'blank', though maybe it is only blank when the dep is just a file. Fortunately (as an amusing interaction of restrictions), exports is further restricted to ONLY be rule labels (whereas deps could be files like a deploy jar), so it is possible that one can assume anything in transitive_deps that is coming from an export must have been from a generating rule!
PS: Unsure where else to have this discussion, but it is somewhat off topic. I am mostly trying to point out relevant issues for you all to consider/debate while I experiment with this.... Also, note that this is all subject to change as they refine the proposal in the sandwich blog. create_provider isn't documented yet, so it is possible there will still be some iteration. (Just looking at the internals indicates there is a lot of potential for simplification/removal of redundancies....) |
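That filtering trick as a sketch, assuming .java's transitive_deps yields artifacts with an owner label and transitive_exports yields labels, per the analysis above:

```python
# Sketch: recover export jars as artifacts by keeping only the entries of
# transitive_deps whose generating label appears in transitive_exports.
# Relies on exports being restricted to rule labels, so every exported jar
# should have a non-blank owner.
def _export_jars(target):
    export_labels = list(target.java.transitive_exports)
    return [jar for jar in target.java.transitive_deps
            if jar.owner and jar.owner in export_labels]
```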
First of all, I think this is a relevant place to have this discussion. I think that maybe issue #970 in bazel itself will be an even more relevant place, since Iirina, who is working on that issue, is monitoring it and needs feedback on what we are missing to continue forward.
I think we should retain the scala provider for now and see if we can get the sandwich to work with the current API. Following that, we should give her the feedback of what we need and why.
|
Thanks for the update. Indeed, 970 and raising the issues we find seem important. Looks like we are not quite as close as I had hoped to a solution here.
|
I think the Bazel people are strongly committed to closing it, so once we have concrete issues I'm sure they will be handled pretty fast.
|
On the plus side, after typing that out and doing the analysis both here and there, I have a much stronger idea for how to execute the scala build rules using JavaProvider.... |
👍🏾
Correct me if I'm wrong, but I think our game plan is:
1. Change the rules to return both java and scala providers.
2. Move from manual javac to java_common.compile and merge the previous provider with the new one (sketched below).
3. Try to see what is left to drop the scala provider in favor of only the java provider.
Wdyt?
|
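For step 2, later Bazel versions grew java_common.merge for exactly this combination; it did not exist yet when this thread was written, so treat the call as an assumption about where the API would land:

```python
# Sketch with a later java_common API: fold the provider covering the scalac
# output together with the one returned by java_common.compile.
merged_provider = java_common.merge([scala_half_provider, java_half_provider])
```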
That sounds about right. The listing of items was targeted mainly at 3, although you will not yet be able to say a scala_library is in the `exports` of a java_library. |
Sounds like a plan.
|
This is going well. As part of it, I have been trying to reforge the scala provider to look more like the java provider (partly as a projection towards the future and partly so I/you don't have to maintain a mental distinction). Basically, the attribute has 3 fields (see the sketch after this comment).
I am going to present (without proof, but it can be supplied if needed) that this structure very closely mimics how native.java_library uses its JavaProvider. (The fields in JavaSkylarkApiProvider are a red herring, only made available to Skylark but not used by native rules.) A key takeaway is that java_library does not distinguish between jars from itself and jars from exports when sending stuff to the next rule.
The transition has been fine except in twitter_scrooge.bzl: since the new scala provider (and eventually JavaProvider) just has compile_jars, it doesn't differentiate between the ijar and the jars added to compile from exports. Thus, the compile_jars from a gen_scrooge_srcjar now would include the ijars from its direct dependencies. Is this OK? (I am not totally familiar with what twitter_scrooge is doing and thus unclear on what makes sense to expose to its dependents.) Note that runtime jar propagation will not change at all and still acts perfectly fine. There are a couple of workarounds if the old behavior is needed (up to and including adding a separate field to the attr just for this case). Although, the old behavior did seem odd, where it was both exporting its dependencies at all and then further only exporting parts (the dependencies' exports). |
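A sketch of the shape being described; the field names match what later landed in rules_scala, but treat them here as an assumption:

```python
# Illustrative scala provider mirroring how java_library feeds JavaProvider:
# one flat set of jars dependents compile against (self + exports, ijar'd
# where possible) and one flat set of jars needed at runtime.
scala_provider = struct(
    outputs = struct(ijar = ijar, class_jar = class_jar),
    compile_jars = compile_jars,                        # classpath for dependents
    transitive_runtime_jars = transitive_runtime_jars,  # runtime/deploy closure
)
```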
As a quick amendment, I did some digging and found that scrooge_scala_srcjar is immediately used in the macro scrooge_scala_library. The scrooge_scala_library was then relying on the original deps and also the result of scrooge_scala_srcjar. In fact, the changes would allow us to drop the dependency on deps in the scala_library, since scrooge_scala_srcjar would already include them.
Regardless, after further thinking (before the scrooge_scala_library discovery), there are many ways to restore the old behavior if desired. The fastest is to put exports into the scala provider: it would just be the union of all compile_jars from exports (and not be used by any other rules). A 'better' solution might be to exploit aspects to 'cheat' and allow scrooge_scala_srcjar to grab the exports from its direct dependencies, as sketched below. This would avoid needing a one-off field in the scala provider (and allow a java_library to work/provide exports when in deps as well). |
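The aspect 'cheat' could look roughly like this (everything here is hypothetical):

```python
# Hypothetical aspect: surface each direct dependency's exports so
# scrooge_scala_srcjar can reach them without a one-off provider field.
def _collect_exports_impl(target, ctx):
    exports = []
    if hasattr(ctx.rule.attr, "exports"):
        exports = [e.label for e in ctx.rule.attr.exports]
    return struct(direct_exports = exports)

collect_exports = aspect(implementation = _collect_exports_impl)
```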
Scrooge is really important (critical, currently) for us at Stripe. That said, I am very happy to make it more idiomatic with the java (and scala) rules in bazel if we can. Many of the current things are probably just implementation details; as long as the tests pass, we are probably fine. Our use case is generally to make a thrift_library, which is just a bundle of thrifts with some metadata, and then use scrooge_scala_library to generate the compiled code. In fact, srcjar support was added to the scala rules explicitly for scrooge. |
@johnynek, as long as you are using scrooge_scala_library and not relying on peculiar behaviors of gen_scrooge_scala_srcjar specifically, you should see no difference, as the scala_library inside scrooge_scala_library was already masking all the weird things gen_scrooge_scala_srcjar was doing with the scala provider. |
Please see sdtwigg@9c8f720. That commit works with `bazel test -- ... -test_expect_failure/..` and `bash test_run.sh`. That said, if any of you want to test drive that commit on any projects to see if it breaks anything, go right ahead (as that may help accelerate my debugging). Keep in mind that with that commit a scala_library can be a dependency for deps and runtime_deps of java_library, but I have https://bazel-review.googlesource.com/#/c/9671/ out to get it to work on everything else. |
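Concretely, the sandwich that commit enables looks like this in a BUILD file (target names are made up):

```python
# A java -> scala -> java dependency chain: the java-scala sandwich.
java_library(
    name = "top",
    srcs = ["Top.java"],
    deps = [":middle"],  # java_library depending on a scala_library
)

scala_library(
    name = "middle",
    srcs = ["Middle.scala"],
    deps = [":bottom"],  # scala_library depending on a java_library
)

java_library(
    name = "bottom",
    srcs = ["Bottom.java"],
)
```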
awesome! Is this gerrit change the last one pending? I saw you had several CLs active. |
No, there are also a couple cleaning up various odds and ends, but that is one of the two key ones.
|
Thanks for the update @sdtwigg. |
Just to add another motivation to this issue apart from rule hygiene: I just spent a long time trying to debug a "problem" with rules_scala, only to find that my simple java code didn't compile (it was missing an import). Since intellij doesn't support scala, I use sublime for my rules_scala development, and I can miss trivial things like a missing import. The current error messages are really not helpful and just say an error occurred with javac. |
ok, just hit an actual blocking motivation: strict_scala_deps is unusable for us, since javac fails with the trimmed classpath. |
Hi,
I'm trying to tackle moving from a manual `javac` to `java_common.compile`, as discussed in bazelbuild/bazel#970 (comment). Should we just return two jars? If so, any preference for the names? Do you think anyone counts on it being one jar? We can unzip and jar both of them back together, but it's messy.
Other than that, it looks pretty straightforward to me: just obliterate all javac references, call `java_common.compile` at the end of `_compile` and propagate the providers out.
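For concreteness, a sketch of where the end of `_compile` might land. The parameter names follow the later documented `java_common.compile` API, so the exact signature for 0.4.x-era Bazel is an assumption, and `java_common.merge` is the later helper noted above:

```python
# Sketch: let java_common.compile handle the java sources, then combine the
# result with a provider built from the scalac output jars.
def _compile_java_half(ctx, java_srcs, scala_ijar, scala_jar):
    java_provider = java_common.compile(
        ctx,
        source_files = java_srcs,
        deps = [dep[java_common.provider] for dep in ctx.attr.deps],
        output = ctx.new_file(ctx.label.name + "_java.jar"),  # ctx.actions.declare_file in newer Bazel
        java_toolchain = ctx.attr._java_toolchain,
        host_javabase = ctx.attr._host_javabase,
    )
    scala_provider = java_common.create_provider(
        compile_time_jars = [scala_ijar],
        runtime_jars = [scala_jar],
    )
    # Answering the "two jars" question: either merge the providers and let
    # dependents see both jars, or concatenate the jars first with the
    # worker/singlejar approach discussed earlier in the thread.
    return java_common.merge([java_provider, scala_provider])
```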