resolve race condition with jar manifest manipulation #1150
Conversation
```java
task.getOptionalProductIds().set(ext.getOptionalProductDependencies());
task.getIgnoredProductIds().set(ext.getIgnoredProductDependencies());
task.getInRepoProductIds().set(provider);
task.dependsOn(configProvider);
```
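For context on that last line: dependsOn can take a Configuration (or a provider of one) because Configuration implements Buildable, so depending on it schedules every task that produces the configuration's artifacts, including the jar tasks of project dependencies. A minimal sketch of the pattern, with hypothetical task and configuration names (not the actual sls-packaging wiring):

```java
// Inside a Plugin<Project>.apply(Project project); names are hypothetical.
NamedDomainObjectProvider<Configuration> configProvider =
        project.getConfigurations().named("productDependencies");

project.getTasks().named("resolveProductDependencies").configure(task ->
        // Configuration implements Buildable, so this schedules the tasks
        // that produce its artifacts (e.g. sibling projects' jar tasks)
        // before this task runs.
        task.dependsOn(configProvider));
```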
Based on reviews from @CRogers, I had thought that the Provider would automatically set task dependencies. Is that not the case?
Good question. A property/provider that is the output of task A will create a dependency if it is set to an input property of task B. However, because this provider is not associated with a task, I do not know if the same logic applies. I can say the behavior was different once I added the explicit dependsOn statement - the upstream jar tasks were run.
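A minimal, self-contained sketch of that distinction (hypothetical task types, not sls-packaging code): a provider that comes from a task's output property carries the task dependency when wired into another task's input, while a detached provider does not.

```java
import java.io.IOException;
import java.nio.file.Files;
import org.gradle.api.DefaultTask;
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.file.RegularFileProperty;
import org.gradle.api.provider.Property;
import org.gradle.api.tasks.Input;
import org.gradle.api.tasks.InputFile;
import org.gradle.api.tasks.OutputFile;
import org.gradle.api.tasks.TaskAction;
import org.gradle.api.tasks.TaskProvider;

public class ProviderWiringExamplePlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        TaskProvider<ProducerTask> producer = project.getTasks()
                .register("producer", ProducerTask.class, t -> t.getOutputFile()
                        .set(project.getLayout().getBuildDirectory().file("out.txt")));

        project.getTasks().register("consumer", ConsumerTask.class, t -> {
            // Attached to producer's output property, so this set() also
            // implies consumer.dependsOn(producer).
            t.getInput().set(producer.flatMap(ProducerTask::getOutputFile));

            // Not attached to any task: Gradle only sees the value, so no
            // task dependency is inferred from this provider.
            t.getLabel().set(project.provider(() -> "detached value"));
        });
    }

    public abstract static class ProducerTask extends DefaultTask {
        @OutputFile
        public abstract RegularFileProperty getOutputFile();

        @TaskAction
        void run() throws IOException {
            Files.writeString(getOutputFile().get().getAsFile().toPath(), "ok");
        }
    }

    public abstract static class ConsumerTask extends DefaultTask {
        @InputFile
        public abstract RegularFileProperty getInput();

        @Input
        public abstract Property<String> getLabel();

        @TaskAction
        void run() {}
    }
}
```

If the in-repo provider here is built with project.provider(...) rather than from a task output, it falls in the second category, which would explain why the explicit dependsOn changed the behavior.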
That might be a stupid question, but what does it actually mean to depend on a provider? The provider will be evaluated before the task runs? How is that different from evaluating the provider passed through as a task input?
Actually, the right answer to this may be that the Configuration is not listed as an @Input property of the task. See:
Lines 153 to 164 in 856fba2
```java
/**
 * Contents of the given configuration. Cannot list the configuration itself as an input property because the
 * caching calculations attempt to resolve the configuration at config time. This can lead to an error for
 * configs that cannot be resolved directly at that stage. Caching and up-to-date calcs thus use this property.
 */
@Input
final Set<String> getProductDependenciesConfig() {
    return productDependenciesConfig.get().getIncoming().getResolutionResult().getAllComponents().stream()
            .map(ResolvedComponentResult::getId)
            .map(ComponentIdentifier::getDisplayName)
            .collect(Collectors.toSet());
}
```
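For contrast, a sketch of the shape this javadoc warns against (hypothetical accessor name):

```java
// Anti-pattern sketch: exposing the Configuration itself as an input. Per
// the javadoc above, fingerprinting this input makes the caching
// calculations resolve the configuration at config time, which fails for
// configurations that cannot be resolved directly at that stage.
@Input
Configuration getProductDependenciesConfigDirect() {
    return productDependenciesConfig.get();
}
```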
While I didn't set things up this way here (it predated me), I have encountered this issue before in similar situations. I do not know if it is still a problem or if newer versions of Gradle have some way to get around it.
More data: the task also has this method that tries to enforce the dependencies between the tasks:
Lines 106 to 130 in 856fba2
```java
private Provider<FileCollection> otherProjectProductDependenciesTasks() {
    return productDependenciesConfig.map(productDeps -> {
        // Using a ConfigurableFileCollection simply because it implements Buildable and provides a convenient API
        // to wire up task dependencies to it in a lazy way.
        ConfigurableFileCollection emptyFileCollection = getProject().files();
        productDeps.getIncoming().getArtifacts().getArtifacts().stream()
                .flatMap(artifact -> {
                    ComponentIdentifier id = artifact.getId().getComponentIdentifier();
                    // Depend on the ConfigureProductDependenciesTask, if it exists, which will wire up the jar
                    // manifest with recommended product dependencies.
                    if (id instanceof ProjectComponentIdentifier) {
                        Project dependencyProject = getProject()
                                .getRootProject()
                                .project(((ProjectComponentIdentifier) id).getProjectPath());
                        return Stream.of(
                                dependencyProject.getTasks().withType(ConfigureProductDependenciesTask.class));
                    }
                    return Stream.empty();
                })
                .forEach(emptyFileCollection::builtBy);
        return emptyFileCollection;
    });
}
```
This was added with this commit: 3649f1b
I didn't change the functionality of this method when I extracted the task, but it clearly is not doing what we hoped. I have a Gradle build scan where configureProductDependencies starts after resolveProductDependencies.
Just writing notes to myself: this commit was to try to avoid building jars - b855a77#diff-ffb538d33fb6230122fe54f923c3b269363d3fd26e7f1f38c58b9750aacf9e82R136-R143
It was about a month before the one that added the otherProjectProductDependenciesTasks method above. The problem with all of these is that they assume they have complete knowledge of what could write dependency info into the jar manifest (i.e. the ConfigureProductDependenciesTask). Though I still don't know why I have seen the task ordering difference noted above.
Discussed offline and we'll move forward with this for now. If the overhead of compiling jars becomes too bad, we can look into other options.
```diff
@@ -185,58 +185,6 @@ class CreateManifestIntegrationSpec extends GradleIntegrationSpec {
         file('bar-server/product-dependencies.lock').text.contains 'com.palantir.group:foo-service ($projectVersion, 1.x.x)'
     }
 
-    def "createManifest does not force compilation of sibling projects"() {
```
Agree with Dan. Is there another solution? Maybe using variants?
I had suggested using variants and artifact transforms offline. I think a solution similar to what we do for diagnostics could work here.
The only issue I see is that it is more challenging to get access to a manifest than to a file in the resource directory. I think we could work around this, though, by producing both an entry in the JAR's manifest (for backwards compatibility) and a file in the resources directory that contains the pdeps of a given JAR. We could then cleanly re-use the existing transforms (https://github.com/palantir/sls-packaging/blob/develop/gradle-sls-packaging/src/main/java/com/palantir/gradle/dist/service/DiagnosticsManifestPlugin.java#L165-L233) and leverage regular Gradle cache validation.
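A rough sketch of the transform half of that idea, assuming (hypothetically) that each jar also carries its pdeps as a resource at META-INF/product-dependencies.json; the reusable wiring referenced above lives in DiagnosticsManifestPlugin:

```java
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import org.gradle.api.artifacts.transform.InputArtifact;
import org.gradle.api.artifacts.transform.TransformAction;
import org.gradle.api.artifacts.transform.TransformOutputs;
import org.gradle.api.artifacts.transform.TransformParameters;
import org.gradle.api.file.FileSystemLocation;
import org.gradle.api.provider.Provider;

/** Pulls the (hypothetical) pdeps resource out of each incoming jar so
 *  consumers can read it without opening the jar's manifest. */
public abstract class ExtractProductDependencies
        implements TransformAction<TransformParameters.None> {

    @InputArtifact
    public abstract Provider<FileSystemLocation> getInputArtifact();

    @Override
    public void transform(TransformOutputs outputs) {
        File jar = getInputArtifact().get().getAsFile();
        try (ZipFile zip = new ZipFile(jar)) {
            ZipEntry entry = zip.getEntry("META-INF/product-dependencies.json");
            if (entry == null) {
                return; // this jar declares no product dependencies
            }
            File out = outputs.file(jar.getName() + ".pdeps.json");
            try (InputStream in = zip.getInputStream(entry)) {
                Files.copy(in, out.toPath(), StandardCopyOption.REPLACE_EXISTING);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```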
I believe that writing the information to resource files is the way to go (it was the original design for the additional Conjure information which is now being written), though that won't necessarily clear all of this up, especially if we are also writing to the META-INF file (because we still have the existing problem).
However, there may be an optimization (a.k.a. hack) we can apply to in-project dependencies in lock files. We do not need to actually compute the minimum version for those when writing lock files - we always write the template variable '$projectVersion'. So if we think of updating locks separately, then we may be able to avoid compiling and writing jars. I'll ponder a bit more.
I like Felipe's idea of also producing a pdeps artifact in addition to writing it in the JAR manifest, as this would potentially simplify configuration.

> though that won't necessarily clear all of this up, especially if we are also writing to the META-INF file (because we still have the existing problem).

Wouldn't project dependencies use the same version of sls-packaging, though? Then we could rely on them using the new code path and producing an artifact containing the pdeps that does not require compiling the source.

> We do not need to actually compute the minimum version for such when writing lock files - we always write the template variable '$projectVersion'

Maybe I misunderstand, but I think we need to know the concrete minimum version for writing the "product-dependencies.lock" file (ref)?
In-project dependencies are always changed to the template (ref), unless there is some comparison I am missing?
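For reference, the in-project lock entry carries the template rather than a resolved version; the integration test quoted earlier asserts exactly this shape:

```
com.palantir.group:foo-service ($projectVersion, 1.x.x)
```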
I think we need to revert this. After this PR, excavator checks will not open PRs if the resulting code does not compile.
Here's an example of a dependency upgrade excavator run on our internal server repo that failed because compileJava was run as part of --write-locks. This run did not open a PR:
https://e.p.b/api/coordinator/checkoutput/c2vrbesi5esaano38i40.stderr
Follow-up: #1154 removes the change to depend directly on configurations.
The minimum required change is for the task to dependsOn the configuration. The rest of the changes simplify the jar-scanning logic, since we can now assume all jars are present regardless of whether the dependency is in the same project or external.
Before this PR
The ResolveProductDependenciesTask can run before or during tasks that modify the contents of the Manifest produced by the Jar task of dependent projects. This can lead to ConcurrentModificationExceptions. This was a potential problem even when the logic was in the CreateManifest task; it is unclear why the problem did not manifest then.
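The generic failure mode, reduced to plain JDK types (an illustration of the race's effect, not the actual sls-packaging code path):

```java
import java.util.jar.Attributes;
import java.util.jar.Manifest;

public class ManifestRaceIllustration {
    public static void main(String[] args) {
        Manifest manifest = new Manifest();
        Attributes attrs = manifest.getMainAttributes();
        attrs.putValue("Manifest-Version", "1.0");
        attrs.putValue("Product-Dependencies", "[]");

        // One task scanning the manifest while another rewrites it is
        // equivalent to mutating during iteration: the fail-fast iterator
        // throws ConcurrentModificationException on its next access.
        for (Object key : attrs.keySet()) {
            System.out.println(key);
            attrs.putValue("Added-During-Iteration", "value");
        }
    }
}
```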
After this PR
==COMMIT_MSG==
resolve race condition with jar manifest manipulation
==COMMIT_MSG==
Possible downsides?
Adding the dependency on the configuration does mean that incoming dependencies will be resolved (i.e. the jars built). That is true regardless of whether the jar-scanning logic is changed. This is normally not an issue, since createManifest is generally triggered by the dist target and everything needs to be built anyway. It is different when only --write-locks is run and there are no recommended product dependencies. We may want to find an optimization for that case if it becomes significant.