
[main] Upgrade to latest dependencies #817

Conversation

@knative-automation (Contributor) commented Apr 19, 2021

Cron. -knative-prow-robot

/cc knative-sandbox/channel-wg-leads
/assign knative-sandbox/channel-wg-leads

Produced via:

./hack/update-deps.sh --upgrade --release v0.23
./hack/update-codegen.sh

Details:

bumping knative.dev/eventing e745733...97c0d30:
  > 97c0d30 upgrade to latest dependencies (#5263)
bumping knative.dev/pkg b51994e...b80a192:
  > b80a192 Improve state reporting in kube checks (#2082)

@knative-automation knative-automation requested review from a team as code owners April 19, 2021 02:00
@knative-automation knative-automation requested a review from a user April 19, 2021 02:00
@google-cla google-cla bot added the cla: yes Indicates the PR's author has signed the CLA. label Apr 19, 2021
@knative-prow-robot knative-prow-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Apr 19, 2021
codecov bot commented Apr 19, 2021

Codecov Report

Merging #817 (8990415) into main (596839b) will not change coverage.
The diff coverage is n/a.


@@            Coverage Diff            @@
##               main     #817   +/-   ##
=========================================
  Coverage     76.53%   76.53%           
  Complexity      439      439           
=========================================
  Files            77       77           
  Lines          2924     2924           
  Branches        133      133           
=========================================
  Hits           2238     2238           
  Misses          535      535           
  Partials        151      151           
Flag Coverage Δ Complexity Δ
java-unittests 80.20% <ø> (ø) 439.00 <ø> (ø)

Flags with carried forward coverage won't be shown.


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 596839b...8990415.

bumping knative.dev/eventing e745733...97c0d30:
  > 97c0d30 upgrade to latest dependencies (#5263)
bumping knative.dev/pkg b51994e...b80a192:
  > b80a192 Improve state reporting in kube checks (#2082)

Signed-off-by: Knative Automation <automation@knative.team>
@knative-automation knative-automation force-pushed the auto-updates/update-deps-main branch from 01219e8 to 8990415 on April 19, 2021 13:44
@knative-prow-robot knative-prow-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Apr 19, 2021
@knative-prow-robot knative-prow-robot added the lgtm Indicates that a PR is ready to be merged. label Apr 19, 2021
@knative-prow-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pierDipi

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@knative-prow-robot knative-prow-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 19, 2021
@knative-prow-robot knative-prow-robot merged commit f580898 into knative-extensions:main Apr 19, 2021
Cali0707 pushed a commit to Cali0707/eventing-kafka-broker that referenced this pull request Sep 14, 2023
…ions#817)

* Cache init offsets results

When there is high load and multiple consumer group schedule calls,
we get many `dial tcp 10.130.4.8:9092: i/o timeout` errors when
trying to connect to Kafka.
This leads to increased "time to readiness" for consumer groups.

The downside of caching is that, if partitions increase
while the result is cached, we won't initialize the
offsets of the new partitions.

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>

* Add autoscaler leader log patch

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>

---------

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
Co-authored-by: Pierangelo Di Pilato <pierdipi@redhat.com>
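
A minimal sketch of the caching idea described in this commit message: an expiring, per-consumer-group cache of successful offset-initialization results, so repeated schedule calls under load do not re-dial Kafka every time. The names here (`OffsetInitializer`, `InitOffsets`, the TTL) are illustrative assumptions, not the actual eventing-kafka-broker APIs.

```go
// Sketch only: cache successful offset-initialization results per consumer group.
package offsets

import (
	"sync"
	"time"
)

// OffsetInitializer stands in for the real call that dials Kafka and commits
// the initial offsets for a consumer group.
type OffsetInitializer func(groupID string) error

type cachingInitializer struct {
	mu      sync.Mutex
	expires map[string]time.Time
	ttl     time.Duration
	init    OffsetInitializer
}

func NewCachingInitializer(init OffsetInitializer, ttl time.Duration) *cachingInitializer {
	return &cachingInitializer{
		expires: make(map[string]time.Time),
		ttl:     ttl,
		init:    init,
	}
}

// InitOffsets skips the Kafka round trip if the group was initialized recently.
func (c *cachingInitializer) InitOffsets(groupID string) error {
	c.mu.Lock()
	exp, ok := c.expires[groupID]
	c.mu.Unlock()
	if ok && time.Now().Before(exp) {
		// Cached success; new partitions won't be initialized until the entry expires.
		return nil
	}

	if err := c.init(groupID); err != nil {
		return err // cache only successful results
	}

	c.mu.Lock()
	c.expires[groupID] = time.Now().Add(c.ttl)
	c.mu.Unlock()
	return nil
}
```

The trade-off matches the note in the commit message: while an entry is cached, offsets for newly added partitions are not initialized until the entry expires.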
pierDipi added a commit to pierDipi/eventing-kafka-broker that referenced this pull request Sep 20, 2023
…ions#817)

* Cache init offsets results

When there is high load and multiple consumer group schedule calls,
we get many `dial tcp 10.130.4.8:9092: i/o timeout` errors when
trying to connect to Kafka.
This leads to increased "time to readiness" for consumer groups.

The downside of caching is that, if partitions increase
while the result is cached, we won't initialize the
offsets of the new partitions.

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>

* Add autoscaler leader log patch

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>

---------

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
Co-authored-by: Pierangelo Di Pilato <pierdipi@redhat.com>
pierDipi pushed a commit to pierDipi/eventing-kafka-broker that referenced this pull request Sep 20, 2023
When there is high load and multiple consumer group schedule calls,
we get many `dial tcp 10.130.4.8:9092: i/o timeout` errors when
trying to connect to Kafka.
This leads to increased "time to readiness" for consumer groups.

The downside of caching is that, if partitions increase
while the result is cached, we won't initialize the
offsets of the new partitions.

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
pierDipi pushed a commit to pierDipi/eventing-kafka-broker that referenced this pull request Sep 20, 2023
When there is high load and multiple consumer group schedule calls,
we get many `dial tcp 10.130.4.8:9092: i/o timeout` errors when
trying to connect to Kafka.
This leads to increased "time to readiness" for consumer groups.

The downside of caching is that, if partitions increase
while the result is cached, we won't initialize the
offsets of the new partitions.

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
pierDipi pushed a commit to pierDipi/eventing-kafka-broker that referenced this pull request Sep 20, 2023
When there is high load and multiple consumer group schedule calls,
we get many `dial tcp 10.130.4.8:9092: i/o timeout` errors when
trying to connect to Kafka.
This leads to increased "time to readiness" for consumer groups.

The downside of caching is that, if partitions increase
while the result is cached, we won't initialize the
offsets of the new partitions.

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
pierDipi added a commit to pierDipi/eventing-kafka-broker that referenced this pull request Sep 20, 2023
When there is high load and multiple consumer group schedule calls,
we get many `dial tcp 10.130.4.8:9092: i/o timeout` errors when
trying to connect to Kafka.
This leads to increased "time to readiness" for consumer groups.

The downside of caching is that, if partitions increase
while the result is cached, we won't initialize the
offsets of the new partitions.

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
knative-prow bot pushed a commit that referenced this pull request Sep 26, 2023
When there is high load and multiple consumer group schedule calls,
we get many `dial tcp 10.130.4.8:9092: i/o timeout` errors when
trying to connect to Kafka.
This leads to increased "time to readiness" for consumer groups.

The downside of caching is that, if partitions increase
while the result is cached, we won't initialize the
offsets of the new partitions.

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
Cali0707 added a commit to Cali0707/eventing-kafka-broker that referenced this pull request Sep 26, 2023
* Make SecretSpec field of consumers Auth omitempty (knative-extensions#780)

* Expose init offset and schedule metrics for ConsumerGroup reconciler (knative-extensions#790) (knative-extensions#791)

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>

* Fix channel finalizer logic (knative-extensions#3295) (knative-extensions#795)

Signed-off-by: Calum Murray <cmurray@redhat.com>
Co-authored-by: Calum Murray <cmurray@redhat.com>
Co-authored-by: Pierangelo Di Pilato <pierdipi@redhat.com>

* [release-v1.10] SRVKE-958: Cache init offsets results (knative-extensions#817)

* Cache init offsets results

When there is high load and multiple consumer group schedule calls,
we get many `dial tcp 10.130.4.8:9092: i/o timeout` errors when
trying to connect to Kafka.
This leads to increased "time to readiness" for consumer groups.

The downside of caching is that, if partitions increase
while the result is cached, we won't initialize the
offsets of the new partitions.

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>

* Add autoscaler leader log patch

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>

---------

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
Co-authored-by: Pierangelo Di Pilato <pierdipi@redhat.com>

* Scheduler handle overcommitted pods (knative-extensions#820)

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
Co-authored-by: Pierangelo Di Pilato <pierdipi@redhat.com>

* Set consumer and consumergroups finalizers when creating them (knative-extensions#823)

It is possible that a deleted consumer or consumergroup might
be reconciled and never finalized when it is deleted before
the finalizer is set.
This happens because the Knative-generated reconciler uses
patch (as opposed to update) for setting the finalizer,
and patch doesn't have any optimistic concurrency controls.

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
Co-authored-by: Pierangelo Di Pilato <pierdipi@redhat.com>
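
A minimal sketch of the pattern this item describes: include the finalizer in the object's metadata at creation time, rather than relying on the generated reconciler to patch it in later. The `Consumer` type and finalizer name below are placeholders for illustration, not the project's actual types.

```go
// Sketch only: set the finalizer when the object is first created, so the race
// (deletion before the reconciler's finalizer patch lands) cannot drop it.
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// consumerFinalizer is an illustrative finalizer name.
const consumerFinalizer = "consumers.kafka.eventing.knative.dev"

// Consumer stands in for the real resource type; only the metadata matters here.
type Consumer struct {
	metav1.ObjectMeta
}

func newConsumer(namespace, name string) *Consumer {
	return &Consumer{
		ObjectMeta: metav1.ObjectMeta{
			Namespace: namespace,
			Name:      name,
			// Set at creation instead of relying on a later patch, which has no
			// optimistic concurrency protection against a concurrent delete.
			Finalizers: []string{consumerFinalizer},
		},
	}
}
```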

* Clean up reserved from resources that have been scheduled (knative-extensions#830)

In a recent testing run, we noticed a scheduled
`ConsumerGroup` [1] (see placements) being considered as
having reserved replicas in a different pod [2].

That makes the scheduler think there is no space, even
though the autoscaler says we have enough space to hold
every virtual replica.

[1]
```
$ k describe consumergroups -n ks-multi-ksvc-0 c9ee3490-5b4b-4d11-87af-8cb2219d9fe3
Name:         c9ee3490-5b4b-4d11-87af-8cb2219d9fe3
Namespace:    ks-multi-ksvc-0
...
Status:
  Conditions:
    Last Transition Time:  2023-09-06T19:58:27Z
    Reason:                Autoscaler is disabled
    Status:                True
    Type:                  Autoscaler
    Last Transition Time:  2023-09-06T21:41:13Z
    Status:                True
    Type:                  Consumers
    Last Transition Time:  2023-09-06T19:58:27Z
    Status:                True
    Type:                  ConsumersScheduled
    Last Transition Time:  2023-09-06T21:41:13Z
    Status:                True
    Type:                  Ready
  Observed Generation:     1
  Placements:
    Pod Name:      kafka-source-dispatcher-6
    Vreplicas:     4
    Pod Name:      kafka-source-dispatcher-7
    Vreplicas:     4
  Replicas:        8
  Subscriber Uri:  http://receiver5-2.ks-multi-ksvc-0.svc.cluster.local
Events:            <none>
```

[2]
```
    "ks-multi-ksvc-0/c9ee3490-5b4b-4d11-87af-8cb2219d9fe3": {
      "kafka-source-dispatcher-3": 8
    },
```

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
Co-authored-by: Pierangelo Di Pilato <pierdipi@redhat.com>
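
To illustrate the cleanup this item describes, a hedged sketch: drop "reserved" bookkeeping for any resource that already has committed placements, so a stale reservation on another pod cannot convince the scheduler that capacity is exhausted. The map shapes and the `cleanupReserved` helper are assumptions for illustration, not the scheduler's real data structures.

```go
// Sketch only: reserved tracks resource key -> pod name -> reserved vreplicas,
// while scheduled holds the committed placements per resource key.
package example

type reserved map[string]map[string]int32

type placements map[string]int32

// cleanupReserved drops reservations for resources that already have placements,
// so stale reserved entries on other pods don't count against free capacity.
func cleanupReserved(r reserved, scheduled map[string]placements) {
	for key := range r {
		if _, ok := scheduled[key]; ok {
			delete(r, key)
		}
	}
}
```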

* Ignore unknown fields in data plane contract (knative-extensions#3335) (knative-extensions#828)

Signed-off-by: Calum Murray <cmurray@redhat.com>

---------

Signed-off-by: Pierangelo Di Pilato <pierdipi@redhat.com>
Signed-off-by: Calum Murray <cmurray@redhat.com>
Co-authored-by: Martin Gencur <mgencur@redhat.com>
Co-authored-by: Matthias Wessendorf <mwessend@redhat.com>
Co-authored-by: Calum Murray <cmurray@redhat.com>
Co-authored-by: OpenShift Cherrypick Robot <openshift-cherrypick-robot@redhat.com>
Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
cla: yes: Indicates the PR's author has signed the CLA.
lgtm: Indicates that a PR is ready to be merged.
size/L: Denotes a PR that changes 100-499 lines, ignoring generated files.
3 participants