
[8.11] [Fleet] Fix inability to upgrade agents from 8.10.4 -> 8.11 (#170974) #171039

Merged 6 commits on Nov 12, 2023

Commits on Nov 10, 2023

  1. [Fleet] Fix inability to upgrade agents from 8.10.4 -> 8.11 (elastic#170974)
    
    ## Summary
    
    Closes elastic#169825
    
    This PR adds logic to Fleet's `/api/agents/available_versions` endpoint
    that periodically fetches from the live product versions API at
    https://www.elastic.co/api/product_versions, so the list of available
    agent versions is eventually consistent.
    
    Currently, Kibana relies entirely on a static file generated at build
    time from the above API. If the API isn't up to date with the latest
    agent version (e.g. Kibana completed its build before Agent did), then
    that build of Kibana will never "see" the corresponding build of Agent.
    
    The endpoint's response is cached for two hours to prevent overfetching
    from this external API and to avoid constantly going out to disk to read
    the agent versions file.
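    
    As a rough illustration, here is a minimal TypeScript sketch of that
    fetch-and-cache flow. It is not the actual Fleet implementation:
    `readStaticVersionsFile`, the file path, and the response shape of the
    product versions API are assumptions made for the example.
    
    ```ts
    import { promises as fs } from 'fs';
    
    const PRODUCT_VERSIONS_URL = 'https://www.elastic.co/api/product_versions';
    const CACHE_TTL_MS = 2 * 60 * 60 * 1000; // two hours, per the description above
    
    let cachedVersions: string[] | undefined;
    let cachedAt = 0;
    
    // Hypothetical reader for the static file generated at Kibana build time.
    async function readStaticVersionsFile(): Promise<string[]> {
      const raw = await fs.readFile('agent_versions.json', 'utf8');
      return JSON.parse(raw);
    }
    
    export async function getAvailableVersions(): Promise<string[]> {
      const now = Date.now();
      if (cachedVersions && now - cachedAt < CACHE_TTL_MS) {
        // Serve from cache to avoid overfetching the external API and
        // re-reading the versions file from disk on every request.
        return cachedVersions;
      }
    
      const staticVersions = await readStaticVersionsFile();
    
      let liveVersions: string[] = [];
      try {
        // Reconcile with the live API so agent builds released after this
        // Kibana build eventually show up in the list.
        const res = await fetch(PRODUCT_VERSIONS_URL);
        // Assumed shape: an array of product groups, each an array of
        // objects carrying `title` and `version_number`.
        const groups: Array<Array<{ title: string; version_number: string }>> =
          await res.json();
        liveVersions = groups
          .flat()
          .filter((p) => p.title.includes('Elastic Agent'))
          .map((p) => p.version_number);
      } catch {
        // Offline/airgapped: fall back to the static list instead of failing.
      }
    
      cachedVersions = Array.from(new Set([...staticVersions, ...liveVersions]));
      cachedAt = now;
      return cachedVersions;
    }
    ```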
    
    ## To do
    - [x] Update unit tests
    - [x] Consider airgapped environments
    
    ## On airgapped environments
    
    In airgapped environments, we're going to try to fetch from the
    `product_versions` API and that request is going to fail. What we've
    seen happen in some environments is that these requests don't "fail
    fast" and instead hang until a network timeout is reached.
    
    I'd love to avoid that timeout case by somehow detecting airgapped
    environments and skipping this API call entirely. However, we don't
    have a great deterministic way to know whether someone is in an
    airgapped environment. The best guess I think we can make is to check
    whether `xpack.fleet.registryUrl` is set to something other than
    `https://epr.elastic.co` (sketched below). Curious if anyone has
    thoughts on this.
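    
    As a hedged sketch (not shipped behavior), that check might look like
    the following; `FleetConfig` and `isLikelyAirgapped` are hypothetical
    names:
    
    ```ts
    const DEFAULT_REGISTRY_URL = 'https://epr.elastic.co';
    
    interface FleetConfig {
      // Mirrors the xpack.fleet.registryUrl kibana.yml setting.
      registryUrl?: string;
    }
    
    function isLikelyAirgapped(config: FleetConfig): boolean {
      // A non-default registry URL is the best available (but imperfect)
      // signal that outbound requests to elastic.co would hang to timeout.
      return (
        config.registryUrl !== undefined &&
        config.registryUrl !== DEFAULT_REGISTRY_URL
      );
    }
    
    // Usage: skip the live product_versions fetch entirely when airgapped,
    // and rely on the static build-time file instead.
    ```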
    
    ## Screenshots
    
    ![image](https://github.com/elastic/kibana/assets/6766512/0906817c-0098-4b67-8791-d06730f450f6)
    
    ![image](https://github.com/elastic/kibana/assets/6766512/59e7c132-f568-470f-b48d-53761ddc2fde)
    
    ![image](https://github.com/elastic/kibana/assets/6766512/986372df-a90f-48c3-ae24-c3012e8f7730)
    
    ## To test
    
    1. Set up Fleet Server + ES + Kibana
    2. Spin up a Fleet Server running Agent v8.11.0
    3. Enroll an agent running v8.10.4 (I used multipass)
    4. Verify the agent can be upgraded from the UI (an API-level spot check is sketched below)
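    
    For the API-level spot check mentioned in step 4, something like the
    following could work; the host, the credentials, and the
    `{ items: string[] }` response shape are assumptions about a default
    local setup:
    
    ```ts
    async function main() {
      const auth = Buffer.from('elastic:changeme').toString('base64');
      const res = await fetch(
        'http://localhost:5601/api/agents/available_versions',
        { headers: { Authorization: `Basic ${auth}` } }
      );
      const body: { items?: string[] } = await res.json();
      console.log(
        body.items?.includes('8.11.0') ? '8.11.0 is available' : '8.11.0 missing'
      );
    }
    
    main().catch(console.error);
    ```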
    
    ---------
    
    Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
    (cherry picked from commit cd909f0)
    
    # Conflicts:
    #	x-pack/plugins/fleet/server/services/agents/versions.ts
    kpollich committed Nov 10, 2023 (commit 4876547)
  2. Commit 3959e79
  3. Commit edd4e96

Commits on Nov 11, 2023

  1. Commit d191979
  2. Commit cf0e919
  3. Commit a2a84d1