Create handler for app repositories backend API. #1397
Conversation
LGTM 👍
	return &appRepositoriesHandler{
		config:             *config,
		kubeappsNamespace:  kubeappsNamespace,
		clientsetForConfig: clientsetForConfig,
I am not sure why this function is within the appRepositoriesHandler struct? As far as I can see it could be a normal function (it doesn't depend on the struct properties).
[edit] Got it, you need this because of the tests, to be able to override it. (A comment here explaining it would be nice?)
I added a `// See comment in the struct defn above` as there's a 5-liner explanation there :)
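For anyone following along, here is a minimal sketch of the pattern under discussion, with illustrative names rather than the exact code in this PR: the clientset factory is kept as a struct field so that only internal tests can replace it with one returning a fake.

```go
package apprepo

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

type appRepositoriesHandler struct {
	config            rest.Config
	kubeappsNamespace string
	// Kept as a struct field (rather than calling kubernetes.NewForConfig
	// directly) so that tests can override it with a fake clientset.
	clientsetForConfig func(*rest.Config) (kubernetes.Interface, error)
}

// realClientsetForConfig is the production factory.
func realClientsetForConfig(config *rest.Config) (kubernetes.Interface, error) {
	return kubernetes.NewForConfig(config)
}

func newAppRepositoriesHandler(config *rest.Config, kubeappsNamespace string) *appRepositoriesHandler {
	return &appRepositoriesHandler{
		config:             *config,
		kubeappsNamespace:  kubeappsNamespace,
		clientsetForConfig: realClientsetForConfig,
	}
}
```

A test can then set the handler's `clientsetForConfig` to a function returning `fake.NewSimpleClientset(...)` from `k8s.io/client-go/kubernetes/fake`, which is what makes the fake-clientset testing mentioned in the description below possible.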
		return
	}
	// TODO(mnelson): validate both required data and request for index
	// https://github.com/kubeapps/kubeapps/issues/1330
it's good that we have a place now to validate this data 👍
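As a rough illustration of the kind of check that TODO could grow into (the field names here are hypothetical; the real request shape and rules are tracked in the linked issue), the required-data part might look something like:

```go
package apprepo

import "fmt"

// appRepositoryRequest is a hypothetical request body used only to
// illustrate the kind of required-data validation the TODO refers to.
type appRepositoryRequest struct {
	Name    string `json:"name"`
	RepoURL string `json:"repoURL"`
}

func validateAppRepositoryRequest(r appRepositoryRequest) error {
	if r.Name == "" {
		return fmt.Errorf("name is required")
	}
	if r.RepoURL == "" {
		return fmt.Errorf("repoURL is required")
	}
	return nil
}
```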
Description of the change
Fills in the stub implementation of the `Create` handler for the app repository backend API.

Note: this handler is similar in nature to the helm3 work in the respect that it picks up the in-cluster k8s config and uses that with the user token to access the API. But note that it's structured so that:

- the `BearerToken` is read from the system and created once, only when the handler is created,
- tests can swap in a `clientsetForConfig` function when testing which just returns the fake. This ensures that external use of this handler will always have the real `clientsetForConfig` function, and only (internal) tests can set a dummy function.

In this sense, it may be useful to @SimonAlling as a way to structure the handler code to be testable (warning: client-go testing with fake clientsets is pretty verbose, but once the parts are in place, pretty easy to work with - the setup is the painful part :/)

Applicable issues
Additional information
I've added a `--kubeapps-namespace` flag which is effectively a feature flag. Without this being set, a response comes back as:

FWIW, I need to investigate more on the use of the in-cluster config, as when I did an IRL test with the feature switched on, authing with a cookie for my `kubeapps-operator` user, I see the following unexpected result:

I don't think this should stop the PR from landing (since it's behind the extra flag which is not set in the chart yet), but tomorrow I plan to find out (a) why it assumes the user accessing is the internal-tiller-proxy service account (i.e. does the in-cluster config have not only the token data, but also the service account info set, which needs to be blanked out), and (b) why the token (or otherwise) is resulting in a 403.
Maybe it'll be obvious during review, but I need to run :)
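On point (a), a sketch of one possible shape for the fix, assuming the cause is that `rest.InClusterConfig()` returns a config already carrying the pod's service-account credentials (this is an assumption to investigate, not code from this PR):

```go
package apprepo

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// clientsetForUserToken sketches acting as the requesting user rather than
// the pod's service account: the in-cluster config is loaded, its
// service-account credentials are blanked out, and the user's bearer token
// is set instead.
func clientsetForUserToken(userToken string) (kubernetes.Interface, error) {
	config, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	// InClusterConfig fills these from the mounted service-account token;
	// if they are left in place, requests are made as the service account.
	config.BearerToken = userToken
	config.BearerTokenFile = ""
	return kubernetes.NewForConfig(config)
}
```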